
Not really, for simple apps you can use what we already have in place, with some meta-programming rules prepared by humans (currently only a few companies possess this capability, though). You can use ML, like deep learning variants, to learn associations between your wishes and corresponding code blocks. Initially apps like that would be simple, i.e. "make a web page", done, "change background color to pink gradient", done, "place gallery of images from my vacation to the center", done, "show me the photos", "remove this one", "make this one smaller", done, "add this title", "add my paintings underneath", "add a buybox next to my paintings", "add checkout to the top", etc. NLP is there already, but initially you'd need a lot of human "drones" for associating "wishes" with code - doable by companies like Google (they can bake learning into Android Studio) or by scanning GitHub, etc. We already have dumb mobile app generators; I don't see any reason why we wouldn't have what I described within the next 10 years.
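
The "wishes to code blocks" idea could be sketched as a rule-based intent matcher: human "drones" curate associations between phrasings and pre-baked operations, and unmatched wishes go back to humans for labeling. Everything below (the `PageBuilder` class, the regex patterns, the operation names) is invented for illustration, not a real system.

```python
# Hypothetical sketch: map spoken commands to pre-baked code-block operations.
import re

class PageBuilder:
    def __init__(self):
        self.ops = []  # log of applied code-block operations

    def apply(self, command):
        # Each (pattern, operation) pair stands in for one human-curated
        # association between a phrasing and a code block.
        rules = [
            (r"make a web page", "create_page()"),
            (r"change background color to (.+)", "set_background('{0}')"),
            (r"place gallery of images from (.+) to the center",
             "add_gallery(source='{0}', align='center')"),
        ]
        for pattern, op in rules:
            m = re.fullmatch(pattern, command.strip().lower())
            if m:
                self.ops.append(op.format(*m.groups()))
                return True
        return False  # unmatched wish: send to a human "drone" to label

builder = PageBuilder()
builder.apply("make a web page")
builder.apply("change background color to pink gradient")
print(builder.ops)  # → ["create_page()", "set_background('pink gradient')"]
```

A learned NLP front end would replace the regexes with fuzzier matching, but the shape - utterance in, code block out - stays the same.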


I read a story years ago about a guy who changed careers away from programming when Visual Basic was launched. He reasoned that anyone would be able to create applications, so it wouldn't be a viable career anymore.

> You can use ML like deep learning variations to learn association between your wishes and corresponding code blocks

I suggest you read up on ML.


I am currently working on deep learning as well as generating custom programming languages. Maybe you could consider updating yourself? ;-)


Programming touches both comprehension of the code and the world it interacts with. Any program which can write programs based on comprehending natural language ought to be able to rewrite itself. Can you please explain how what you are describing is distinct from AGI?


I view current deep learning as a cumbersome counterpart to retina-level cells (well, from 30,000 ft). Anyway, DL can roughly do what a few specialized biological neurons can do, as in the case of the retina identifying direction, speed, etc. It's far, far away from even the whole brain of an insect, but it can do some amazing things already. What you can do here is utilize the things it can do (and they will keep getting better) and add some human-made inference/annotation/ontology/optimization system on top of these partial components. The human-made system can be chipped away slowly as we figure out how to automate more and more of its functionality.

So, for example, for simple programs you don't need to understand what individual code blocks are doing. All you roughly need are some well-defined procedures/visual components (able to ignore unsupported operations) that can be composed LEGO-style. Here AI can learn to associate certain compositions of code blocks with your sentences; e.g. you can teach it by touch what it means to resize, move left, change color, etc., and even provide those code blocks. To help it, you have to annotate the code blocks so that you maximize the chance of valid outcomes. ML by itself is not capable of inference, so inference must be done differently. Yet what your AI learns by associating certain sentences with outcomes in your code blocks will persist. And for making associations you can unleash millions of developers who might be working on your goal unknowingly, e.g. by creating a safe language like Go for which you have derived nice rules that you can plug into your system. Initially you could do only pretty silly things, but the level of its capabilities will keep rising, and there is a way forward in front of you, even if a bit dim.
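
The "annotated components that ignore unsupported operations" part could look roughly like this - each component declares which operations it supports, so an invalid composition is silently skipped rather than crashing. The class and component names here are hypothetical, made up just to illustrate the annotation idea.

```python
# Illustrative only: "LEGO-style" components annotated with the set of
# operations they support; unsupported operations are ignored, not errors.
class Component:
    def __init__(self, name, supports):
        self.name = name
        self.supports = set(supports)   # annotation: valid operations
        self.props = {}                 # applied operation -> value

    def apply(self, op, value):
        if op not in self.supports:
            return False                # unsupported operation: ignored
        self.props[op] = value
        return True

gallery = Component("gallery", supports={"resize", "move", "set_images"})
title = Component("title", supports={"set_text", "set_color"})

# A learned sentence->operation association would emit calls like these:
gallery.apply("resize", "50%")   # valid, applied
title.apply("resize", "50%")     # not in the annotation, ignored
```

The annotations are exactly the human-made metadata the comment describes: they bound the ML's output space so that most compositions it emits are valid.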


Sounds like something that could be used for customizing a CMS a little bit, but software development is something very different from what you're describing.

Developing software requires understanding of completely open-ended natural language. NLP is nowhere near that level of AI, and I doubt it will be in the next 30 years.


Yes, but when you talk to regular people about their needs for web pages and apps, they are often either very trivial or unbearably complex. You can potentially automate away the very trivial ones with the current state of ML already, and there is a bulk of money there that currently goes to a lot of independent developers and smaller companies. And once you have such a system built, you can extend it as new advances in ML/GPUs come, automating away more and more in the process. Even if you just prepare some vague templates for frequently performed business tasks with limited variations, those can be super helpful.

The point is that only really good SW engineers have any chance to survive; the low-skilled ones will gradually be replaced by automated reasoning.


>Yes, but when you talk to regular people and their needs for web pages, apps, they are often either very trivial or unbearably complex

Mostly they are unbearably vague and based on tons of false assumptions about how things work. Separating the trivial parts of a user request from the unbearably complex parts is itself often unbearably complex. It requires a conversation with the user to make it clear what is simple or what could be done instead to make it simpler.

The examples of trivial user needs that you have given are all within the realm of what we now use WYSIWYG editors for. Not even that works well. The problem is that you can't lay out a page without understanding how the layout interacts with the meaning of the content on the page.

The logic capabilities of current ML systems are terrible. It's like: great, we have learned to sort numbers almost always correctly, unless the numbers are greater than 100,000!

Even in areas where AI has advanced a lot recently, like image recognition, the results are often very poor. I recently uploaded an image I took of a squirrel sitting on a tree branch eating an apple to one of the best AIs (it was Microsoft's winning entry to some ImageNet competition).

It labelled my image "tree, grass", because the squirrel is rather small compared to the rest of the picture. Any child would have known right away why that picture was taken. The tree and the grass were visually dominant but completely unremarkable.


Just imagine that you can interactively, by voice or by touch, tell the AI what to adjust and how, and it will use that to improve itself for your future similar tasks. Now project that there will be 1,000,000 users like that, telling the app what exactly they meant and pointing to the proper places in the app. So this will be exactly the conversation you desire: you'd directly tell your app builder what you want, and if it is not doing what you like, you either show the builder with simple gestures, or rely on some other user having gone through the same problem before you and the app builder tapping into that knowledge. Obviously, first for simpler web or mobile apps. This sounded like sci-fi just a decade ago, but we now have the means to build simple app builders like that.

ML by itself is incapable of inference; hence you need some guiding meta-programming framework that can integrate partial ML results from the submodules you prepare.

As for the squirrel example, it was probably one of the "under threshold" classifications of ResNet, i.e. tree was 95%, grass was 90%, but squirrel was 79%, so it got cut from what was presented back to you. Mind you, this area went from hopeless in 2011 to "better than human in many cases" in 2016. I know there are many low-hanging fruits, and plenty of problems will still be out of reach, but some are getting approachable soon, especially if you have 1M ML-capable machines at your disposal.


>Now project there will be 1,000,000 users like that, telling app what exactly did they mean and pointing to proper places in the app. So exactly this will be the conversation you desire

That's not a conversation; that's a statistic. A conversation might start with a user showing me visually how they want something done. Then I may point out why that's not such a good idea, and I will ask why the user wanted it done that way, so I can come up with an alternative approach to achieve the same goal.

In the course of that conversation we may find that the entire screen is redundant if we redesign the workflow a little bit, which would require some changes to the database schema and other layers of the application. The result could be a simpler, better application instead of a pile of technical debt.

This isn't rocket science. It doesn't take exceptionally talented developers, but it does require understanding the context and purpose of an application.


Sure, but an AI listening to you can be exactly that conversation partner, maybe by utilizing the "General's effect" - i.e. just talking about some topic gets you to a solution, even if the person next to you has no clue and just listens. Here the AI can be that person, and you can immediately see the result of your talk in the form of the changing app you are building, and easily decide something has to be changed. Initially the granularity of your changes will be large, i.e. the pre-baked operations will be simple. Later you can get more and more precise, as the AI develops and more and more people contribute more specialized operations.


Yes, some are very easy to implement; just look at Squarespace's customers. One could build an NLP interface (a bot?) to configure Squarespace sites, and this would take you quite far. Not sure I'd call that AI, though.


It's a good start ;-) To get better you'd then have to enhance your meta-programming abilities as you see new possible cases opening up in front of you. We'll see how far this goes soon.

It's not general AI, but even less-than-general AI can erode our ability to earn money from developing software.


Drag and drop website building is a monumentally easier challenge and hasn't obsoleted programming.


Sure, but very few regular people have the patience even for drag-and-drop website builders. Imagine, though, that you just take your phone and tell it "make a website", then look at it and say "well, change the background to this photo", "hmm, place a gallery from the wedding there", "make a new page linked on the left and call it 'about me'", etc. And you see the changes happening immediately after you say them, and you can even correct them. This is doable today; you just need a set of "code blocks" that would allow you to generate a proper web app, and via engines like PhoneGap even mobile apps anyone can make.

Imagine you run a small business and need just a simple site with your contact details, and you are able to assemble it by voice in 10 minutes. That would be a complete game changer for most regular people.
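
The voice-driven flow above boils down to folding a sequence of recognized commands into a site state that gets re-rendered after each one. This is a toy sketch with invented command prefixes and an invented site representation - the speech recognition and rendering are exactly the parts being hand-waved.

```python
# Hypothetical sketch: each recognized utterance maps to a pre-baked
# "code block" that mutates the site state; the page would be re-rendered
# after every command so the user sees the change immediately.
def build_site(commands):
    site = {"background": None, "pages": ["home"], "galleries": []}
    for cmd in commands:
        cmd = cmd.lower()
        if cmd.startswith("make a website"):
            pass  # site state already initialized above
        elif cmd.startswith("change background to "):
            site["background"] = cmd[len("change background to "):]
        elif cmd.startswith("place a gallery from "):
            site["galleries"].append(cmd[len("place a gallery from "):])
        elif cmd.startswith("make a new page called "):
            site["pages"].append(cmd[len("make a new page called "):])
    return site

site = build_site([
    "make a website",
    "change background to this photo",
    "place a gallery from the wedding",
    "make a new page called about me",
])
print(site["pages"])  # → ['home', 'about me']
```

The hard open problems - recognizing free-form speech, deciding what "this photo" refers to - sit entirely outside this loop, which is why the comment scopes it to simple sites first.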



