Hacker News

Preface: I'm going to use a really loose definition of AI here for a minute, which includes any computer agent that makes decisions a human might make. So, your alarm clock is an AI, the code that rejects your credit application is an AI, but the code that decides how exactly to paint this letter C is not. Now on to my comment:

I think children should learn to think about programs at a young age, because understanding AIs will be as important as understanding other humans in the coming century.

We will see more and more interaction with AIs whose capabilities go beyond what most humans can do. We will start to see AIs with emotional palettes and developmental trajectories that allow them to integrate with us socially. But the little AIs, like the alarm clock and the credit check, are also increasingly common gating structures in people's lives.

As long as the majority doesn't think about programs too much, you can ignore them too. But as more and more people program, not being able to will become more and more of a disability, just like not being able to read. Being able to think "natively" in software is going to be important. Not mandatory, but important.

Why isn't that employment AI you met finding any construction jobs for you? Does the physician AI really understand that the probability of your Dad cutting back his saturated fat is zero? Your gardening AI (which is crossed with several other AIs that help you manage your household) says to plant corn, but your neighbors haven't yet. Does it know something they don't?

If you can read code, those are questions you can ask.



I think you are severely underestimating the difficulty of understanding non-trivial code. Even good programmers would need days, if not weeks, to understand how software like a card-processing AI works. And even when you understand the code behind it, AI like deep networks produces models that cannot tell a human "how" they work.

So when a person uses a software component for real-life purposes, I don't think there's a difference between someone who knows how to code and someone who doesn't - it's a black box.


> Even good programmers would need days if not weeks to understand how software like card processing AI works

I think there will be a shift in how we design software. Right now, software is mostly organized into monolithic packages which are about the maximum size a professional developer can make sense of in 40 hours per week. For the most part, all corners of the codebase are written with the expectation that they will be read by an expert in the language who understands the whole codebase. Lots of side effects, advanced control structures, etc. In order for programming literacy to take off, we will first have to start organizing some of our software for widespread human understanding.

Instead of one giant pro-level repository, you will have a top-level layer which is mostly configuration code, but which is written in very domain-relevant language, using only simple programming primitives: functions and literals. It will be boxed into small, understandable modules, and editable in a browser like any simple document.

The next layer down will be generic algorithms and data structures, also written using simple programming primitives, and designed with very forgiving APIs, again using as much domain-specific language as possible and pushing non-domain specific implementation details down into libraries.

The third layer will be implementation-specific code, highly optimized, using the full spectrum of language tools and programming constructs. Only the domain of professional programmers.

Everyone, including children and executives, will dabble in the top layer. Domain professionals (everyone in your organization who is not a full-time programmer) will work in the second layer, only in the part of the software that they specifically interact with in their work. Full-time coders will maintain the third layer, and will think of their role more as a support role, building tools to support the organization, rather than maintaining total control over the codebase.
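To make the layering concrete, here is a minimal Python sketch. All the names here (the sales example, the function names) are hypothetical, chosen only to show the shape of the three layers:

```python
# Layer 3: implementation-specific code, maintained by full-time programmers.
# (A real codebase would use optimized libraries and advanced constructs here.)
def _moving_average(values, window):
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Layer 2: generic algorithms wrapped in domain-relevant names, with a
# forgiving API, readable by domain professionals who are not coders.
def smooth_daily_sales(daily_sales, days=7):
    return _moving_average(daily_sales, days)

# Layer 1: "configuration code" -- only functions and literals, in domain
# language, safe for anyone in the organization to read and edit.
DAILY_SALES = [120, 90, 230, 180, 160, 210, 140]
SMOOTHING_DAYS = 3

report = smooth_daily_sales(DAILY_SALES, days=SMOOTHING_DAYS)
```

An executive could change `SMOOTHING_DAYS` in the top layer without ever seeing how `_moving_average` is implemented.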


So far there have been multiple approaches that looked something like what you described: component programming, visual programming, "4GL" languages. Heck, even Steve Jobs wanted to sell software components. But these things failed. "Coding" won.


Nothing I described is "not coding". I described using a subset of existing language constructs for certain parts of a codebase, and structuring some interfaces in a certain way.

I certainly didn't say anything about visual programming.

4GL... I think you meant something else there? Ruby and PHP are 4th generation languages and are obviously doing quite well.

As for component programming... I think that's the closest historical precedent for what I'm describing so I'll go deeper there...

Unlike "component people", I don't think there is any single interface for high level domain-specific libraries. Every domain will be different. Objects are certainly not a panacea. And I have no illusions that domain-specific libraries would ever be automatically compatible with one another. I have no illusions of some universe of easily integrated components. I'm just talking about an isolated, high-level API on top of a single, internally consistent codebase.

A simple example, instead of a repo with a bunch of configuration data and a "start" command, build a service as a library without any deployment specifics, and then have a separate repo that has a simple script that uses that library to set up a specific instance.
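As a sketch of that split (everything here is hypothetical, just to illustrate the two-repo shape): the "library" repo exposes the service with no deployment specifics, and the "instance" repo is a few configuration-style lines anyone could read:

```python
# --- library repo: service logic only, no deployment specifics ---
class GreetingService:
    """A toy service; a real one would wire in databases, handlers, etc."""
    def __init__(self, language="en", port=8080):
        self.language = language
        self.port = port

    def greet(self, name):
        templates = {"en": "Hello, {}!", "es": "Hola, {}!"}
        return templates[self.language].format(name)

# --- instance repo: a simple script that sets up one specific instance ---
service = GreetingService(language="es", port=9000)
print(service.greet("world"))
```

The instance script is the only thing a non-engineer ever needs to touch.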

Or, build a site as a library without any content, and then have a separate repo that just binds in the content to the app, so that anyone could play around with the content without having to dig through the implementation details, and with a smaller chance of breaking something.
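The same idea in sketch form, with hypothetical names: the app repo is a content-free renderer, and the content repo is pure data that non-engineers can edit without touching implementation details:

```python
# --- app repo: the site as a library, with no content inside ---
def render_site(pages):
    # toy renderer: turns a {title: body} dict into HTML strings
    return {title: f"<h1>{title}</h1><p>{body}</p>"
            for title, body in pages.items()}

# --- content repo: just data, safe for anyone to play around with ---
PAGES = {
    "Home": "Welcome to our site.",
    "About": "We make things.",
}

site = render_site(PAGES)
```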

It's just about separating the part of the codebase that vaguely makes sense to non-engineers from the part of the codebase that makes zero sense. I'm not talking about any kind of radical technological shift.


> 4GL... I think you meant something else there? Ruby and PHP are 4th generation languages and are obviously doing quite well.

I meant languages with integrated GUIs and databases: https://en.wikipedia.org/wiki/Fourth-generation_programming_...

> A simple example, instead of a repo with a bunch of configuration data and a "start" command, build a service as a library without any deployment specifics, and then have a separate repo that has a simple script that uses that library to set up a specific instance.

If the service does something useful, then it must use things like databases and external APIs. So you need interfaces to abstract them - and they will be big and complex. It looks like you end up with the "component problem"?


> Being able to think "natively" in software is going to be important.

We should not be working to prepare the people of tomorrow to solve the problems of today. When AI is able to program as well as a common dev, people will only need to guide it instead of doing the work by hand. People need to learn to work with AI, and current generations are already very involved with its incarnations - AI game bots, web search, feed ranking, filtering, recommendations, auto-translation, voice commands, etc.

Current-day aeronautical engineers don't need to solve their equations by hand; for them it is much more important to be able to use modeling and simulation tools. Similarly, in the future people will only need to function in relation to the AI of the day. Some aspects of how we used to do things will be lost, others will be gained. It will be a brave new world.


> When AI is able to program as well as a common dev

I don't believe that will happen in our children's lifetimes for arbitrary programs. You are only thinking about half of the problem: the AI understanding code and available libraries. The other half of what a "common dev" does is understand the person who is telling them what needs to be done. Half of the work of a programmer is to understand what their community actually wants, despite its inability to articulate it deterministically. An AI being able to do that for all 6 billion people on the planet is not even theoretically possible at this time. We don't even have the anthropology, sociology, and psychology to model it, let alone the computing resources and programmers to do it.

> don't need to solve their equations by hand; for them it is much more important to be able to use modeling and simulation tools.

Having an algorithm available in tool form doesn't remove the need to understand the algorithm. You need to know how it works to know when to use it and what the results mean. There are widespread problems in the scientific community with people using analyses they don't understand and publishing false conclusions.

The only tools which are used widely and correctly are the ones which are designed well and taught well through widespread cultural practices, which is a tiny subset. Programming literacy will unlock orders of magnitude more algorithms than would ever be refined to that level by the software industry.


Ehh... Projecting how things will go in the "coming century" is in my view a silly endeavour. Modelling current policy based on those projections even more so. I say wait to see how this whole AI thing pans out before starting to teach kids about some hypothetical form of AI.



