Hacker News | sbuttgereit's comments

There are a number of assumptions in what you say that don't necessarily hold.

1) That school is simply about landing a job.

2) That there is value in students knowing how to have the AI do problems for them.

3) That the follow-on effects of manually solving difficult problems are discountable compared to the direct output of the work.

I would say you're absolutely correct that people pay for the result and don't really care how you got there. But that's a pretty shallow rationale which overvalues the ability to be the conduit from the source of requirements to the final output, and undervalues the individual's ability to think for themselves when faced with the challenges of technological, geopolitical, or simply uncontrolled personal circumstances.

"The conduit", whom you seem to believe is the one with the marketplace advantage, is exactly the person I would say is the most vulnerable. Not because getting the AI to produce what's demanded is without value, but because it's quickly becoming a task that doesn't need the intermediary at all. Those magicians who can prompt/agent/mcp/etc. their way through to positive successes are actively being challenged by the very AI producers on which our conduit people now depend. Removing the need for intermediaries would be a great competitive advantage for any AI vendor able to achieve it. But insofar as intermediaries create output from LLMs, they'll not be very well differentiated: the common wisdom tends to be the output, lest the AI be accused of hallucination or of being overly supportive. But when everyone is using AI for everything, the opportunities will be in arbitraging that which is missed by common wisdom... filling in the cracks that any responsible AI would simply never venture to consider. Our conduit person will be at a decided disadvantage, because it takes real thought to know when it's best to color within the lines, and when it's best not to.

And that's really it. A good education teaches you about the process of thought and makes you practiced at thinking. I would expect a better educated, thinking person to more easily adapt and make use of technology such as generative AI to solve problems, more so than a person who just knows how to deal with today's prompting needs. The thinking person will be able to understand the bigger picture and get a more consistent, higher-quality series of results than the person just getting results as needed.

Put another way: the output of a good education is you, as a thoughtful & knowledgeable person; the output on the page is merely a means to that end. But if you focus on the answer on the page as the only important thing... you're really evaluating the AI, not the person who acted as intermediary.

In other words, if a person following your advice comes for a job, simply ask them in the interview which AIs they used, and then just sign contracts with those vendors instead... you'll get better bang for your buck cutting out the middleman.


Looking at the discussion below this comment, I'd just add this video by AlphaPhoenix:

https://www.youtube.com/watch?v=2Vrhk5OjBP8

Good discussion in the comments there as well.


I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires vs. market products/services into a successful business? Are they trying to show that once you get to financially managing a business, the ruthlessly efficient demands of the AI can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?

The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice storefront, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail one of the most important things is getting the right real estate for your target market... and it seems like that choice was already made in this case. Yes, a nice storefront and good clerks are important, but I've worked in chains which built immaculately designed stores with great clerks that failed... and some that opened little more than fluorescent-lit hellscapes with clerks that barely cared, and succeeded. In both cases the overall quality of the decisions and strategies relative to the target markets is what mattered to the success of the business. Just going through the motions didn't.

So if all this is to say is that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different this company is trying to learn, I'd be much more interested in that.


If I'm being charitable, it's more about the ability to orchestrate and resolve tradeoffs across these different tasks / domains? The overall C&C, presumably. Which is still not so surprising.

Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.


I agree that some of these things we could have already guessed -- like yes, agents can research stuff and order stuff off the internet. I think what will be a lot more interesting is the interactions that happen between Luna, the agent running things, and the employees it hired. I guess it's less about AI being able to do the procurement/CEO-level stuff, and more about how it does the HR-level aspects of store management. That seems more important in the long run, because like you said, we already know the capabilities are there. I think what Andon Labs is doing is more about the safety aspect now. Seems that way at least with how transparent they are about Luna losing money and messing up lol

They're trying to get noticed so that a wealthy cult member's brain gets tickled to the tune of 9 figures

I think this really needs to be part of the message. It's great that Claude found a vulnerability that had apparently been overlooked for a long time. It's even proper for Anthropic to tout the find. But we should all ask about the signal to nose ratio that would have been part of the process. If it was only ever successful... that would be worth touting, too. But I expect there was more noise than they'd care to admit.

Or put another way, the context matters.


I have to agree with you. We don’t talk nearly enough about the real signal to nose ratio.

(Sorry. I couldn’t resist lol)


They've pretty clearly demonstrated the ability to get to orbit but have, quite reasonably, not actually put the thing into orbit. Given the size of the rocket, they've needed to demonstrate things like the engine relight for control after achieving orbit, and have prioritized other issues like figuring out reentry.

So yes, you are literally correct that they haven't put one in orbit, but that's more out of caution than capability. What they've demonstrated in the most recent tests is only that they have good reason to believe they can deorbit in a controlled fashion. But... now they've upgraded everything: Raptor 3, booster v3, Starship v3. They need to prove out those capabilities again.

So I wouldn't be surprised if they continue the suborbital program for the next 3 or 4 tests. Given all the redesign, they aren't exactly at the beginning, but they have to show that they haven't broken what they previously fixed.


As others have mentioned, that's simply not going to tell you anything. AIS can be, and often is, turned off in such situations, and ships can spoof their location by sending false AIS... something one could well imagine situations like this encouraging.

I find Sal Mercogliano's "What's Going on With Shipping?" to be a better source for understanding what's happening in the Strait. Here's a link to yesterday's episode, "Strait of Hormuz 3-Week Recap | What is the Status of the Ships, Transits and Escort Mission?": https://www.youtube.com/watch?v=q64cOs7GN_4


"Someone once said ..."

Someone also once said that the Azores are the remains of Atlantis. I simply didn't put any credence in it.

While behavioral changes around a self-driving car are plausible, they're common enough now that, at least where I live in San Francisco, regular human drivers should be pretty well acclimated to them.


As someone who constantly reached for some sort of GUI/TUI tool to work with git repos, I can understand the hesitation. The normal git CLI is sufficiently opaque, and my interest in it sufficiently low, that those tools were really the only way I could get work done efficiently, since they made the arcane ways of git a bit more evident.

The curious thing for me is that with jj I find I don't ever really reach for a GUI/TUI anymore. At first I did try a GUI, but then I realized that the vast majority of what I want to get done conceptually didn't require it. So most of what I do now is just using the jj command line, and very rarely do I reach for any other sort of tool.

The exception to the GUI/TUI use is resolving conflicts. For me, being able to see the conflicts side by side and choose between them more interactively is still more comfortable than simply hand-editing the file. I also find I'm searching or chatting with an LLM any time I want to do something a little more advanced and less common, like rebasing all my feature branches onto the most current mainline branch in one command... jj has a rich set of functions and pattern matching which I haven't (and may never) take the time to learn. But for the majority of day-to-day interactions, it's just me and the command line. I would never say such a thing about using git by itself.
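For what it's worth, that "rebase everything in one command" case can be sketched roughly like this. This is a sketch, not a recipe: it assumes a recent jj release, and the revset functions used here (bookmarks(), mine(), trunk()) have been renamed across versions, so check `jj help revsets` for your install.

```shell
# Fetch the latest remote state (updates the remote-tracking mainline).
jj git fetch

# Rebase every bookmark (jj's name for a branch pointer) authored by me
# onto the current mainline head. 'all:' allows the revset to match
# multiple revisions; 'trunk()' resolves to the main branch head.
jj rebase -b 'all:bookmarks() & mine()' -d 'trunk()'
```

The appeal is that the revset language does the selection, so there's no per-branch loop like you'd script around `git rebase`.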


Is this anything similar at all to:

https://github.com/openai/symphony

I'm not very familiar with the space; I follow Elixir goings-on more than some of the AI stuff.

It is curious... and refreshing... to see Elixir & the BEAM popping up for these sorts of orchestration type workloads.


It’s great to see OpenAI embracing Elixir - symphony is a direct implementation of the types of things Jido can do!


The automaton in the film is a "Pierrot" style of clown:

https://en.wikipedia.org/wiki/Pierrot

From the Wikipedia entry...

*His character in contemporary popular culture—in poetry, fiction, and the visual arts, as well as works for the stage, screen, and concert hall—is that of the sad clown [...]*

A modern "Pierrot" style clown is:

https://en.wikipedia.org/wiki/Puddles_Pity_Party

I have to admit... I wouldn't know about the Pierrot style of clown if it weren't for Puddles...

https://www.youtube.com/watch?v=Cf1w5GUturU


The comment is not about the automaton, as a more careful reading of it will make clear.


Fair enough.

