
As I understand, Rice's theorem does not apply because neural networks are not Turing-complete.

I'm not sure I agree with that. Even technically, my PC is not Turing-complete because its hard drive is finite. Yet there is an informal sense in which Rice's Theorem is still relevant to the PC as an abstraction, as we are all taught that "virus checkers are, strictly speaking, impossible". This is a subtle point that needs further clarification from CS theorists, of which I am not one.

Neural networks in general are Turing-complete models. Human brains, in the abstract, are Turing complete as well, as a simple example. LLMs run iteratively in an unbounded loop may be "effectively Turing complete" for the same simple reason.

Regardless, any theory purporting to be foundational ought to explicitly address this demarcation, unless practitioners think computability and formal complexity are not scientific foundations for CS.


But most "normal" neural networks are feed-forward, so they are guaranteed to terminate in a bounded amount of time. This rules Turing completeness right out. And even recurrent NNs can be "unfolded" into feed-forward equivalents, so they are not TC either.

You need a memory element the network can interact with, just like an ALU by itself is not TC, but a barebones stateful CPU (ALU + registers) is.
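
Concretely, here is a minimal sketch in plain Python (the names and structure are mine, purely illustrative, not any particular framework): the bare feed-forward pass always halts after a fixed number of layers, while wrapping the same bounded computation in an unbounded loop with external memory is what reopens the halting question.

    import numpy as np

    def feed_forward(x, weights):
        # A plain feed-forward pass: a fixed, finite chain of layers.
        # Running time is bounded by len(weights), so it always halts.
        for W in weights:
            x = np.tanh(W @ x)
        return x

    def run_with_memory(step, memory):
        # Wrap the same bounded computation in an unbounded loop with
        # external state it can read and write. Whether this halts now
        # depends on the loop, not on the network itself.
        while True:
            memory, halt = step(memory)
            if halt:
                return memory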


I already addressed this type of misargument in my first paragraph. Another way of looking at it: if NNs are so time-bounded, then they cannot be computationally powerful at all, which is really strange.

To some extent yes, but models are good at reverse engineering, so it isn't as great an advantage as you might think.

The last three CVEs are collections of bugs. CVE-2026-6784 is a collection of 55 bugs. CVE-2026-6785 is a collection of 154 bugs. CVE-2026-6786 is a collection of 107 bugs.

As for credits, I think bugs are ultimately credited to people, and this time Mozilla people used Mythos, as opposed to Anthropic people using Opus or Mythos.


Why do you think so? I didn't get that impression.

The gap between formal and informal has been pointed out as an Achilles' heel of formal methods from the dawn of the field, so this critique is not particularly new. The standard reference is "Social processes and proofs of theorems and programs" (1979), which is worth reading.

Nice; thanks for the pointer to the paper (I had not known of it).

The key to understanding and using Formal Methods is to realize that it is a way of thinking at many different levels. You can choose whatever level seems intuitive and accessible to you.

The must-read paper "On Formal Methods Thinking in Computer Science Education" posits three levels, which I have highlighted here - https://news.ycombinator.com/item?id=46298961

Carroll Morgan's classic "(In-)Formal Methods: The Lost Art --- A Users' Manual" and his recent book on the same topic are also relevant here - https://news.ycombinator.com/item?id=45490017


Well, that's because this post is about Gen 13. In the FL2 post (presumably on the same Gen 12 servers), they say 25% lower latency.


In this case the code is public and you can see they are not cheating in that sense.


The harness seems extremely benchmark-specific, which gives them a huge advantage over what most models can use. This isn't a qualifying score for that reason.

Here is the ARC-AGI-3-specific harness, by the way - lots of challenge information is encoded inside: https://github.com/symbolica-ai/ARC-AGI-3-Agents/blob/symbol...


I agree it's not cheating in that restricted sense. But I'm not really convinced that it can't be cheating in a more general sense. You can try, say, 10^10 variations of the harness and select the one that performs best. If you then look at it, it probably won't look like it's necessarily cheating. But you have biased the estimator by selecting the harness according to its measured value.
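
A toy sketch of that selection bias (the numbers are made up, not from any real benchmark): even if every harness has exactly the same true score, picking the best of many noisy measurements inflates the reported number.

    import random

    def measured_score(true_score=0.5, noise=0.05):
        # Every harness has the same true score, plus benchmark noise.
        return true_score + random.gauss(0, noise)

    n_harnesses = 10_000
    best = max(measured_score() for _ in range(n_harnesses))
    print(best)  # typically around 0.68, well above the true 0.5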


Once the model has seen the questions and answers in the training stage, the questions are worthless. Only a test using previously unseen questions has merit.


They aren't training new models for this. This is an agent harness for Opus 4.6.


All traffic is monitored, all signal sources are eventually incorporated into the training set in one way or another. The person you're responding to is correct, even a single API call to any AI provider is sufficient to discount future results from the same provider.


OK! So if someone uses an existing, checkpointed, open-source model, then the answer is yes: the results are valid, and it doesn't matter that the tests are public.


Yes, assuming the checkpoint was before the announcement & public availability of the test set.


You live in a conspiracy world. Those AI providers don't update their models that fast. You can try asking them to solve ARC-AGI-3 without a harness yourself and watch them struggle, same as yesterday.


Which part is the conspiracy? Be as concrete as possible.


They are definitely cheating: they have crafted prompts[1] that explain the game rules rather than having the model explore and learn.

1. https://github.com/symbolica-ai/ARC-AGI-3-Agents/blob/symbol...


Where do you see that? I only skimmed the prompts, but I don't see any aspects of any of the games explained in there. There are a few hints which are legitimate prior knowledge about games in general, though some look too inflexible to me. Prior knowledge ("Core priors") is a critical requirement of the ARC series; read the reports.


There is. The official leaderboard is without a harness, and the community leaderboard is with a harness. Read the ARC-AGI-3 Technical Paper for details.


I went through the technical paper again, and while they explain why they decided against the harness, I disagree with them - my take is that if harnesses are overfitting, then they should be penalized on the hidden test set.

Anyway, searching both in ARC-AGI's paper and website and directly on kaggle, I failed to find a with-harness leaderboard; can you please give the link?



Ah, it's based on this repo [0], and there's only 1 non-example submission there [1], from 2 weeks ago (so it only covers the preview games). Their schema doesn't have a field to show that it's only the preview, nor does the thing properly parse the score or cost into the table. And the biggest thing is that apparently there's no validation whatsoever - submissions are never run on the hidden test games, so it is essentially useless as a comparison.

[0] https://github.com/arcprize/ARC-AGI-Community-Leaderboard [1] https://github.com/arcprize/ARC-AGI-Community-Leaderboard/bl...


That works well. Anthropic wrote a writeup on it.

https://www.anthropic.com/engineering/harness-design-long-ru...


Yes, but a programming language with the proverbial sufficiently smart compiler. That is very useful.


Try writing an exhaustive spec for anything non-trivial and you might see the problem.


Been saying this for a while now. I work in aerospace, and I can tell you from first-hand experience that software engineers don't know what designing a spec is.

Aero, mechanical, and electrical engineers spend years designing a system. Design, requirements, reviews, redesign, more reviews, more requirements. Every single corner of the system is well understood before anything gets made. It's a detailed, time-consuming, arduous process.

Software engineers think they can duplicate that process with a few skills and a weekend planning session with Claude Code. Because implementation is cheaper, we don't have to go as hard as the mechanical and electrical folks, but properly speccing a system still takes a massive amount of up-front effort.


And software isn't as constrained by physics as hardware, which massively expands both the design space and the number of ways things can go wrong.

