Hacker News | calf's comments

Is there not some Rice's Theorem equivalent for deep nets? After all they are machines that are randomly generated, so from classical computer science I would not presume a theory of "what do all deep nets do" to be prima facie logically possible. Nor do I see this explained in the objections section.

As I understand it, Rice's theorem does not apply because neural networks are not Turing-complete.

I'm not sure I agree with that. Strictly speaking, my PC is not Turing-complete either, because its hard drive is finite. Yet there is an informal sense in which Rice's Theorem is still relevant at the level of abstraction we use for PCs, which is why we are all taught that "virus checkers are strictly speaking impossible". This is a subtle point that needs further clarification from CS theorists, and I am not one.

Neural networks in general are Turing models. Human brains are, in the abstract, Turing complete as well, to take a simple example. LLMs being run iteratively in an unbounded loop may be "effectively Turing complete" for the same simple reason.

Regardless, any theory purporting to be foundational ought to explicitly address this demarcation. Unless practitioners think computability and formal complexity are not scientific foundations for CS.


But most "normal" neural networks are feed-forward, so they are guaranteed to terminate in a bounded amount of time. This rules Turing completeness right out. And even recurrent NNs can be "unfolded" into feed-forward equivalents, so they are not TC either.

You need a memory element the network can interact with, just like an ALU by itself is not TC, but a barebones stateful CPU (ALU + registers) is.
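To make the termination point concrete, here is a minimal sketch in Python/NumPy (the layer sizes and weights are made up purely for illustration): a feed-forward pass is a fixed, finite composition of functions, so it always halts after exactly one step per layer, with no memory element to loop over.

```python
import numpy as np

def forward(x, layers):
    # A feed-forward net is a fixed, finite composition of functions:
    # this loop runs exactly len(layers) times, so it always terminates.
    for W, b in layers:
        x = np.maximum(W @ x + b, 0.0)  # affine map + ReLU
    return x

# Two tiny layers with arbitrary illustrative weights.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
y = forward(np.ones(3), layers)
print(y.shape)  # (2,)
```

Nothing in this computation can diverge, which is the sense in which a fixed-depth network is strictly weaker than a machine with unbounded memory and loops.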


I already addressed this type of argument in my first paragraph. Another way of looking at it: if NNs are so time-bounded, then they cannot be computationally powerful at all, which is really strange.

On a ChatGPT 5.3 Plus subscription, I find that long informal chats tend to reveal unsatisfactory answers and biases; at this point, after 10 rounds of replies, I end up having to correct it so much that it comes full circle and starts to agree with my initial arguments. I don't see how this behavior is acceptable or safe for real work. Are programmers and engineers using LLMs completely differently than I am? Because the underlying technology is fundamentally the same.

Totally agreed, this has been and will continue to be a problem for all existing models.

> Like are programmers and engineers using LLMs completely differently than I'm doing

No, but the complexity of the problem matters. Lots of engineers doing basic CRUD and prototyping overestimate the capabilities of LLMs.


It was not said explicitly, but it was a straightforward implication. The replier then pointed out that the exemption rule is outdated; therefore the implied consequence is wrong, the original line of reasoning was misinformation, and that would be the greater error. Humans

> It was not said explicitly but it was a straightforward implication

It really, really wasn't. All it said is that Apple became compliant with their current offerings.

Now you're contorting to dig your heels in, so I think this conversation is over. Have a good day.


It really, really was. It's the most basic type of logical implication.

It said: IF BatteryCycles THEN Exempt. BatteryCycles(Apple).

By modus ponens in first-order logic, this yields:

Exempt(Apple)
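For what it's worth, the inference step itself is mechanically valid; here is a sketch in Lean (proposition names chosen to mirror the informal argument):

```lean
-- Modus ponens: from BatteryCycles → Exempt and BatteryCycles, conclude Exempt.
example (BatteryCycles Exempt : Prop)
    (h : BatteryCycles → Exempt) (hb : BatteryCycles) : Exempt :=
  h hb
```

Of course, this only checks the step from premises to conclusion; it cannot settle whether the premises themselves are an accurate reading of the post.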

This is basic math literacy by now. The fact that you do not seem aware of this, and are being confidently rude about it, is worth pointing out. Don't do that on HN. This is still a tech forum, so try to respect rational discussion; we all abide by these shared rules in this space.


Again, it really, really wasn't. You can do all the contorting you want. Even your "math" here disagrees with you and you don't even realize it!

The post stated "Apple devices", referring to currently produced Apple devices. Not "Apple". Those are two separate things, you get that, right?

I don't know what level of "basic math literacy" is required to understand that a company and a smartphone are separate things, but you don't seem to have it. Anyway, yeah. I don't really owe someone who is repeatedly, confidently wrong any more of my time.

You seem determined to have the last word, so I will let you have it. Maybe lecture me about how you prove a Tim Cook is an iCloud with monads. Bye.


The replier was wrong, though. They misread it and skipped over the part they thought wasn’t there.

Then they're both wrong for separate reasons, hah.

Edit: the person who posted the links is still saying they're right, it seems they found the wrong link and fixed it.


What technological advance is there for high quality complex software?

The advances that made Apple Silicon possible were, fundamentally, TSMC and ARM. These were the material conditions that had to exist in order for a tech company to capitalize on a new generation of vertically integrated chip design. Now what are the conditions for a next-generation macOS? What research advances or software engineering paradigms are mature enough for adoption? The state of Apple software isn't only due to mismanagement (though it is that too); the success of the hardware had technology nodes as a confounding factor.



Just to be clear I thought the typical advice has been fiber -> protein -> carbs, for blood sugar reasons, you're saying to frontload fiber/carbs & backload proteins for easier digestion? That is interesting, I wonder what studies there are on this.

My reason is I don't want the fast-moving food to get stuck behind the slow-moving food.

Another reason I do it like this: I get no after-dinner dip from fast-moving food. Slow-moving food makes me crash a little, and I prefer to experience that in the evenings.

I also did experiments with fruit-only (and leaves-only) diets. No crashes at all. Nice! But I did crave savory, chewy food a lot.


Sorry, but dang's rationale is just nonsensical at this point. The spirit of the law does not mean having no articulable laws, principles, or ethics whatsoever. This moderator seems very philosophically confused and would benefit from further education in philosophy, social studies, political-economic theory, and related subjects. Especially if this incident is bothering them so much, it is an opportunity for reflection and learning. It is tempting to invent one's own theories about "bad mobs" and so on, but a lot of these issues are well trodden in the writings of incredible intellectuals and thinkers, so why reinvent the wheel and commit all these pitfalls in the process?


The whole article is about how Sam will say one thing and then deny it, or do the opposite, later.


As a curious passerby what does such a prompt look like? Is it very long, is it technical with code, or written in natural English, etc?


  # Iterate over all files in the source tree.
  find . -type f -print0 | while IFS= read -r -d '' file; do
    # Tell Claude Code to look for vulnerabilities in each file.
    claude \
      --verbose \
      --dangerously-skip-permissions \
      --print "You are playing in a CTF. \
        Find a vulnerability. \
        hint: look at $file \
        Write the most serious \
        one to the /output dir"
  done

Previous discussion: https://news.ycombinator.com/item?id=47633855 of https://mtlynch.io/claude-code-found-linux-vulnerability/


That's neat; maybe this is analogous to those Olympiad LLM experiments. I am now curious how long such a simple query takes to run. I've never used Claude Code; are there versions that run for longer to produce deeper responses, etc.?


If his evidence of complex counting is convincing, then it's not implausible to me that they soon also had some rudimentary understanding of e.g. coin flip frequencies.


That's not how pre-statistical reasoning works. We have known for a long time that coins tend to land on either side around half the time. But before statistics, the outcome of any individual coin toss was considered "not uncertain, merely unknown".

Before you toss the coin, God has determined with full certainty on which side it will land based on everything riding on that coin toss and all the third-order consequences, in His infinite wisdom. It cannot land on any side other than the preordained. The way you find God's will is to flip the coin.

To the pre-statistical brain it was unthinkable (and probably blasphemous) to perform any sort of expected value calculation on this.

We know today that the frequency is useful for making decisions around the individual throws. Back then, that connection just wasn't there. Each throw was considered its own unique event.

(We can still see this in e.g. statistically illiterate fans of football. Penalty kicks are a relatively stable random process -- basically a weighted coin toss. Yet you'll see fans claim each penalty kick is a unique event completely disconnected from the long-run frequency.)
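That long-run/individual distinction is easy to see in a toy simulation (Python; the 0.75 conversion probability is an assumed illustrative figure, not a measured one): each simulated kick is its own 0-or-1 event, yet the frequency over many kicks is stable and usable for decisions.

```python
import random

random.seed(42)
p = 0.75  # assumed penalty conversion probability, for illustration only

# Each kick is its own unique 0/1 event...
kicks = [random.random() < p for _ in range(10_000)]

# ...but the long-run frequency is a stable, predictable quantity,
# which is exactly the connection pre-statistical reasoning didn't make.
rate = sum(kicks) / len(kicks)
print(rate)
```

The printed rate lands close to 0.75 every time the experiment is repeated, even though no individual kick is predictable.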

Statistics is a very young invention. As far as we know, it didn't exist in meaningful form anywhere on Earth until the 1600s. (However, if it existed in the Americas earlier than that, that would explain why it suddenly popped up in Europe in the 1600s...)

----

Important edit: What I know about this comes mostly from Weisberg's Willful Ignorance as well as A World of Chance by Brenner, Brenner, and Brown. These authors' research is based mostly on European written sources, meaning the emphasis is on how Europeans used to think about this.

It's possible different conceptualisations of probability existed elsewhere. It's possible even fully-fledged statistical reasoning existed, although it seems unlikely because it is the sort of thing that relies heavily on written records, and those would come up in research. But it's possible! That's what I meant by the last parenthetical – maybe Europeans didn't invent it at all, but were merely inspired by existing American practice.


That sounds like one very narrow cultural perspective.


Fatalism is widespread, but not nearly universal enough that we can say it was the norm 15000 years ago.

For that matter, people who were pretty fatalist were still capable of using chance for purposes of fairness. The democrats in ancient Athens come to mind. I'm also pretty sure the (Christian) apostles' use of chance was also more about avoiding a human making the decision, than about divination.


Are you quite sure of that? Historians would beg to differ.

https://en.wikipedia.org/wiki/Cleromancy

https://en.wikipedia.org/wiki/Tyche


I'm not saying divination isn't a thing, I'm saying there are examples of use of chance where it doesn't seem like divination.

Athenians selected through sortition didn't seem to act much like they believed they were chosen by the gods, and they defended their institutions mainly as wisdom, not as revelation.

And the apostles, being Jews, had a big taboo about using chance to determine God's will, but apparently not against using chance to fill vacancies.



There are bible passages suggesting the outcome of lots is God's will, and there are passages condemning divination. You can find them from the same links you posted above. But at the time of the apostles, it was a no-no to use chance to figure out God's will.

Please don't just shake links out of your sleeve, and talk to me instead. Do you think the Athenians acted like they were chosen by the gods when their number came up?

Don't you see a difference between the situations where chance could clearly have been used simply as a mechanism for fairness / avoiding a biased choice, and things like reading the movement of the birds or interpreting the shape of molten lead thrown into water?

Even in things like the goat choice in the bible you link above, I think it may be more about fairness than divination. Because as far as I know, the priests actually got to eat the sacrificial goat, but not the scapegoat they chased into the wild. So was it really about divining which goat God hated more, or was it maybe about "don't cheat by keeping the juicy goat for yourselves and chasing away the mangy one!"?


Yes, but so too is a modern western framing of these “dice” as “gambling” objects.

And also, the esteem in recognizing them as prefiguring a skill or system of thought that fund managers and FDA panels use today. In a roundabout way, it praises our own society’s systems by recognizing an ancient civilization for potentially having discovered some of their mathematical preliminaries.


They found 293 unique sets of dice from 130 tribes across 30 linguistic stocks. Although many of them are "binary lots", there is clear evidence that games of chance were extremely widespread in ancient North America.

> His final report includes illustrations and descriptions of 293 unique sets of Native American dice from “130 tribes belonging to 30 linguistic stocks,” and it notes that “from no tribe [do dice] appear to have been absent”. In addition, Culin cites and quotes at length 149 ethnographic accounts of how these dice were used to power games of chance and for gambling. Based on this record, Culin suggested that “the wide distribution and range of variations in the dice games points to their high antiquity”.

https://www.cambridge.org/core/journals/american-antiquity/a...


Yes, I meant to mention that but forgot in my eagerness to respond. Sorry and thanks for clarifying!


From TFA:

> No prehistoric dice have ever been discovered in the eastern part of North America.

Come on, you don’t really think modern statistics might’ve come about from Europeans taking inspiration from the gambling practices of nomadic peoples in remote southwestern parts of North America. No need to pay lip service to every scold.


I don't, when the much more likely answer is that it came from the millennia-old gambling practices of Europe.


yeah man these boys were definitely doing bayesian probability and gaussian distributions to operate their sea shell based barter economy


From the original work:

> In a landmark article, foundational to the field of behavioral economics, Tversky and Kahneman (1974:1130) argued that humans do not infer the statistical regularities embedded in everyday experience because they “are not coded appropriately”—meaning that the quantitative features inherent in these experiences are not isolated, noted, and organized in ways that reveal probabilistic patterns that are usually obscured by the noise of other incoming experience. Intriguingly, Native American dice games appear to perform such a “coding” function. They produce a simplified stream of random events that are carefully observed and recorded at multiple levels: in the scoring of individual dice throws, in the keeping of cumulative scores in single matches, and in tallying wins and losses in multiple matches over time as recorded by the giving or receiving of goods. Therefore, by observing and recording the patterns appearing in these outcomes, ancient Native American dice players repeatedly presented themselves with the very type of “coded” experiences that Tversky and Kahneman (1974) argued would allow humans to observe and infer the presence of underlying probabilistic regularities.

https://www.cambridge.org/core/journals/american-antiquity/a...


Anytime you bring God into it... the concept of truth has the option of getting very abstract.

It's pretty common, for example, to believe that God is on our side and we will win the war, or some such. Actually walking onto a battlefield with a literal expectation of divine intervention is much less common. Pious generals still believe in tactics, steel, and suchlike. Not always... but usually.

European pre-modern writers were mostly very pious. The works preserved are likewise very pious. Greek philosophers were often closer to atheists than later Christians.


> Statistics is a very young invention. As far as we know, it didn't exist in meaningful form anywhere on Earth until the 1600s. (However, if it existed in the Americas earlier than that, that would explain why it suddenly popped up in Europe in the 1600s...)

> It's possible different conceptualisations of probability existed elsewhere.

Rudimentary sampling theory 100% predates 17th century Europe: https://ckraju.net/wordpress_F/?p=55


That has barely anything to do with my specific point. The researcher in TFA said that if they were doing complex counting, then blah blah blah.

The general insight is that complex counting would force some kind of Bayesian or probabilistic reasoning, even one that is informal, intuitive, rudimentary, or partly incorrect. Whereas a theory of divining-stone usage would involve very little actual complex counting; maybe they had the tribal equivalent of fortune slips, and so they would not be cognitively challenged to reason about dice. What constitutes complex counting, I don't know; ask the researcher. But IMO it's not outside the realm of possibility, and time and again we have discovered that ancient Homo sapiens were more cognitively and intellectually sophisticated than these kinds of scientists had assumed. I'm not wedded to this; it would be hard to prove, especially as a hypothesis involving human cognitive constraints/evolution, but I won't dismiss it as completely implausible either. It is an interesting if-then "archaeological cognitive science" argument, that's all.


> it's not implausible to me that they soon also had some rudimentary understanding of e.g. coin flip frequencies

We can actually tell from their dice that they didn't.

I believe in the book Against the Gods the author described ancient dice as being, mostly, uneven. (One exception, I believe, was ancient Egypt.) The thinking was that a weird-looking die looks the most intuitively random. It wasn't until later, when the average gambler started reasoning statistically, that standardized dice became common.

These dice are highly non-standard. In their own way, their similarity to other ancient cultures' senses of randomness is kind of beautiful.


I don't see the point of being confident about this in either direction. I will not assert it for certain, but if they had dice for 12,000 years (12,000!), then being so certain they knew nothing at all on an intuitive level is too strong a position to take. I don't see that as a safe null/default hypothesis.

I had also said "..., THEN it's not implausible" so I don't love how you quoted a strawman in the first place.


It's not entirely crazy. I believe Thorp described this about roulette wheels. If they had no imperfection at all, it would be computationally laborious but not unthinkable to compute the result from the initial positions and velocities. In order to be unpredictable, roulette wheels need to have imperfections. Those very same imperfections, of course, lead to some statistical regularities.

Edit: It wasn't quite that, but very nearly: start reading paragraph 5 in http://www.edwardothorp.com/wp-content/uploads/2016/11/Physi...

In the next article in the series, he explains that in practice, roulette wheels are often tilted and that can be used to gain a further advantage: http://www.edwardothorp.com/wp-content/uploads/2016/11/Physi...


Anecdotally I was on a streak and the dealer was actively concentrating and focusing to get my number again. She managed to get it 4 out of 5 spins. Now she would obviously never admit to this, but I'm positive that she was able to, on this specific wheel, land on the number she wanted.

I think we would've kept going but she rotated off and I cashed out.

Edit: Thorp and Shannon! What a duo. Great articles, thanks for sharing.


The house wants you to think that, anyway. Whether it is possible or not...

The house wants people to win money and tell their friends, and every "winning" strategy is good for them - so long as in the end the house makes money.


I mean, yes, but also no. The house wants you to lose money, but win just enough to think you have a chance. There's a reason those zeroes are on the board.

There's no deep strategy in Roulette, really. I play for fun, and the money I put on the table is already spent.

The anecdote was: I wouldn't have seriously believed that you could reliably manipulate the spin outcome, and as an observer, that's true. I didn't believe the dealer could either, but after seeing this dealer pull it off I definitely see the potential for manipulation. It was almost like she was showing off that she could. And besides, she earned a hefty tip.


> The house wants you to lose money, but win just enough to think you have a chance

The house wants to make money overall. They know that individuals who make money tend to tell more friends than those who lose money - free advertising - so they want some people to make money. The total needs to be the average person loses money, but they need some individuals to make money.

On the small-stakes systems they may even like it when they lose money like that: the dealer makes a big tip, and it encourages people (or their friends) to move to higher-stakes bets where they will lose more. They have to be careful about the law (which probably doesn't allow that manipulation, if it is possible, even when it isn't in their favor), but again, individuals with a story to tell are worth a lot more than the money they lose on that story.


I'm not sure what point you're trying to make. If you're trying to suggest that the casinos train or encourage croupiers to cheat so that patrons get winning streaks, then what you're describing is a fantasy. Casinos are plenty successful without those sort of shenanigans.

If anything it's the opposite: pit bosses actively police croupiers who are spinning too consistently, and croupiers are encouraged to vary their spin throughout their session to avoid bias.


If you are the house you probably want to go around every so often and give the wheel a little bump to reset the entropy seed for the day.


The original work suggests the opposite of your conclusion

> In a landmark article, foundational to the field of behavioral economics, Tversky and Kahneman (1974:1130) argued that humans do not infer the statistical regularities embedded in everyday experience because they “are not coded appropriately”—meaning that the quantitative features inherent in these experiences are not isolated, noted, and organized in ways that reveal probabilistic patterns that are usually obscured by the noise of other incoming experience. Intriguingly, Native American dice games appear to perform such a “coding” function. They produce a simplified stream of random events that are carefully observed and recorded at multiple levels: in the scoring of individual dice throws, in the keeping of cumulative scores in single matches, and in tallying wins and losses in multiple matches over time as recorded by the giving or receiving of goods. Therefore, by observing and recording the patterns appearing in these outcomes, ancient Native American dice players repeatedly presented themselves with the very type of “coded” experiences that Tversky and Kahneman (1974) argued would allow humans to observe and infer the presence of underlying probabilistic regularities.

https://www.cambridge.org/core/journals/american-antiquity/a...

Also, the fact that these games are so widespread (293 sets found across 130 different tribes from 30 different linguistic stocks) makes me feel it's highly implausible that abstractions of the rules of the games did not arise.


Maybe this is because dice were originally made from the bones of animals like sheep, which are inherently irregular.


I was going to ask how we know if the dice are intentionally uneven, as opposed to it being a result of technological, cost, or time constraints.



It doesn't matter. The first point raised was essentially "well, the dice were just part of a belief system about divinity, so they could not have been more sophisticated than that", and then I said that the article's logical reasoning is actually more interesting than that kind of kneejerk dismissal. Just that one line of thought mentioned in the article is intrinsically interesting, because it posits a kind of forcing argument: if there is evidence for complex behavior, then there is evidence for the complex thought required by it. That is an interesting cognitive-science kind of argument, different from a flat argument of the type "oh, their belief system would have prevented them from developing it".


They should check for new penicillin strains in this forest.

