Remember when people thought solving Erdős problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?
> Remember when people thought solving Erdős problems required intelligence? Is there anything an LLM could ever do that would count as intelligence?
Hah. It reminds me of this great quote, from the '80s:
> There is a related “Theorem” about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of “real thinking”. The ineluctable core of intelligence is always in that next thing which hasn’t yet been programmed. This “Theorem” was first proposed to me by Larry Tesler, so I call it Tesler’s Theorem: “AI is whatever hasn’t been done yet.”
We are seeing this right now in the comments. Fifty years later, people are still doing it: oh, this was solved, but it was trivial; of course it isn't real intelligence.
That is a “gotcha” born of either ignorance (nothing wrong with that, we’re all ignorant of something) or bad faith. Definitions shift as we learn more. Darwin’s definition of life is not the same as Descartes’ or Plato’s or anyone in between or since because we learn and evolve our thinking.
Are you also going to argue definitions of life before we even learned of microscopic or single cell organisms are correct and that the definitions we use today are wrong? That they are shifting goal posts? That “centuries later, people are still doing this”? No, that would be absurd.
I don't see it as a gotcha. Just an (evergreen, it seems) observation that people will absolutely move the goalposts every time there's something new. And people can be ignorant outsiders or experts in that field as well.
For example, ~2 years ago, an ML expert publicly remarked on stage that LLMs can't do math. Today they absolutely, obviously can. Yet somehow it's not impressive anymore. Or, and this is the key part of the quote, it's somehow not related to "intelligence". Something that 2 years ago was not possible (again, according to a leading expert in the field) is possible today. And yet it is recast as something they could always do, and since they can do it now, it is suddenly no longer important. On to the next one!
No idea why this is related to Darwin or definitions of life. The definitions don't change. What people considered important 2 years ago is suddenly not important anymore. The only thing that changed is that today we can see that capability. Ergo, the quote holds.
See, that’s a poor argument already. Anyone could counter that with other experts in ML publicly making remarks that AI would have replaced 80% of the work force or cured multiple diseases by now, which obviously hasn’t happened. That’s about as good an argument as when people countered NFT critics by citing how Clifford Stoll said the internet was a fad.
> made this remark on stage: LLMs can't do math. Today they absolutely and obviously, can.
How exactly are “LLMs can’t” and “do math” defined? As you described it, that sentence does not mean “will never be able to”, so there’s no contradiction. Furthermore, it continues to be true that you cannot trust LLMs on their own for basic arithmetic. They may e.g. call an external tool to do it, but pattern matching on text isn’t sufficient.
> The definitions don't change.
Of course they do, what are you talking about? Definitions change all the time with new information. That’s called science.
The definition of "can/cannot do math" didn't change. That's not up for debate. Two years ago they couldn't solve an Erdős problem (people have tried; Tao tried ~1 year ago). Today they can.
Definitions don't change. The idea that now that they can it's no longer intelligence is changing. And that's literally moving the goalposts. Read the thread here, go to the bottom part. There are zillions of comments saying this.
You seem keen on not trying to understand what the quote is saying. This is not a good-faith discussion, and it's not going anywhere. We're already miles from where we started. The quote is an observation (and an old one at that) about goalposts moving. If you can't or won't see that, there's no reason to continue this thread.
> The definition of "can/cannot do math" didn't change. That's not up for debate.
That is not the argument. The point is that the way you phrased it is ambiguous. “Math” isn’t a single thing, and “cannot” can either mean “cannot yet” or “cannot ever”. I don’t know what the “expert” said since you haven’t provided that information, I’m directly asking you to clarify the meaning of their words (better yet, link to them so we can properly arrive at a consensus).
Good example. There are no literal goal posts here to be moved. But with the new accepted definition of the words, that’s OK.
> There are zillions of comments saying this.
Saying what, exactly? Please be clear; you keep being ambiguous. The thread has barely crossed a couple of hundred comments as of now; there are not "zillions" of comments in agreement about anything.
> You are keen to not trying to understand what the quote is saying. (…) If you can't or won't see that, there's no reason to continue this thread.
Indeed, if you ascribe wrong motivations and put a wall before understanding what someone is arguing, there is indeed no reason to continue the thread. The only wrong part of your assessment is who is doing the thing you’re complaining about.
He’s a booster and I don’t think he argues in good faith.
He seems to be fixated on the notion that humans are static and do not evolve; clearly this is false. What people consider a determinant of intelligence also changes as things evolve.
I've spent a good chunk of time formalising mathematics.
Doing formalized mathematics is as intelligent as multiplying numbers together.
The only reason why it's so hard now is that the standard notation is the equivalent of Roman numerals.
When you start using a sane metalanguage, and not just augmented English, to do proofs, you gain the same increase in capability as going from word equations to algebra.
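For readers who haven't seen it, this is roughly what "doing formalized mathematics" looks like today, using Lean 4 as an example prover (one of several in common use). A small machine-checked proof that addition on the naturals is commutative, done by induction rather than cited from a library:

```lean
-- A minimal example of a machine-checked proof in Lean 4:
-- commutativity of addition on Nat, proved by induction on n.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero =>
    -- base case: m + 0 = 0 + m
    rw [Nat.add_zero, Nat.zero_add]
  | succ k ih =>
    -- inductive step: m + (k+1) = (k+1) + m, using ih : m + k = k + m
    rw [Nat.add_succ, ih, Nat.succ_add]
```

Every rewrite step here is verified by the kernel; the prover accepts nothing on faith, which is exactly why sloppy "augmented English" proofs don't survive translation.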
Well, the famous Turing test was evidently insufficient. All that happened is that the test is dead and nobody ever mentions it anymore. I'm not sure that any other test would fare any better once solved.
When will LLM folks realize that automated theorem provers have existed for decades, and that non-ML theorem provers have solved non-trivial math problems tougher than this Erdős problem?
Proposing and proving something like Gödel's theorems definitely requires intelligence.
Solving an already proposed problem is just crunching through a large search space.
I think Gödel's incompleteness theorem (GIT) is a negative answer to a problem originally posed by David Hilbert; it was not proposed by Goedel originally. I think Goedel's main new ideas were (i) inventing Goedel numbering, (ii) using Goedel numbering to show that provability from a finite FOL signature and a single FOL formula is reducible to an equation involving primitive recursive functions, and (iii) devising a method to translate FOL statements about arbitrary primitive recursive functions into statements about only the two primitive recursive functions + and ×.
Later work establishing the field of computability theory (or "recursive function theory" as it was then known) generalised the insights (i) and (ii). In light of that, Goedel's only now-relevant contribution is (iii).
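To make insight (i) concrete, here is a toy sketch of the classical prime-exponent encoding (the symbol codes are made up for the example; Gödel's actual assignment differs):

```python
# Toy Goedel numbering: encode a finite sequence of symbol codes
# (each >= 1) as a single integer via prime-power exponents,
# then decode the sequence back out by factoring.

def primes():
    """Yield the primes 2, 3, 5, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def goedel_encode(codes):
    """Encode [c1, c2, c3, ...] as 2**c1 * 3**c2 * 5**c3 * ..."""
    g, gen = 1, primes()
    for c in codes:
        g *= next(gen) ** c
    return g

def goedel_decode(g):
    """Recover the sequence of exponents from the encoding."""
    codes, gen = [], primes()
    while g > 1:
        p, e = next(gen), 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
    return codes

# Hypothetical symbol codes for a formula, say "0 = 0" with 0 -> 1, = -> 2:
seq = [1, 2, 1]
n = goedel_encode(seq)            # 2**1 * 3**2 * 5**1 = 90
assert goedel_decode(n) == seq    # the formula is recoverable from one number
```

The point of the trick is that syntactic operations on formulas become arithmetic operations on their code numbers, which is what makes (ii) possible at all.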
> When will LLM folks realize that automated theorem provers have existed for decades
This is very misinformed. Automated theorem proving was, sadly, mostly a disappointment until LLMs and other Machine Learning techniques came along. Nothing like the article's result was remotely within reach.
I think the point the GP is making is that Gödel's theorem wasn't part of any "genre". Gödel, or somebody, had to invent the whole field, and we haven't seen LLMs invent new fields of mathematics yet.
But this isn't a fair bar to hold it to. There are plenty of intelligent people out there, including 99% of professional mathematicians, who never invent new fields of mathematics.
I've had a similar notion that Time() is a necessary test function. Maybe it's because of the limitations of human cognition. (We have biases and blind-spots and human intelligence itself is erratic.)
I find it's helpful to avoid conflating the following three topics:
/1/ Is the tool useful?
/2/ At scale, what is the economic opportunity and social/environmental impact?
/3/ Is the tool intelligent?
Casual observation suggests that most people agree on /1/. An LLM can be a useful tool. (Present case: someone found a novel approach to a proof.) So are pocket calculators, personal computers, and portable telephones. None of these tools confers intelligence, although these tools may be used adeptly and intelligently.
For /2/, any level of observation suggests that LLMs offer a notable opportunity and have a social/environmental impact. (Present case: students benefitted in their studies.) A better understanding comes with Time() ... our species is just not good at preparing for risks at scale. The other challenge is that competing interests may see economic opportunities that don't align for social/environmental Good.
Topic /3/ is of course the source of energetic, contentious debate. Any claim of intelligence for a tool has always had a limited application. Even a complex tool like a computer, a modern aircraft, or a guided missile is not "intelligent". These tools are meant to be operated by educated/trained personnel. IBM's Deep Blue and Watson made headlines -- but was defeating humans at games proof of Intelligence?
On this particular point, we should worry seriously about conferring trust and confidence on stochastic software in any context where we expect humans to act responsibly and be fully accountable. No tool, no software system, no corporation has ever provided a guarantee that harm won't ensue. Instead, they hire very smart lawyers.
Both can be totalitarian. Both are shit imho. I just don't buy the argument that China is worse because of it.
But if we start nitpicking, the US also executes people all over the world without trial, and has secret prisons worldwide where they put people (guess what) without trial.
That pro forma response grows oh so very tiresome.
For the nth time: scale, easiness, and access, matter. AI puts propaganda abilities far beyond the reach of those men in the hands of many more people. Do you not understand the difference between one man with a revolver and an army with machine guns? They are not the same.
Nowhere in my comment am I “blaming the tools”. I’ll ask you to engage with the argument honestly instead of simply parroting what you already believe without reading.
Did you do a net benefit calculation? If not, all these knee jerk anti-AI comments are tiresome and predictable (see luddites).
> I’ll ask you to engage with the argument honestly instead of simply parroting what you already believe without reading
I did engage with the argument. The argument is a tiresome old one that is knee-jerk anti-tech. You seem to be the thoughtless one in this discourse, repeating for the nth time an anti-tech position that assumes the negatives massively outweigh the positives.
Also, why attack me instead of the argument? Did I touch a logical sore point? I believe so.
> For the nth time: scale, easiness, and access, matter.
By that logic, was the printing press evil? Remember, Mao/Stalin/Hitler used presses to spread their propaganda.
Also, for the (n+1)th time, using your own style, don't be lazy:
1. Come up with a net benefit calculation for AI. What? You can't? Then, don't try to claim this is all net negative.
2. Explain how AI is different from other tech like the printing press, that also had scale, easiness, and access.
FYI, these jobs pay the highest in the world. If these jobs are exploitative, then so are other non tech jobs that employ citizens and pay lower wages.
My former organization employed ~750 contractors developing software.
Their billable rates ranged from $44-76/hr in 2022. The people in the cafeteria probably made more. They get a minimum viable salary, like indentured workers, in hopes of getting a green card and more opportunity.
It is interesting to see the different views on immigration. Here in the UK, leading up to the Brexit vote, everyone said blue-collar immigrants were the problem, because they depressed wages for the poor and made the middle class richer by building cheaper houses, picking cheaper crops, etc.
In Singapore, the rage is mostly against higher-earning immigrants, because they take all the good jobs, making the middle class in Singapore poorer.
I'm sensing a bit of a mix in your US centric argument.
All in all, a lot of people just hate immigration, always have, always will. It is a topic as old as time.
Opportunistically venturing out of Africa is one thing. Sending a couple people around a distant and desolate rock, while the homeworld burns due to unforced errors, is another.
Alternatively, if we don't become a multi-planetary species, we will be exterminated by a meteor. There's enough excess to do a bit of species saving multi-tasking.
For an alternate perspective: the development of this (which includes future launches) cost only 80% as much as ~500 miles of railway in California! [1]
> when we cannot sustain ourselves on the rock we evolved on.
The population is as big as it has ever been, and growing. Hunger index is steady [1], with low scores concentrated in the usual failed African states. We are sustaining ourselves just fine, by all metrics.
There are future problems, but there are also future solutions. A surprise meteor from the blackness of space has no solution. Both are bad. Multiple efforts for multiple problems can take place at once; one does not negate or even influence the other. Corruption in our government is a much bigger money sink, and risk to our future, than a moon mission.
The AI world is moving incredibly fast. A lot of SaaS company valuations are down. You can’t honestly say Oracle is operating in the same conditions as last year.
Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.
Most intelligent people do not think that.
Eventually, we will arrive at the same conclusion for what LLMs are doing now.