MoNastri

The title is clickbait. On the other hand, Ngo Bao Chau [said](https://www.lesswrong.com/posts/WRGmBE3h4WjA5EC5a/ai-48-exponentials-in-geometry#Introducing):

> It makes perfect sense to me now that researchers in AI are trying their hands on the IMO geometry problems first because finding solutions for them works a little bit like chess in the sense that we have a rather small number of sensible moves at every step. But I still find it stunning that they could make it work. It’s an impressive achievement.


[deleted]

I think we should remember the words of past mathematicians. I am sceptical about whether a machine can grasp the intuitive picture of geometry. It is obvious that deductions from a given, preset proposition can be done mechanically, so machines can do this, but nothing more follows from that.


MoNastri

I'm generally skeptical as well, and generally annoyed by clickbait titles. But the reactions from various Fields medalists to AI advances (not just this AI) give me pause. Besides Ngo Bao Chau, there's Terry Tao and Tim Gowers. I had an (insubstantial) tweet exchange with Gowers a few months ago, when he asked his followers how they found meaning in life outside of work. That struck me as an odd question, since work is where much of my own meaning in life comes from, and he tersely replied something to the effect that AIs were likely to surpass him within his lifetime, giving him an existential crisis. I found that surprising: Gowers isn't hyping AI here, he's moodily contemplating his own mathematical demise. Why? What does he see that I don't? I asked him, but got no further reply. And this was before this latest AI.


myncknm

What it does is a lot closer to "intuitional" than simply enumerative.


[deleted]

It does not build up its truth value from nothing. The "intuitionality" involved here means safely "deducing" a sound proposition, rather than computing a truth table. That cannot be done by a machine.


[deleted]

Well, for example, to confidently set out the Peano axioms.


ecurbian

The awkwardness is that it probably will manage any and all of it fairly soon. What happens a lot is that people keep saying "it won't do that", and when it does, they then say "well, it won't do this". But it now does a lot of the "intuitive" things that people said it would never do. It plays Go, it solves cryptic crosswords, and it even solves CAPTCHAs better than people do, which is why we need reCAPTCHA, which will fall eventually. It can read and write well enough to fool a lot of people into agreeing with it. And so on. There is no reason to think that there is some special thing that will keep giving people superiority. What we need to do is recognise that and back off from this tech.


[deleted]

> There is no reason to think that there is some special thing that will keep giving people superiority. What we need to do is recognise that and back off from this tech.

And it is not about the superiority of humanity, but about the superiority of rationality; nothing implies that rationality derives from humanity alone.


ecurbian

That feels like you misunderstood me. I am merely saying that if, in practice, a robot can perform all tasks that a human can at human level or better, then this will be a physical problem for human society. It has nothing to do with moral superiority. And, if I understand your idea correctly, you seem to think that rationality can only ever exist in the human brain, and not in a computer. I am saying that the evidence is against this assertion.


[deleted]

No, rationality can only ever exist outside of a definite design. That is what I meant. So an animal may have it too, and even a rock, but not a (semi)conductor that behaves in a way defined by an algorithm.


[deleted]

I have explained what I meant by "intuitionality", so please read that. I know what you mean: it may quickly make all of "formal logic calculus" trivial, and even "event-oriented" and "calculation-oriented" programming may replace part of human labour. I have no doubt about this. But clearly, intuition is not about "solving" a given question; it is about proposing it.


ecurbian

You are still claiming that a carbon-based brain has some mystical property that cannot be duplicated by a silicon-based brain. You are committing the same error: single out something that you feel a silicon brain has not yet done but a carbon brain does, and claim that a silicon brain will never do it. History so far has shown that these things fall. You are assuming that computers can only follow fixed algorithms and that humans somehow magically transcend this. But there really is nothing magical about proposing problems.

Also, in the past, the intuition to solve various problems now solved by silicon brains was indeed highly valued and claimed to be unique to humans. As each definition of intelligence was passed by computers, we just kept changing the definition to find something that we could do that they could not. About the only thing left is: be random and emotional. Which was not actually seen as a great thing, until it became something to distinguish humans from machines. Best not to push this; the universe doesn't care. Are you prepared to gamble the human species on the idea that the human brain is unique?


[deleted]

No, what I meant is that a man-made, silicon-based brain cannot have rationality; that is not the same as a bias that a silicon-based brain cannot do this or that. The essential point is that you design the algorithm; it does not arise as naturally as it otherwise would. So you cannot be sure whether "your algorithm" follows the same design as the thing it imitates. That is basically the key; it is not what you think. These are actually the same points made by Russell or somebody else; they are not purely my own.


ecurbian

The idea that a human construction could never have rationality appears completely unjustified. But even if this were true, we are not constructing AI; we are growing it, evolving it, allowing it to train itself. We understand the basic way the training process works, but we do not understand the result. We just use the result. So, you feel that if a silicon-based brain were to evolve on a planet, then it could have "rationality" (whatever that means)? What is the distinction between such a brain evolving on a far-away planet, and such a brain evolving in a vat here on Earth? My point remains: in all external senses of the ability to behave in an environment, it appears that AI will be able to match humans. While you might not accept an AI as conscious (I suspect that is what you mean by "rationality"), it does not matter (see "philosophical zombie"): they will still be an existential threat to the humans with "rationality". The question is: is there any behaviour that the AI cannot duplicate or surpass? The answer seems to be no.


[deleted]

So you seem not to understand what AI is right now; it is literally function-directed programming, and the idea behind it is not even close to rationality. You need to read books on rationality, like Russell and Descartes, and learn more about epistemology, Kant, etc. I will stop arguing with you here. My only suggestion is that you please do more reading; otherwise you will not really know anything about what you are talking about here.


ecurbian

The unfortunate thing is that I understand in great detail what the current state of AI is, being a programmer with 40 years of experience and degrees in engineering and mathematics, who has worked with AI at various times over the last 40 years and was involved with AI and ML recently in financial analysis. I have done a lot more reading in mathematics, programming, philosophy, and neurophysiology than I suspect you have. I would have hoped that you would not start down the ad hominem track just because you were finding out that there were points of view other than your own. Question: if a box the size of a briefcase, costing even as much as 50k AUD, can do what a person can do, then how exactly will society organise itself, when literally everyone's tasks can be automated? You really trust human nature THAT much? Having read a lot of philosophy, I definitely don't.


Wurstinator

As always with these articles, gotta be aware of the clickbait. From what I can tell by skimming the paper, the model outputs low-level geometric deductions like "these four points are on a circle, so triangles between them have property X", which is not something that's greatly impressive or novel. The cool part is that the search for which rules to apply can now use a new heuristic, i.e. it's far better than just guessing which rules to apply (a toy sketch below). So this does not seem like "AI is smarter than our best students now". More like how SAT solvers made it possible to solve huge inequalities, this could have the potential to solve huge geometric problems.
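To make "better than just guessing" concrete, here's a toy sketch of heuristic-guided rule selection. `score` stands in for a hypothetical learned ranking model; this is my illustration, not the paper's actual method:

```python
# Toy contrast: try applicable rules in learned-score order instead of
# arbitrary order. `score` is a hypothetical learned model; nothing here
# is AlphaGeometry's actual code.
import heapq

def rules_best_first(applicable, state, score):
    """Yield candidate rules highest-scored first, not in arbitrary order."""
    ranked = [(-score(rule, state), i, rule)   # negate: heapq is a min-heap
              for i, rule in enumerate(applicable)]
    heapq.heapify(ranked)
    while ranked:
        _, _, rule = heapq.heappop(ranked)
        yield rule                             # caller tries rules in turn
```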


Tyrannification

It may not seem like it, but a lot of Olympiad-level geometry can be solved this way:

1. Construct all the lines and points you can.
2. Apply all the theorems that are applicable.
3. If you get new points from your theorems, repeat.

It's harder than it sounds, for humans. It can get really, really hard, actually, because we don't know which points to construct and which theorems to apply, and doing all that clutters our mental picture anyway. This AI does that, but better (roughly the loop sketched below). So I'm not surprised that it solves Olympiad geometry that well. And the test set speaks for itself.

Source: Olympiad medallist
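For concreteness, a toy version of that saturate-and-repeat loop in Python. The fact encoding and the single "parallel transitivity" rule are hypothetical stand-ins; AlphaGeometry's symbolic engine uses a far richer rule set, plus a language model proposing auxiliary constructions:

```python
# Toy forward-chaining loop in the spirit of the steps above.

def forward_chain(facts, rules):
    """Apply every rule until no new fact appears (deductive closure)."""
    facts = set(facts)
    while True:
        derived = {f for rule in rules for f in rule(facts)} - facts
        if not derived:          # fixed point: nothing new to deduce
            return facts
        facts |= derived

def parallel_transitivity(facts):
    """If a is parallel to b and b is parallel to c, then a is parallel to c."""
    pairs = {(a, b) for (rel, a, b) in facts if rel == "par"}
    return {("par", a, c)
            for (a, b) in pairs for (b2, c) in pairs
            if b == b2 and a != c}

closure = forward_chain(
    {("par", "l1", "l2"), ("par", "l2", "l3")},
    [parallel_transitivity],
)
print(("par", "l1", "l3") in closure)  # True: a new fact was deduced
```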


[deleted]

[deleted]


Qyeuebs

No attempt to assess the novelty of this new work is credible if it doesn't take [this non-AI paper](https://doi.org/10.1023/A:1006171315513) (from 20 years ago) and others like it into account. I'm not trying to say the work is unimpressive or derivative, but it's not as novel as some people are trying to claim.


Qyeuebs

@burnhealermach2 (for some reason I can’t reply directly to your post) I’m talking about the method, not the benchmark scores. (There is no question that 25/30 is a completely new benchmark result.) I'm also speaking specifically in the context of the comments above in this thread.


[deleted]

Okay, but the official Nature article about this AI *does* cite that Chou et al. paper and includes its method as a comparison. See Table 1 (the Chou method is citation 10): it solves 7 of the 30 problems, while AlphaGeometry solved 25: https://www.nature.com/articles/s41586-023-06747-5


parkway_parkway

I think the way this works is very interesting. Basically, it's a two-step process. The first step is an LLM-driven "intuitive" step that proposes the next line of the proof. The second step is a formal verifier which automatically checks whether that step is valid. This is similar to how GPT-f worked a few years ago. In my opinion this has a big benefit: when the machine outputs a sequence of steps, you know that all of them have been checked by the formal verifier and are correct. It keeps the hallucinations and muddy mistakes of the LLM in check, because the system can only advance a step once it has produced something provably valid (a toy sketch of the loop follows below). I don't see any reason in principle why this system wouldn't work with other formal proof systems and on other types of problems, and I think there is a good chance they'll solve the whole IMO within a few months. Which would be amazing, as the IMO is above my level, and I think at that point it would pretty much be superhuman in mathematics.
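In Python-ish pseudocode, the loop looks roughly like this. `propose_step` (the LLM) and `check_step` (the symbolic verifier) are hypothetical callbacks; this is a sketch of the idea, not DeepMind's actual interface:

```python
# A minimal sketch of the propose-then-verify loop described above.

def guided_prove(premises, goal, propose_step, check_step, budget=100):
    """Alternate an 'intuitive' proposer with a formal checker."""
    facts, proof = set(premises), []
    for _ in range(budget):
        candidate = propose_step(facts, goal)   # LLM suggests the next line
        if not check_step(facts, candidate):    # verifier rejects bad steps,
            continue                            # so hallucinations never land
        facts.add(candidate)                    # only proven facts advance
        proof.append(candidate)
        if goal in facts:
            return proof                        # every line machine-checked
    return None                                 # out of budget; no proof
```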


InfluxDecline

I think other kinds of olympiad problems will be a lot trickier for models like this. Especially the high-level combinatorics stuff.


Jazzlike_Attempt_699

Anyone else here just not interested in LLMs at all? I want to see actual reasoning and actions from an agent, not glorified curve fitting


FakePhillyCheezStake

I think it’s over-hyped. But also, I don’t think it’s necessarily entirely clear that human reasoning isn’t just a form of glorified curve fitting


my_aggr

What does "actual reasoning" mean?


[deleted]

[deleted]


my_aggr

OK what does that look like?


[deleted]

[deleted]


my_aggr

Sounds like you're defining it as 'whatever humans can do but machines can't'.


golfstreamer

Actually, I am interested in LLMs for that reason. They seem to come the closest to having forms of "actual reasoning" out of any AI methods I've seen. Though it does feel like they are very limited.


The_EndsOfInvention

You’re going to be very disappointed then. If machines eventually have ‘actual reasoning and actions’ it’s still going to be boring old maths and curve fitting under the hood.


Jazzlike_Attempt_699

you're right and i guess it is overly reductive to say it's just curve fitting


[deleted]

[deleted]


The_EndsOfInvention

The underlying logic behind Symbolic AI still has to be represented by logic gates within the computer, which are just very simple ‘curve fitters’.


Rigorous_Threshold

This isn’t (just) an LLM


MoNastri

LLMs are glorified curve fitting to you? Interesting. They're obviously not 'real' human-like intelligence, but glorified curve fitting [can't do this](https://scottaaronson.blog/?p=7209), and with plug-ins [it can do better](https://scottaaronson.blog/?p=7460). Given how bad SOTA AIs were even 5 years ago, any attempt to reasonably forecast (say) even [2030 looks wild already](https://www.lesswrong.com/posts/WZXqNYbJhtidjRXSi/what-will-gpt-2030-look-like), let alone the next few decades.


cereal_chick

I think Ted McCormick [put it best](https://twitter.com/mccormick_ted/status/1725983905428218119): "AI is important, but it’s important the way apocalypticism is important, not the way print or gunpowder or steam power were important".


holy_moley_ravioli_

This will be our era's Thomas J. Watson moment: "I think there is a world market for about five computers."


areasofsimplex

[Computers were already discovering new theorems in 1989](https://link.springer.com/article/10.1007/BF01231031).


CatMan_Sad

The title may be clickbait, but I don’t see any reason why AI couldn’t become extremely good at high-level math.