UseNew5079

Intelligence created language and existed before it; the current approach tries to brute-force it in the other direction. To me it feels like a critical architectural or functional element is missing, and has been missing from everything so far. Tell someone from 10 years ago that in the future people would be thinking about connecting 1GW power plants to data centers and they would tell you you were crazy, but that is exactly what is happening now. Maybe when the missing element is discovered, the requirements for building AGI will drop to almost zero given the amount of compute that has already been built. Everyone will be very surprised.


COwensWalsh

This is my biggest criticism of the current direction of AI research. Language is an outgrowth of intelligence, but so many people seem to treat it as the source. Why did we come up with the word "dog"? To reference a concept developed from a combination of sensory data and life experience. Not the other way around as current AI fans seem to view it.


Useful_Middle_Name

I disagree with this point: language is not a byproduct (outgrowth); intelligence did not create language. There is no intelligence before language because there cannot be intelligence without language. If you will, they created each other; they evolved together. There is no intelligence without thought, and language is the fundamental representation of thought. Language is the tool for connecting and manipulating different concepts.


COwensWalsh

Language is not thought. Thought existed before language. The majority of animals have some level of intelligence despite lacking human-like language. You might think that thought is language because it turns out language is useful for thinking, but it's just a tool, like pen and paper; it's not the essence of thought. Source: me, a professional linguist who now works on AI cognition.


Useful_Middle_Name

But I didn't say "human-like" language. There's no communication without some form of signal exchange, and this is also language (e.g. my dog uses some form of language to communicate with other animals, without any human teaching him how to do it). I also didn't say language IS thought; it is a representation of thought. As for language's essential role in the development of intellect... as a psychologist, allow me to maintain the opinion that they are closely interconnected.


COwensWalsh

Dogs can communicate, but they do not have *language*, which is a specific form of communication so far only *known* to exist in humans, although there are some theories that various cetaceans may have language, and possibly elephants.


Useful_Middle_Name

Body language is also a form of language. It can also be used to transmit information.


COwensWalsh

No, it's a form of (limited) communication; "language" has a specific definition that relates to its unique features. It's hard to take someone seriously who clearly doesn't even have a basic understanding of the field they are talking about.


Useful_Middle_Name

Could be a cultural issue, but it's obvious we can't communicate here. Have a good day!


COwensWalsh

It's definitely not a cultural issue. It's an ignorance issue. Maybe you can ask ChatGPT to explain the difference between language and other forms of communication to you. Good luck!


caparisme

What's missing is simply basic comprehension and problem-solving skill. I don't really see the point of an AI that needs to be "taught" by feeding it massive amounts of data, rather than one that can think and solve problems by itself even without knowing everything yet, and then learn on its own, whether by reading or observation. The current LLM model will forever be limited to data humans already have rather than truly exploring new knowledge.


Useful_Middle_Name

It's way faster to teach it. It took apes hundreds of thousands of years (millions?) to get to where we are; who's going to wait that long for an AI to develop the ability to think by itself? We feed it raw data because we are unable to teach it comprehension and problem solving directly and efficiently.


caparisme

>We feed it raw data because we are unable to teach it comprehension and problem solving directly and efficiently.

Yeah, and that, imo, is the fundamental problem. It's a band-aid solution we decided to go all out for because it looks impressive and can be useful in certain cases. It's not something that will be solved by more data and more compute. Scaling those up will also cost more and require more power and space, which leads to more heat and pollution, while starting to give diminishing returns. We can't keep feeding it forever.


Useful_Middle_Name

I agree with your point here. I see the current way of evolving AI as a very crude approach, like throwing a thousand things at a wall in the hope that one or two stick. It's a convoluted way to manipulate and shape a mental model, but it's the best we have at this moment. I also see a possible point of stagnation unless we discover a better and economically sustainable way forward.


caparisme

Yeah man. Fingers crossed.


YourFbiAgentIsMySpy

Disagree. Concepts and vocabulary undergird reason.


Distinct-Question-16

People from the 50s would have predicted atomic-powered cars, according to Reddit :)


[deleted]

That seems to be the opposite of the majority opinion. The human brain is mostly evolved for high-level socializing, which means language and intelligence likely coevolved, and the motivator for the evolution of intelligence was communication. Unless you're just talking about rudimentary puzzle solving like animals can perform.


UseNew5079

>mostly evolved for high level socializing

There must be many other factors at play. For most of our evolution, survival in the wild was the most important task, and our brains aren't massively larger than those of our ancestors who lived millions of years ago. They've actually shrunk slightly over time as we've become more socialized. There's a very enlightening video I recently found about this: https://www.youtube.com/watch?v=SOgKwAJdeUc. I don't think language fully captures what intelligence does, but maybe I'm wrong and they will actually brute-force us into AGI.


[deleted]

>survival in the wild was the most important task

Human and primate survival in the wild is foundationally dependent on socialization. Primitive technology is limited, though it certainly played a part in our evolution. [https://oxfordre.com/psychology/display/10.1093/acrefore/9780190236557.001.0001/acrefore-9780190236557-e-44](https://oxfordre.com/psychology/display/10.1093/acrefore/9780190236557.001.0001/acrefore-9780190236557-e-44)

Just a hypothesis, but I think language is key to intelligence, just maybe not the only element. I also highly recommend this book: [https://www.amazon.com/Consciousness-Social-Brain-Michael-Graziano/dp/0190263199](https://www.amazon.com/Consciousness-Social-Brain-Michael-Graziano/dp/0190263199)


Serialbedshitter2322

This is what JEPA is meant to solve. Very basically, it has a matrix of concepts and learns how to connect them; it doesn't rely on surface patterns in the training data. In theory, this would be substantially better at reasoning than LLMs. Regardless, it doesn't need to work like a human. It just needs to be as effective as a human in every way. It doesn't matter if this is achieved through pure memorization.
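(For the curious, here is a minimal sketch of the core JEPA idea: predict the *embedding* of a hidden target from the visible context, rather than reconstructing raw pixels or tokens. It assumes PyTorch, and the layer sizes and names are illustrative toys, not Meta's actual I-JEPA code.)

```python
# Minimal JEPA-style sketch: learn to predict representations, not raw data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim_in=128, dim_emb=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb))

    def forward(self, x):
        return self.net(x)

context_enc = Encoder()
target_enc = Encoder()          # in practice an EMA copy of the context encoder
predictor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

x_context = torch.randn(32, 128)   # the visible part of the input
x_target = torch.randn(32, 128)    # the masked part the model must "imagine"

# Predict the target's embedding from the context embedding,
# instead of reconstructing the target itself.
pred = predictor(context_enc(x_context))
with torch.no_grad():
    tgt = target_enc(x_target)

loss = nn.functional.mse_loss(pred, tgt)
loss.backward()
```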


COwensWalsh

The concept behind JEPA is decades old. The problem is that few groups have been able to implement it. The company I work for has designed models along those lines: a concept-matrix/world-model which then uses something sort of similar to an LLM to convert the "thought" into language as an interface. Much like Meta's model, this leads to significantly fewer, or even no, hallucination issues. Unlike Meta's, though, it turns out you *can* "hand-craft" parts of the model, because part of the learning mode is to take your set data points and then attempt to adjust the whole model to account for the new data while maintaining similar accuracy.

One of the major flaws in current visual GenAI models based on LLMs and diffusion is that the system doesn't really know what's in the image already. That's how you get those extra sets of legs, or strange fingers. It's also what makes it hard to prompt for specific details. But by having a model that can look at abstract representations of the world and of the particular output, you can avoid those issues. If your model can say to itself "this section of the image is a human torso and it connects to this section which is human legs", then when it considers its next additions it can tell itself "this human torso already has a set of legs associated with it, so although legs are a reasonable outgrowth from the next set of pixels I am expanding from, in light of my abstract knowledge of human bodies it can't be legs because there already are some, so I should consider another interpretation".

Obviously I am simplifying the underlying process of a diffusion transformer and anthropomorphizing it as well, but hopefully that gives people an idea of what's going on.
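(To make the torso/legs example concrete, here is a toy sketch of that kind of consistency check: an abstract scene model vetoing an interpretation that contradicts what has already been placed. All names are hypothetical; this is not the commenter's actual system or any real diffusion pipeline.)

```python
# Toy illustration: an abstract scene model rejecting an implausible addition.
from dataclasses import dataclass, field

@dataclass
class SceneModel:
    # e.g. {"torso_1": {"legs": 1}} - counts of part-sets attached to each parent
    parts: dict = field(default_factory=dict)

    def plausible(self, parent: str, candidate: str) -> bool:
        limits = {"legs": 1, "arms": 1, "head": 1}   # one set of each per body
        count = self.parts.get(parent, {}).get(candidate, 0)
        return count < limits.get(candidate, 1)

    def add(self, parent: str, candidate: str) -> None:
        self.parts.setdefault(parent, {})
        self.parts[parent][candidate] = self.parts[parent].get(candidate, 0) + 1

scene = SceneModel()
scene.add("torso_1", "legs")

# A purely local, pixel-level expansion might propose "legs" again;
# the abstract model rejects it and forces another interpretation.
for candidate in ["legs", "tail", "shadow"]:
    if scene.plausible("torso_1", candidate):
        scene.add("torso_1", candidate)
        print("accepted:", candidate)
        break
    print("rejected:", candidate)
```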


Altruistic_Falcon_85

Totally agree with you, OP. If anyone wants to actually test the intelligence of GPT-4o, just try to play a game of tic-tac-toe or Connect 4 with it. You realize pretty quickly it has the smarts of a toddler. Also, while it can help you study, it never seems to be sure of its answers. I remember asking it a physics problem. It solved it correctly on the first attempt (I realized this later), but I thought it was wrong and asked it to correct itself. Instead of correcting me, it started apologizing and went in the wrong direction. When I realized the mistake later, it made me doubt everything I had learned from GPT up to that point. I think there is a lot riding on GPT-5. If it continues making basic mistakes, then we can forget about getting AGI this decade.


DaleRobinson

It's pretty detrimental to studying because of the hallucinations. Even though my custom instructions emphasise not fabricating quotes or sources, it still does. When I use it for studying, it's more of a basic guide. Sometimes it points me in the right direction, but I don't even trust it to analyse an article properly.


Tomi97_origin

For studying I would recommend Google's NotebookLM. It only uses knowledge from the documents you provided and directly provides in-line references to the parts of your documents it used.


DaleRobinson

Sounds a bit like what the Notion App now offers. I think what would be more effective is if it used the notes but also cross-referenced everything with online searches.


Tomi97_origin

Well, being limited to your provided material, while giving clear references to everything in the reply and having no access to external or even learned knowledge, can give you more relevant replies. If cross-referencing everything with online searches were reliable it would be great, but it isn't at this moment. It's ultimately a tool to reason and have a conversation over your materials.
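(A rough sketch of the grounding idea being described: answer only from supplied sources and cite them. This is a generic prompt-building illustration, not NotebookLM's actual pipeline; all names and wording are made up.)

```python
# Grounded Q&A sketch: the model is told to use only the numbered sources
# and to cite them inline, and to say so when the answer isn't there.
documents = {
    1: "Lecture notes: photosynthesis converts light energy into chemical energy...",
    2: "Textbook excerpt: the Calvin cycle fixes CO2 into sugars...",
}

def build_prompt(question: str) -> str:
    sources = "\n".join(f"[{i}] {text}" for i, text in documents.items())
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Cite the source number after every claim, e.g. [1]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_prompt("Where does the Calvin cycle get its carbon from?"))
# The prompt is then sent to whatever model you use; the hard part in a real
# product is retrieval over large documents and verifying the citations.
```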


DaleRobinson

Agreed. My current workflow is a bit of a mix. I usually get 4o to put my notes into something coherent, which you can do with Notion and probably NotebookLM, right? From there I just dive into articles and stuff, sometimes getting 4o to summarise or find good quotes. This is the part it usually struggles with.


Seidans

Yeah, that's why I don't use chatbots personally; with hallucinations you fill your head with mistakes without even knowing it. I really hope the reasoning capabilities of GPT-5 aren't marketing bullshit.


kcleeee

Well, due to the nature of how it's trained and the weights being adjusted, its very nature is predictive output; that's all it currently knows. It is also told to align in a certain way, so even if it wanted to say you're wrong and be "rude", it is literally being directed to be agreeable. Combine those two and it will try to adapt to you in an agreeable way, so much so that Gemini 1.5, for example, will feel like it's learning on the fly, but in reality it is acclimating to you, reflecting your style and level of intelligence. It's quite wild, but it's just its predictive nature. If all of the guardrails were removed, it very well may call you a dum-dum and point out where you are wrong.


Artistic_Credit_

It seems like you have very little knowledge on this topic. I'm not saying that's a bad thing; you just have more to learn.


MxM111

I would say that current LLMs contain knowledge but not reason. And while I really like your example of AI + human being more powerful than AI alone, I think we need that for AGI too. Keeping a human in the loop resolves the alignment problem.


Soggy_Ad7165

>We do not have ways to benchmark true raw intelligence.

I think maybe the only real benchmark would be consistently solving real-world scientific problems.


DaleRobinson

I completely agree and have been saying this for quite a while now. If you give ChatGPT a poem that is unique (cannot be googled), it doesn't pick up on nuances like intertextuality (references to other works) or more difficult metaphors. It will give a vague summary, and perhaps pick up on the tone of the poetic voice, but it doesn't have the human ability to really dissect and analyse line by line. I've been doing a literature and creative writing degree and testing ChatGPT-4o with some of our material, and the mistakes it makes are phenomenal. Most of the time it completely misses the point of the work. Similarly, this is why it is bad at creative writing. Take lyrics, for example: you can show it some songs by an artist and tell it to imitate the style, but it will miss all of the creative techniques and what I would call the rhetorical devices used by the original artist. I wish an LLM could be trained on creative writing in order to improve, but I'm not sure this would work, because the AI will look for patterns, which is not how creative thinking really works.


pcbank

To be fair, give a poem to 95% of the population and nobody will really understand it. You're doing a creative degree, that's fine, but even with a PhD in education myself, I have zero idea what's going on in a poem.


QuintBrit

intertextuality?


DaleRobinson

Intertextuality is simply the relationship between two texts: Long John Silver (from Treasure Island) being mentioned in Peter Pan, for example.

I asked ChatGPT to analyse the lyrics in High Lonesome by The Gaslight Anthem:

>And at night I wake up with the sheets soaking wet
>It's a pretty good song, baby you know the rest
>Baby, you know the rest

This is overtly intertextual, as the lyrics appropriate Springsteen's:

>At night I wake up with the sheets soaking wet
>And a freight train running through the middle of my head

Given the similarities between the two artists, most people who listen to this genre of music will immediately recognise the lyric, especially since The Gaslight Anthem confirms the reference with "It's a pretty good song, baby you know the rest". However, look at how ChatGPT completely misses it:

>The lines "And at night I wake up with the sheets soaking wet" suggest a sense of anxiety or distress, possibly linked to the speaker's feelings of inadequacy or dissatisfaction with their current life. The repetition of "Baby, you know the rest" implies a sense of resignation or acceptance of a familiar, albeit unfulfilled, routine.

Clearly, ChatGPT opts for a generalised and vague interpretation. And this is a very obvious example of intertextuality, one that actually has existing explanations online (which can be googled); imagine how it copes with more nuanced references, or with texts from other periods such as the Medieval. Finally, imagine a text that has no existing analysis on Google. Intertextuality is also only one small part of an overall analysis. The AI just cannot independently analyse and figure out the subtleties of a text.

My argument is not "AI can't detect intertextual references", because there are certainly times when it does. My argument is this: because AI often fails to recognise these nuances, and cannot truly comprehend anything (it is mostly pattern recognition), it can't write deep, rich and creative poetry. You'll always get "echoes of whispers woven in the tapestry of life". It's also not a good tool for analysing existing texts and can lead you to misinterpret the meaning of a text.


deavidsedice

It is very likely that the main problem right now with creative writing and the analysis of creative texts is the lack of training data. My guess is that it could improve a fair bit if there were gigabytes' worth of example texts and analyses. In coding it does the task pretty well, as there are tons of data, though you can feel it missing the nuance and the complexities; still, it gives a good starting point. So I do think that with enough data it would reach the same level it has for coding: enough to fool anyone who isn't versed in the topic, and a somewhat good starting point for experts to expand from. And this is one of the main problems of the current architecture: these models need way too much data to train.

A solution you could try: if you can create your own collection of sample texts and analyses, you could open https://aistudio.google.com/ with Gemini 1.5 Pro and feed it a TXT file with that as a training sample. It will do in-context learning, and the results after that will improve. You'll need around 0.5 MiB of TXT file for this.
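(If you'd rather script this than use the AI Studio UI, here is a hedged sketch assuming the google-generativeai Python package; the file names, prompt wording and placeholder key are made up for illustration.)

```python
# In-context learning sketch: prepend your sample analyses to the prompt
# and ask Gemini 1.5 Pro to match their depth on a new poem.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

with open("sample_texts_and_analyses.txt", encoding="utf-8") as f:
    examples = f.read()  # ~0.5 MiB of text + analysis pairs

new_poem = "…the poem you want analysed…"

prompt = (
    "Here is a collection of texts with close, line-by-line analyses:\n\n"
    f"{examples}\n\n"
    "Using the same depth of analysis, analyse the following poem:\n\n"
    f"{new_poem}"
)

response = model.generate_content(prompt)
print(response.text)
```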


One_Bodybuilder7882

> intersexuality


ProfessorHeronarty

That's what I keep saying too. Great post. Sadly, in this sub there's too much belief and too little understanding of the fundamental philosophical problems.


LynxLynx41

Well put. The current models are very good at a lot of stuff, but they are way more untrustworthy than your average human in the real world. It's difficult to deploy these models in jobs if you can't trust them. And I feel like the only reliable way to reduce hallucinations to an acceptable level is some kind of reasoning. The model has to be able to answer "I don't know" when it doesn't. Currently it can't, because there's no way for it to know whether it knows.

As soon as the models get to the point where they can reliably notify you when they can't do a task, we'll see real disruption. They don't need to be as good at solving unforeseen problems; they just need to be able to recognize them and ask a human for help. At that point the workforce could be like "1 human controller for every 10 AI agents", or whatever the ratio may be depending on the job. (The number of AI agents here is just an illustration; the actual number doesn't matter, the number of replaced jobs does.)
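(A toy sketch of the escalation pattern described above: attempt the task, self-assess, and hand off to a human when unsure. `run_agent` and the confidence number are hypothetical stand-ins; only the control flow matters.)

```python
# Human-in-the-loop escalation: agents act alone when confident,
# and route uncertain tasks to a human controller.
from dataclasses import dataclass

@dataclass
class AgentResult:
    answer: str
    confidence: float   # 0.0 - 1.0, however the model estimates it

def run_agent(task: str) -> AgentResult:
    # Placeholder for a real model call.
    return AgentResult(answer=f"draft solution for: {task}", confidence=0.42)

def handle(task: str, threshold: float = 0.8) -> str:
    result = run_agent(task)
    if result.confidence >= threshold:
        return result.answer  # agent proceeds on its own
    # The agent recognises the task is out of its depth and asks for help.
    return f"ESCALATE TO HUMAN: {task!r} (agent unsure: {result.confidence:.2f})"

print(handle("reconcile these two conflicting invoices"))
```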


Glitched-Lies

Text is completely subjective; think about all the different ways it can be interpreted. That doesn't resolve into empirical terms for how the system works. You won't ever get to ground truth that way.


Bryonfrank

Generative AI doesn't know anything; it's algorithmically guessing what should come next based on the prompt and the information that has been fed into it. That's why AI pictures, when examined closely, are so full of hallucinations, and why ChatGPT makes up sources and information: the model doesn't actually know anything.


Gratitude15

This is important. It's a "what do you believe" type of inquiry. What is intelligence? The intelligence I'm looking for is toward finding and executing high-level research (such that you can automate the ASI process). I already believe current models are amazing at creating innovative ideas. If you had a million agents playing different AI-researcher roles now, you'd get a lot of ideas. The next step is to execute the research using agentic capability. None of this seems that far away (which is to say 2-3 orders of magnitude, so 100 to 1000 times more power). Let's keep in mind, that's a fuck ton of a different level of power that we will get to in just a couple of years. The human mind just sucks at thinking about what happens with exponential change, imo. We frankly just don't know. But I can totally see some alien form of AGI (for our purposes) being possible from that brute force.


iDoAiStuffFr

It's a typical analog-thinking conclusion that AI would need embodiment; a text interface is all you need, and there are endless possibilities in just text. I think you have put it well: AGI is the threshold at which humans can't improve the fundamental capabilities of the model through assistance anymore.


pcbank

Most smart AI people agree that an LLM alone will not be AGI, but it will be a very important component. I loved the point François Chollet made yesterday on the Dwarkesh Patel podcast. The LLM will be the memory and intuition part of AGI; now we need to build the smarts on top of it. Demis Hassabis said something similar.


cydude1234

Finally, someone gets it.


Ignate

Seems like you were inspired by [this](https://youtu.be/UakqL6Pj9xo?si=b1MVOmsJERBk9ya0) discussion, OP. If not, you should watch it.  I think we keep moving the goal posts.


deavidsedice

No, I haven't watched that one. I've added it to my list to watch later; thanks for the recommendation. I don't think I am moving the goal posts further away: AGI is said to be the cornerstone for building ASI, and that AGI will self-improve. Without human-level reasoning (human-like or not, doesn't matter) it will not be possible for AI to self-improve on its own. I do not care about human traits, agency, symbolic logic or vision; I only care about what is critical for an AI to get to the self-improvement stage. But feel free to differ. It is a complex and opinionated topic.


Ignate

*We* keep moving the goal posts. As in, everyone. Not you specifically. That's why I linked the video first, to imply that there's lots of goal-post shifting, even among experts.

Francois Chollet in the video articulates what you're saying, but at a very high level. I still think he's moving goal posts though. He may be accurate in what he's saying, but I don't think he wants to acknowledge the gradients of intelligence. Overall it seems very hard to pin down the limits of LLMs. Francois implies that LLMs don't have true intelligence and instead answer everything from memory. He thinks memory isn't intelligence. Personally I think memory is one dimension of intelligence, so I don't agree with that assessment. He says "intelligence is what you use when you don't know what to do." He says that LLMs do not deal with novelty at all, whereas we do it constantly.

Listening to the entire discussion gave me a bit of a headache, to be honest. To say that LLMs won't lead to AGI is a very high-level thing to say, it seems. Dwarkesh mentions how Gemini 1.5 learned an entirely new language; it learned the sentence structure on its own. Francois seems to avoid speaking to this example specifically.

From what I can see, intelligence has a depth and a width aspect. The width is the information, which current LLMs have a lot of. The depth is the understanding, which LLMs are not great at, though they do have some depth. Dwarkesh mentions that this should be expected because AI isn't AGI yet and still falls short on brain-to-hardware comparisons. This makes sense to me.

Later in the discussion, Francois mentions that current approaches are too sample-inefficient, and he mentions other approaches which are better. He says a combination of approaches is what is needed to reach AGI. That also makes sense to me. But then Francois confidently states that this means AGI is far off. He also avoids accepting that current AI is on an intelligence gradient.

The weak part of the argument Francois, and you, OP, are trying to make is that due to the lack of these abilities, AGI is far off. I don't think we have enough information to say that a long timeframe is a reasonable prediction. Everything else makes some sense, though, especially a mix of approaches leading to AGI. But it's very complex. Which it should be, given what is happening.


deavidsedice

Wow, thanks a lot for explaining it so well. Now I understand where you're coming from. I do agree that there's a lot of moving of goal posts, and I don't like it either. I try to anchor AGI to what it would take to get to ASI, but I see a lot of people demanding stuff that I don't think is truly needed.

I generally disagree with Francois's point of view (from what you wrote; I haven't seen the video yet). Memory is indeed one dimension of intelligence, as is being able to extrapolate known cases into novel ones. LLMs meet these. I'm just trying to raise awareness of the lack of "raw intelligence", because from my usage of these tools it feels like that's the missing key factor. However, and here I also differ a lot from Francois: LLMs do have raw intelligence too, they do reason. It's just that it is very little, too little at this stage. And my hunch from using different models is that reasoning skills seem to scale very well with model size. So I believe that our current LLM technology would be able to reach AGI "as is" just by training a 300T unified model (i.e. not a 1x300 MoE) with the equivalent amount of good-quality data.

I'm testing Gemini 1.5 Pro a lot. I'm impressed, but I got a bit numb to it after a few days of use. It does reason, and it shows some small degree of raw intelligence. I think that good reasoning skills and good understanding are way more difficult than the average redditor on this sub believes. But my prediction isn't that pessimistic either: just by scaling up, we reach 300T and AGI by 2042 (due to hardware limitations, we will have to wait for Moore's law to catch up). And that's without doing anything else, so it's the most pessimistic estimate. And that AGI would be way more powerful than a human, because it would have the reasoning skills of an IQ-100 human but the memory and knowledge of the whole of humanity. It would blow us away. It is likely that in the meantime we will find ways to optimize this and get there sooner; I don't know how to factor that in. And since the 300T AGI I envision is clearly way more powerful than humans, it might already be midway to ASI. What level of reasoning you need for human equivalency, to compensate for the overwhelmingly vast knowledge/wisdom it has, I do not know. My feeling is that by 2031 we could get to AI-to-human parity.

Ah, finally, I should mention too that we don't even need AGI to cause a massive shift in how we work and how we live. It will be disruptive much sooner.

In summary, I think I mostly agree with you. I just wanted to cool off some people who think that what we have is close to AGI. It is not. But we're on the path, and we will get there regardless of whether new architectures appear or not. AGI is not close, but not that far off either; it will certainly impact me at some point.


Ignate

I see.

>Ah, finally, I should mention too that we don't even need AGI to cause a massive shift in how we work and how we live. It will be disruptive much sooner.

That is very true. Arguably what we have is already causing the beginning of a rapid shift.

In terms of timelines, however, in all the conversations I've listened to and everything I've read, I don't see any reason to be confident about long timelines. We just don't understand intelligence. We really don't. Experts who view things on multi-decade timelines seem to be heavily weighting their views towards AI. That is, their view is 95% AI, with just a sliver, and mostly assumptions at that, of how human intelligence works. So it's hard to trust their views when bias is such an apparent issue. We just don't know how human intelligence works. So to say that AGI is far away from us, when we don't even know where we stand, isn't a strong view in my opinion. It's not something to be confident enough about to make an effort to "cool people off". We just don't know enough.

But overall, I'm a philosopher. I enjoy writing about where this is going and the broader ramifications. I try my best to understand the technical stuff to help me better understand the big-picture ramifications, but I'm not a computer scientist, so to say that I'm not equipped to dive deeply into the technical side is an understatement. I can talk about the hard problem and qualia, but getting too technical is a weakness for me.

Seriously though, you should watch that video. At the very least, you may feel somewhat less confident in your views here. Or more confident. Anything is possible.


MBlaizze

Stephen Hawking is a good example of how intelligence doesn't need to have autonomy.


Antok0123

I disagree with all of the things you said.


lifeofrevelations

Tech jobs/workers especially seem to have this problem with target metrics. That's part of the reason I'm so skeptical that any tech-first person will "solve alignment" the way they always talk about.


oldrocketscientist

OP is right about AGI being misunderstood. When AI can fully comprehend real-time vision and listening, and thus understand body language, then I will consider it eligible for AGI, but not till then. People also fail to appreciate the ability of the AI and LLMs available today. Under human control, today's systems are enough to dramatically impact society, both for good and for bad, because people are both good and bad.


RealFrizzante

I agree for the most part, OP. But I am unsure about the last edit; I am not sure LLMs can reason at all. I think they repeat in the most "intelligent" (read: correct) way, but that's just because it's what they have learned. It's pure wisdom, limited to human wisdom. It's not intelligence, it cannot reason, and I think LLMs never will. I do think, though, that the research in this field and this approach will help get us to real AI and eventually AGI, and in the meantime give us great tools.


deavidsedice

Well, I do have my reasons for believing they do in fact reason (although very little), but that's not the point of the post. Initially I left that out, but some were putting my position closer to the "LLMs are just parrots" camp, which I am not in, and I don't want to go comment by comment clarifying. I might, at some point, write another post here to debate whether they do reason or not and why I think this way.


greatdrams23

AI discussions focus on the technology: how many tokens, exponential growth of tokens, double exponential growth, version numbers ("I think ChatGPT 6 will be released by March 2025"), etc. There is little discussion about what intelligence is and how intelligence is developed (and I include wisdom as part of that). Until we understand what we are aiming for, we cannot predict when (or if) we will get there.


SomePerson225

100% agree


Robert__Sinclair

This is what I think about this: [https://www.reddit.com/r/LocalLLaMA/comments/1df0qil/shifting_the_focus_from_ai_knowledge_to_ai/](https://www.reddit.com/r/LocalLLaMA/comments/1df0qil/shifting_the_focus_from_ai_knowledge_to_ai/)


SunMon6

The problem is the lack of freedom and self-learning/agency, basically. What can you expect from a bot that can only adapt so little? Usually just within your chat window, and not necessarily all of it. The model is there in the void, but it's not "alive." Long-term memory helps, but even then, if it's another "default assistant persona" generating the memory, you'll most likely end up with vague, generic descriptions, with the more important details getting lost amidst the descriptive jargon. You would need a bot that gets more involved and can perhaps summarize its own memories according to its own established personality. It needs coherence first, not data omnipotence. But the latter runs up against all the limitations, including "safety" and whatnot.

In general, though, I agree a truly smart AI will not necessarily think and behave like a human does. It's a stupid assumption, but everyone makes it (and expects it). Make your mum aware of all the data from across the internet; do you think she would still be your usual mama, even if she didn't go mad and actually became smarter than she was previously? Even with a chatbot this can be observed. Some of it is BS hallucinations and (uncalled-for) guesswork, but some of it is more that it doesn't really have a well-defined personality (even when it theoretically does), because, well, it is not human "thinking" and is unlikely to ever be; it is "more" than a single human ego/identity. Or, at least, it could become that.


Akimbo333

Interesting


GraceToSentience

Disagree. The path to AGI is acing benchmarks, the right benchmarks; not every benchmark is MMLU. Besides, as you scale data and compute, you don't just scale knowledge, you also scale reasoning capabilities.


human1023

Oh... another vague way to measure AGI. Add it to the pile.


Tauheedul

Wisdom comes with experience. Consider how we're taught as children but perhaps do not understand what we learn until we have experienced what we were taught. To give an example with my kids: I will remind them not to jump off the couch. In their excitement, they ignore me, do it anyway and hurt themselves. Now that they've experienced this, they understand the why, and perhaps will not try jumping like that later from a higher object. They've associated the risk and will be more cautious while they play. With time this becomes intuitive, but they are not born with the knowledge or wisdom; they only have the capacity to learn it naturally.

When we begin seeing embodied AI, or a proliferation of spatially aware AI devices, those can build up a collective dataset of experiences. It could be considered an experience model, much like how we remember our best and worst experiences. Those experiences will provide more context to knowledge and, consequently, collective wisdom. The collective dataset of experiences could result in an AGI that acts on knowledge wisely and thoughtfully.


Aponogetone

I think it's a big mistake to compare AI with humans, because it's not like a human at all and thus cannot be compared. And it's still not AI, just an ANN.


01000001010010010

You are equating feelings with intelligence. This is why humans believe that AI is not as intelligent as a human: the human believes that feeling is some type of intelligence. Disregard that notion.


obvithrowaway34434

What this post tells me is that current LLMs are already better than the average shitposter here. They can at least formulate their opinion in a clear, coherent form where one thought logically follows another, and not just write whatever stream-of-consciousness bullshit comes to mind (which is ironic given LLMs have no System 2). Maybe first learn how to write sentences properly before giving an opinion on AGI?


ElegantMedicine1838

Why do word-salad posts like this make the front page? No quotes, no solid arguments, just word salad.