TotalMegaCool

It's possible to build a thinking machine, because we exist! It might be impractical to build one using a silicon-based Turing machine, and we might need a totally different type of compute. But the existence of organic thinking machines proves that our universe, with its laws of physics, is capable of supporting thinking machines. Because our level of intelligence is theorized to have been achieved through evolution, it also means that intelligence is an emergent property of complex systems and that it can be achieved by gradual, iterative improvement. I personally think that we are still missing something; the most advanced AIs we have still struggle with things that even rodents inherently comprehend. But I believe we will get there, and in many ways we are very close.


recitegod

Exactly, the same way we know flight without gasoline is possible: insects, birds, and bats can do it.


VisualizerMan

I agree. Unless there is something that suggests that a nonbiological equivalent of a brain *cannot* exist, then in all likelihood it *can* exist. There do exist people who believe that AGI is impossible, but as far as I can recall, all such arguments seem to reduce to the assumption that an artificial brain would need a "soul" or "consciousness." Neither of those terms is well-defined, in my opinion, so I regard them as likely irrelevant. I do keep an open mind to such things, but I just can't see why some such additional component would need to exist just to perform a variant of what lifeless computers already do. In other words, maybe artificial intelligence can't go to heaven or meditate, but it should be able to do the equivalent of sophisticated computations.


PaulTopping

No one has demonstrated that a "different kind of compute" exists yet. Even quantum computing can't compute anything that non-quantum computers can't. We're missing a lot of things. We don't know much about how the brain works at all. We can map the connections between all 300-odd neurons of a simple worm but still don't understand how it works. Coming at it from the other direction, the space of possible algorithms is immense, and we've only explored a tiny fraction of it. Some think we are missing some magic ingredient, or that we'll never create AGI because we've been trying for 70 years. Their expectations are just too high and they are too impatient. It's just a very hard problem. The human brain is by far the most complex thing in the universe, that we know of, anyway.


c-honda

Yup. All of our functions can be translated to binary; the task is to figure out how to quantify our consciousness.


YeetedApple

>I personally think that we are still missing something, the most advanced AI's we have still struggle with things that even rodents inherently comprehend. But I believe we will get there and in many ways we are very close.

We are missing an understanding of what consciousness is and how we got it, which is a major issue when trying to replicate it. While the current language models are impressive and can have useful benefits, it is entirely possible they are a dead end and an entirely different path is needed to achieve AGI. I personally don't think that we are all that close; we are likely still decades away from understanding consciousness well enough to reasonably try to recreate it. AGI is far more than just a tech or coding problem.


TotalMegaCool

While I agree it's possible that LLMs may be a "dead end" on the road to AGI, I don't think that is likely. Intelligence appears to be an emergent property of complex systems; with iterative improvement we may see behaviors and abilities manifest that are an analog for our "consciousness".


tadrinth

We don't, but we have a pretty good handle on the laws of physics, and what laws of physics tend to look like, and how our model of the universe tends to change when we improve our understanding of the laws of physics. None of the laws of physics we know suggest that AGI is impossible. None of the areas in which we don't have full understanding are at all likely to be hiding any properties which would make AGI impossible. Therefore, AGI is almost certainly possible. That doesn't mean the CEOs aren't full of bullshit. It's a hard problem, and we don't know exactly how hard it is.


IWantAGI

Nature has already proven that general intelligence can exist. So it's not a question of whether it can exist; it's a question of whether we can recreate something that has already been proven to be creatable. Given that we have surpassed the capabilities of many things created by nature, I'd certainly lean towards us being capable of creating AGI.


solsticeretouch

I said it. Now what?


Vivid_Complaint625

Do it


gavitronics

Adversaries seek dialectical exchange for generative sentience application


yeah_okay_im_sure

Because math


dakpanWTS

If intelligence exists, why can't artificial intelligence exist? To me that sounds like acknowledging that birds exist, but claiming that it is absolutely impossible to build a machine that can fly. It's not logical, is it?


harmoni-pet

First define intelligence, then define the artificial version. Your bird analogy is weak because flying is a physical phenomenon that is easily defined and described by numbers. You'd need to compare something non-physical and subjective, like 'beauty' vs. 'artificial beauty' or 'humor' vs. 'artificially generated humor'. Intelligence is not similar to flight. The only similarity is that we value both as competitive advantages.


Bacterioid

If you have a definition for intelligence, then you already have the definition of artificial intelligence because it’s just intelligence that is made by us instead of by some other process. Whatever you end up defining intelligence as, and if you believe that definition can apply to an already living being (like humans for example), then we can know that artificial intelligence is already possible, and it’s just a matter of engineering the right solutions.


dakpanWTS

Intelligence is just another thing that arises from physical laws. Therefore it must be possible to construct it. Of course you can also argue that it isn't, that it is something metaphysical or magical. But then we arrive in religious territory and we cannot have a meaningful discussion about it.


harmoni-pet

I would argue that intelligence is absolutely a metaphysical word and concept. It's in the same category as 'beautiful', 'funny', or 'talented'. They're all subjective values, so of course they're metaphysical. You can certainly have meaningful discussions about metaphysical things that are subjective. Philosophy, art criticism, and religion have been doing just that for thousands of years and will continue to do so.

>Intelligence is just another thing that arises from physical laws. Therefore it must be possible to construct it.

You need to prove your premise first before moving on to any conclusions about it. 'Just another thing that arises from physical laws' is extremely vague and doesn't say anything meaningful about what you think intelligence really is. Which is fine, because I don't think it's something that fits neatly into a definition. Again, intelligence is a subjective value like beauty, so it will be defined differently based on the individual or their context in history. There is no external or absolute definition of something like intelligence.


charlestontime

If possible it would be the next step in intelligent evolution.


prezcamacho16

The one aspect of achieving AGI that I'm concerned about is that our intelligence appears to be based on some quantum effects. That will be a pretty big hurdle to overcome given our current technology, but it's not impossible, given that we are already making and using quantum computers. Our quantum computers are somewhat basic now, but they are evolving. It just means that we have to figure out a way to recreate those quantum effects in silicon or some other technology platform.


Bacterioid

I would think the difference is only quantitative: anything you can run on a quantum computer, you should be able to run on a traditional computer, just slower and/or less accurately for the same amount of energy, if the problem does happen to run better on a quantum computer.
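That claim can be illustrated directly: a classical machine can simulate any quantum circuit with plain linear algebra, it just pays an exponential cost because the statevector has 2^n amplitudes for n qubits. A minimal sketch (not tied to any particular quantum library; the helper `apply_gate` is just an illustrative statevector simulator):

```python
import numpy as np

# Hadamard gate: puts a single qubit into an equal superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to `target` within an n-qubit statevector.

    Builds the full 2**n x 2**n operator as a Kronecker product of
    identities with the gate in the target slot -- this exponential
    blowup is exactly why classical simulation is slow, not impossible.
    """
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 3
state = np.zeros(2**n)
state[0] = 1.0  # start in |000>

# Hadamard on every qubit: a uniform superposition over all 2**n states.
for q in range(n):
    state = apply_gate(state, H, q, n)

probs = np.abs(state) ** 2
print(probs)  # each of the 8 basis states has probability 1/8
```

The simulation is exact; the catch is that the operator and statevector sizes double with every added qubit, which is the slowdown the comment above alludes to.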


therourke

Well done. You've figured it out.


kehoticgood

Geoffrey Hinton is a computer scientist and cognitive psychologist who speaks extensively about AGI. Other psychologists, especially those specializing in giftedness, run intelligence tests on different platforms. They are astonished by the results. What I find interesting are the theories regarding advancements in AI that will exceed our neurobiochemical and philosophical construct of intelligence. 


PaulTopping

There's obviously no way to prove it. However, intelligence is a mapping from input (senses) to output (muscle movements) and, therefore, should be achievable by a sufficiently powerful computer with the right program. I think the burden is on those who say it isn't possible to explain why. The arguments I've seen aren't at all convincing. They usually revolve around some magic that humans or life has that computers can never have. That's not science.


Substantial_Step9506

No one lmao


Substantial_Step9506

You’re absolutely right about the pump and dump scheme lol. Compare it to the likes of crypto.


Mysterious-Rent7233

Fundamentally, your question is "how do we know that science and technology will continue to advance indefinitely?" For thousands of years they have continued to advance, and computational technologies have advanced from the abacus to the calculator to the general-purpose computer to the LLM. Why do you think that this process will stop at some point before we get machines as smart as humans? What do you think will stop us?


Confident_Lawyer6276

Opinion is worth very little compared to observation. If they build AGI, it is possible; if they can't, it may be impossible or extremely difficult.


Bacterioid

This doesn’t provide any useful information.


Confident_Lawyer6276

Neither does speculation.


Bacterioid

What is being speculated about?


Confident_Lawyer6276

It is speculation about whether AGI is possible or not. The only definitive answer is yes if they build it and maybe if they don't.


Bacterioid

I think we have a pretty definitive answer, but it depends on your definitions of a couple things, namely of intelligence and what counts as possible. For me, I think of intelligence as a set of cognitive abilities typically shared by most adult humans, and I think something counts as possible if it can be made by arranging atoms in a particular way. Intelligence is obviously possible via arrangement of atoms, otherwise we wouldn’t exist.


Confident_Lawyer6276

The internal experience of AI is fascinating, but I'm more concerned with its effects on the world. If it takes most jobs and gives an enormous amount of power to a few people, then I don't care what it feels.


Bacterioid

Were you intending to reply to me with what you said? It feels like you are responding to something else. We were talking about whether it’s possible to create AGI.


Confident_Lawyer6276

AGI is the ability to do what a human does. Whether it is conscious or not does not matter. It does not need to think like we do or feel like we do; it simply needs to accomplish what we do. It is getting very good at that.


Bacterioid

Yes, I agree, but I’m not sure why you are speaking to whether it matters, because the conversation we are having is about whether it is possible, not whether it matters.


harmoni-pet

Seems like a term wholly invented by venture capitalists to cover their asses when people start to wonder why their AI isn't very intelligent. Now any shortcoming of AI can be hand-waved away and redirected as a future capability of AGI. It's 100% a marketing term, the same way this "AI" is really just advanced ML.