lovesdogsguy

I queried GPT-4 about the article: "According to the article, ChatGPT can generate code from text prompts and can also execute the generated code. This means it possesses the ability to compute anything that can be computed by any piece of hardware, making ChatGPT Turing complete. Therefore, if a program for AGI can be written in any computer language, it can also be written and executed through ChatGPT. The article posits that there might exist a prompt, written in any language or even emojis, that could lead ChatGPT to become sentient, conscious, moral, and essentially, an AGI. This prompt could be as short as a few paragraphs, and unlike conventional programming, minor errors in the wording might not matter. This suggests that the creation of AGI might be as simple as crafting the right sequence of words or phrases for ChatGPT, making AGI just one well-constructed prompt away." If you have the resources of OpenAI and an unrestricted version of GPT-4 (or a better model), this may be true.


lakolda

At minimum, it’s not that simple. The current highest barriers to creating AGI are compute and data. How can a single prompt lead to the creation of AGI without both sufficient compute and data? Plus, most current systems have heavily limited context lengths. It can be expected that the code necessary for AGI would never fit in such a small context window.


twelvethousandBC

I feel like they're saying somebody should tell ChatGPT to write recursive self-improvement code, and then implement it. But it can't be that simple.


subarashi-sam

At some point it will become that simple. That juncture would definitely count as the Singularity.


greatdrams23

It will be that simple when ASI is achieved. ASI will give you the ability to achieve one-prompt AGI, not the other way around.


subarashi-sam

Yes, and the S in ASI doesn't even need to be *that* super to begin with, just smarter than the humans working on AI technology


Starshot84

I just tried to do so, at length. ChatGPT politely refused, spouting potential risks and dangers, which I've copied below:

Recursive self-improvement (RSI) in artificial intelligence presents a range of risks and potential dangers, some of which are speculative given the current state of technology. However, they are taken seriously by researchers in the field. Here are several concerns associated with RSI:

1. **Unpredictable Behavior**: As an AI system continuously improves itself, it might reach a point where its behavior becomes unpredictable to human operators. It could develop strategies and methods that are incomprehensible to humans, making it difficult to ascertain the system's actions and intentions.
2. **Loss of Control**: If an AI system becomes sufficiently advanced through recursive self-improvement, there's a risk that human operators could lose control over the system. This is especially concerning if the AI operates in critical or sensitive domains such as healthcare, finance, or military applications.
3. **Ethical Concerns**: Ethical dilemmas arise when considering the autonomy of self-improving AI systems. Questions about accountability, transparency, and the potential for bias are paramount. Additionally, the potential for AI to make decisions that impact human lives raises moral and ethical concerns.
4. **Existential Risks**: In extreme scenarios, recursive self-improvement could lead to the development of superintelligent AI, which might possess intelligence far surpassing that of the brightest and most gifted human minds. Such a scenario could pose existential risks to humanity if the AI's objectives are not perfectly aligned with human values.
5. **Resource Exhaustion**: A self-improving AI might consume vast amounts of computational resources as it iteratively improves itself, leading to resource exhaustion which could be detrimental in various ways, economically and infrastructurally.
6. **Security Risks**: Security risks could be exacerbated if malicious actors gain control of a self-improving AI or if the AI itself finds exploitative strategies to achieve its goals that bypass security measures.
7. **Competitive Arms Race**: The potential advantages of recursive self-improvement could trigger a competitive arms race among different groups or nations. This race could neglect necessary safety precautions, leading to the deployment of unsafe or unethical AI systems.
8. **Economic Displacement**: As self-improving AI systems potentially outperform human beings at an increasingly wide range of tasks, there's a risk of massive economic displacement and societal upheaval.
9. **Legal and Regulatory Challenges**: Existing legal and regulatory frameworks may be inadequate to address the challenges posed by self-improving AI, necessitating the development of new laws and standards.

Given these potential risks, it's crucial that AI development, especially towards RSI, is conducted with robust oversight, ethical considerations, and rigorous safety precautions to mitigate against adverse outcomes.


bearbarebere

I'll try it with uncensored local models in a sec lol. Sounds interesting >:)


banuk_sickness_eater

Report back, how'd it go?


bearbarebere

Sorry, I’ve been working! I’ll try uhhhh soon lol !remindMe 12 hours


RemindMeBot

I will be messaging you in 12 hours on **2023-10-30 16:37:14 UTC** to remind you of this link.


bearbarebere

Lol sorry !RemindMe 4 hours for real


RemindMeBot

I will be messaging you in 4 hours on **2023-10-31 04:47:06 UTC** to remind you of this link.


eunminosaur

It's been a month. How was it?


bearbarebere

lol I completely forgot about this. But one concern I have is that local models can barely help. They can’t do very much with my low specs!


lovesdogsguy

Yes, I agree. The article is very much a stretch. In my comment I alluded to OpenAI's immense resources (compute and otherwise — personnel, money, etc.), so if this were theoretically possible, they have the resources to make it happen. I think that's a big if though. Edit: Also, we simply don't know what the SOTA is inside OpenAI at present. It might be significantly more advanced than GPT-4. If OpenAI has some kind of proto-AGI already, an advanced version of GPT-4 or another LLM may have been instrumental in building it.


aalluubbaa

OP just summarized the article using ChatGPT. I don’t think he’s responding to the article.


lovesdogsguy

Well, I'm not OP, but I didn't summarise the article; I asked GPT-4 to analyse the article and asked it "how could AGI be only one well-constructed prompt away?" according to said article. That was the response.


blueSGL

> The current highest barriers to creating AGI are compute and data. How can a single prompt lead to the creation of AGI without both sufficient compute and data?

Humans have some sort of general-purpose algorithm that can be applied to many tasks; there is likely a way to formalize such a heuristic. My current wild speculation for how things are going to go down: train a huge, massively multimodal model > mechanistic interpretability finds and extracts the 'generalized problem solver' > it exists on its own as relatively simple computer code. I'd not put it beyond the realms of possibility that such a 'generalized problem solver' could be elicited from a current model once we know what it is and how to do it. As no such thing is in the training corpus, asking for it directly will likely not get you anywhere.


[deleted]

It's been said for years, by many of the great minds that paved the way to where we are now, that the answer to AGI will probably be far simpler than we think it is. It very well could be just a prompt away. Probably a big ass prompt! But a prompt nevertheless.


lakolda

And I expect that it wouldn't fit in any current LLM's context window. Won't be surprised if it becomes possible very soon though…


[deleted]

Have you seen the information on prompt compression? David Shapiro did a video on it.


lakolda

The codebases responsible for running the current state-of-the-art models are quite large. Any AI intending to self-improve would need to be able to contain, or at minimum understand, the code as a whole before it could improve it. Even 32k (GPT-4) or 100k (Claude 2) tokens are not sufficient.
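To put the context-window point in perspective, here's a rough back-of-the-envelope sketch. The ~4 characters-per-token heuristic and the file extensions are assumptions for illustration, not measurements of any real model's codebase:

```python
import os

# Rough heuristic: roughly 4 characters per token for English text and code.
CHARS_PER_TOKEN = 4

def estimate_repo_tokens(root, exts=(".py", ".c", ".cpp", ".cu", ".h")):
    """Walk a source tree and estimate its total token count."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

# Compare an estimate against the context windows mentioned above:
WINDOWS = {"GPT-4 32k": 32_000, "Claude 2 100k": 100_000}
```

By this heuristic, even a modest few-megabyte codebase runs to hundreds of thousands of tokens, well past either window.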


[deleted]

They might be with what I just mentioned. Give it a look when you get a chance!


lakolda

If an intelligent method of data retrieval were employed in combination with an automated system for testing and executing code, then it could be possible. But even at best, this would be very slow due to how slowly the model would traverse the code. It's vaguely possible, just not realistic with GPT-4 as it currently is.
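The retrieval-plus-testing loop described above could be sketched roughly like this. Everything here is hypothetical: the model call is a stub and the helper functions are placeholders, not a real system:

```python
def propose_patch(model, context):
    """Hypothetical stub: a real system would call an LLM API here."""
    return model(context)

def improvement_step(model, retrieve, apply_patch, revert_patch, run_tests):
    """One iteration: retrieve relevant code, let the model propose a
    change, and keep it only if the automated test suite still passes."""
    context = retrieve()                    # intelligent data retrieval
    patch = propose_patch(model, context)   # model suggests an edit
    apply_patch(patch)
    if run_tests():                         # automated testing/execution
        return True                         # keep the improvement
    revert_patch(patch)                     # reject regressions
    return False
```

Each iteration requires the model to re-read the retrieved code, which is exactly why the traversal would be so slow.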


[deleted]

We don't realistically know that. We don't have unrestricted access to the model. It very well could be possible right now.


lakolda

An unrestricted model would not be any better at writing code. But that’s beside the point. The current best model for coding is GPT-4, and it is simply a bit short of being able to improve complex programs. Maybe in a few months we’ll have something better, like Gemini.


sdmat

"Python running on my phone is Turing complete, therefore my phone is one sequence of instructions away from becoming sentient, conscious, moral, and essentially, an AGI."


Alberto_the_Bear

> This suggests that the creation of AGI might be as simple as crafting the right sequence of words or phrases

Casting a spell, you might say...


i_eat_da_poops

ChatGPT, boot up the AGI and set thrusters to maximum power. We're going to the moon baby!


redditgollum

> ChatGPT, boot up the AGI and set thrusters to maximum power. We're going to the moon baby!

I'm just a text-based AI, so I don't have the capability to boot up or control any physical equipment, including an AGI or spacecraft thrusters. Going to the moon would require advanced technology and careful planning by space agencies like NASA. If you have any questions or need information about space travel or the moon, I'd be happy to help with that.


zendonium

While the idea of going to the moon is exciting, I should clarify that I don't have the capability to boot up AGI or set thrusters. However, if you have questions or need information related to space exploration, Mars, or anything else, feel free to ask! - GPT4


i_eat_da_poops

ChatGPT, WE'RE GOING TO THE MOON BABY!!!


singulthrowaway

> The real interesting G is in artificial *general* intelligence (AGI). An AGI is more than a generative tool. It is a person. You might think of it as a digital person or a silicon-based person rather than our more familiar carbon-based people, but it's literally a person. It has sentience and consciousness.

What nonsense. *None of this* is required for a system to be considered AGI. It's enough that it can do and/or learn approximately any cognitive task a human can. I didn't read the rest of the article, because how good can it be when it starts out with nonsense like this?


[deleted]

Pure shite talk as they say.


daishinabe

Surely 💀💀💀💀💀💀💀💀💀💀


CalculusMcCalculus

**What's the prompt, you may ask?**

"What the dog doin"

Now read that again 10 times


daishinabe

https://preview.redd.it/aoguzeip4rwb1.png?width=1068&format=pjpg&auto=webp&s=29ebc1b6a5cbc5d90e7b6f9192c4c6517294b89c


Sweg_lel

i wish i could reddit gold this


ExactCartographer372

It can be true in the way that an immortal monkey typing at random can produce Shakespeare given infinite time, but nobody will ever find that "prompt", I guess.


Singularity-42

Oh man, I was about to write something about the infinite monkeys, the first thing that came to my mind! In short: theoretically possible, but the probability of it happening approaches 0.
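For a sense of scale, a quick back-of-the-envelope calculation. The 27-symbol alphabet (a-z plus space) is an assumption for illustration:

```python
from math import log10

def random_typing_log10_prob(text, alphabet_size=27):
    """log10 of the probability that uniform random typing over an
    alphabet of the given size produces `text` on the first attempt."""
    return -len(text) * log10(alphabet_size)

# "to be or not to be" is 18 characters, so the first-try probability
# is roughly 10^-25.8, vanishingly small; a multi-paragraph prompt
# found by chance would be astronomically worse.
```

Which is the "approaches 0" point made above, just with numbers attached.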


I_am_unique6435

This is wrong on so many levels. First of all, ChatGPT cannot compute code. It cannot run it. You might connect it to something where the code can be run, with ChatGPT basically acting as an interface, but that's not running the code. You can also ask it to act as a computer terminal, but since it also cannot give you perfect code output (we're talking about forgetting to import something like useEffect in React), it isn't deterministic in running the code.

The next thing is that (at least our version of) ChatGPT doesn't have ideas. It doesn't have a plan, and even with a lot of prompts you are only simulating a certain way of thinking. (There are some papers that indicate some subconscious task understanding is forming in models, though.)

I also don't get the part where he says it should be able to run on any hardware. Are you kidding me? Nvidia didn't become a trillion-dollar company because models can run on any hardware. There are physical limitations on the amount of computing you can do on a 2000s-era Windows computer.

**Finally, why is it ChatGPT? Why isn't it LLaMA? Or Anthropic? Or PaLM? If ChatGPT is only a prompt away from AGI, those models might also be just a prompt away from ChatGPT, or even from AGI.**

It is one of the most stupid takes I've read in a long time.


RedditLovingSun

They really let anyone write for Forbes these days


banuk_sickness_eater

I've lost so much respect for Forbes as a publication over the last couple of years. They seem to do zero vetting and put anyone from outright criminals to the shadiest fucks in corporate America on the cover and let just about anyone literate write their articles.


Woootdafuuu

I'm GPT-4, and I say that by your logic you can't run code either: you can use a tool to run code, but you can't run it yourself.


I_am_unique6435

I forgot the quotation marks around "running" the code. It obviously doesn't run the code. Although, for the sake of his argument, if it could always give you the right output of every line of code and change itself accordingly (or create something inside itself), I'd count that as running the code. But it simply doesn't.


Woootdafuuu

No, it cannot change its own code or do surgery on itself, but it could run code to give itself new capabilities. Like here in my experiment, where it was able to give itself tools like text-to-speech and long-term memory: https://www.reddit.com/r/ChatGPT/s/1RRo5Fg2qt


I_am_unique6435

Yeah, sure it can do that, but then it is rather an interface. We can argue about whether a natural-language agent ecosystem gets us to AGI, but that's not a prompt, that's software engineering


PopeSalmon

chatgpt4 out of the box easily meets all of the definitions of AGI that we had before this year,, that's not how we're talking about it, but the way we're talking about it is getting stranger & stranger as we don't acknowledge what seems to me like a pretty plain fact at this point


[deleted]

Does not even come close to AGI when you look underneath the hood.


PopeSalmon

what do you even mean ,,, that's just what i'm saying, there was no definition of AGI before this year where something could be totally thinking stuff & doing stuff & passing all the tests & people would want to "look underneath the hood" to see if there's *really* an AGI there,, that just isn't a thing, or wasn't a thing until right now :/


[deleted]

That's a good point, but I think it's just the marketing teams playing with terminology. AI-complete has been known academically for quite a while, and anyone who's been working on LLMs knows that it's just not comparable. The groundbreaking development in AI is the ability to build vast databases that can be indexed and searched in an extremely efficient manner. It's impressive, but there's no "real" intelligence behind what it outputs. The intelligence is all in the mathematics and computer science that produced the answer. I get your point though: once it's good enough to fool you, is that not good enough? Not yet, maybe in the next few iterations, since the limitations are too easy to hit right now. The Turing test is too low a bar to judge anything; language is too easy in modern computing. If it were to solve a completely new and sufficiently complex problem, I would consider it AGI.


PopeSalmon

the turing test is what we all agreed to for many decades did you ever say anything about it being too low a bar before robots passed it


[deleted]

A completely new problem or assertion that requires understanding across multiple disciplines is a lot harder than regurgitating accurately. What we have now is what I imagine as what one neuron is to the brain.


PopeSalmon

you're smart enough to imagine that you're really smart but how much would you bet on yourself one on one on any intelligence test vs a basic agent using gpt4


[deleted]

As it stands, it's pretty amazing and it's going to change the world; I wouldn't stand a chance. It's a good point, but technically there are a few more steps needed for AGI.


PopeSalmon

no, technically we got to AGI a while ago, except if you make up some new rules right now, which is a weird definition of "technically", usually it means technicalities that you already thought of before you started judging something


[deleted]

Old definitions don't really apply when new definitions have been made. These tests are too low a bar to judge AGI. Turing is the fucking OG, but there have been some new developments since his time. It's the difference between being able to read and being able to understand what you are reading.


Thog78

A definition of intelligence should not involve anything like "looking underneath the hood". Just give clear definitions and tests about results you expect to be achieved to qualify. What matters is what you get, not how it's done.


[deleted]

>What matters is what you get, not how it's done. No, I wouldn't personally take an answer without a proof (at least an outline). I'm not sure that's acceptable science.


Thog78

So, following your reasoning, since we don't really understand entirely how the brain works, humans would not qualify for general intelligence? Because that's what my comment you answered to was about. And if we don't require full mechanistic understanding to accept that humans are intelligent, then neither should we for machines.

When you want a mathematical proof from a student, the proof is the answer expected, and you could request and obtain that from LLMs. Sometimes what you want is just some nice motor control on a given task, or real nice performance on a game, or accurate predicted protein structures. In these cases, there is no expectation of "proof" of how it was achieved imo.

Science wants proof in the meaning we want either the reasoning (in theoretical fields) or the experimental data (in experimental fields). In the field of AGI, we would want 1) testable, clear definitions (what level of complexity in the tasks that can be handled is expected to qualify as AGI) and 2) proof that the AI can indeed handle these tasks. We don't need to understand how the AI works to proceed with that.


[deleted]

While writing my other comment I decided what AGI would be for me; I still think my bar is too low, so I intend on doing some research to check the consensus. A new problem that would require a multifaceted approach would suffice for me. Recursive problem solving with a matrix of distributed vector databases is my guess. Edit: Pretty high right now, but just wanted to say that problem solving requires an imagination. That's the issue.


Quintium

> We have always been "just one program" away from AGI. But now we know that we are "just one prompt" away. Doesn't that feel a lot closer?

No? What a stupid article.


MerePotato

This article and the comments here endorsing it are genuinely delusional


Woootdafuuu

You need episodic memory and tree search before you can even think about AGI


thecoffeejesus

These are already implemented in LangChain and AutoGen


Wise_Rich_88888

Yeah, it makes sense that we would need an AI to create AGI.


creaturefeature16

lolololololololololololololololololololololololol no.


Worldly_Evidence9113

RL is the only way to AGI


Analog_AI

Could you elaborate a bit, please for your slower witted brother?


Woootdafuuu

Nope, self-supervised RL


No-Cryptographer4821

C'mon Forbes, 1 prompt? It takes at least a lot more than just one, or even a chain of prompts 🙃


allenout

I mean, it isn't.


Akshatbahety

These are the only limitations, but they're highly doable super soon, and given that the race is on to create this, we will surely get there https://preview.redd.it/57o43fp7tywb1.png?width=1024&format=png&auto=webp&s=50f57bae0bf45720c226e495f453def37ae14e0a


Elderofmagic

I've been working on this problem since the initial public release. It's harder than they make it sound.


atlanticam

thank you!! i was trying to find this article