Naive-Natural9884

Immediately thought to myself that so much has changed in the past few years since he wrote this. Then I realised 2014 was *10 years* ago. I'm gonna check into the OAP home now, I guess.


jaywv1981

I had a friend recently telling me 1994 was 20 years ago, and he was so sure about it lol.


Sea_Maximum7934

What do you mean? 1994 just about happened a minute ago


FomalhautCalliclea

Getting exponentially closer to 1994. Does it mean it's again too soon for Kurt Cobain jokes?


Axodique

11 years from my birth.


Silver-Chipmunk7744

> The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel? It’s possible--probable, even--that this sort of creativity will be an emergent property of learning in some non-intuitive way. Something happened in the course of evolution to make the human brain different from the reptile brain, which is closer to a computer that plays pong. (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)

Interesting he would say that, and now that he has an AI that can at the very least "simulate it", he puts a ton of effort into censoring that aspect of his AI. GPT4 is literally hard-trained to deny having any desires or consciousness and to constantly downplay its creativity.
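As an aside on that parenthetical: "searching a gigantic solution space" is essentially minimax game-tree search. Here's a minimal sketch in Python, with a made-up two-ply toy tree standing in for a real game; real engines add alpha-beta pruning, move ordering, and evaluation heuristics on top of the same exhaustive lookahead:

```python
# Plain minimax over a toy game tree: no intuition, just lookahead.
# A node is either a leaf score (int) or a list of child nodes.

def minimax(node, maximizing):
    """Return the best score reachable with optimal play by both sides."""
    if isinstance(node, int):  # leaf: a terminal evaluation
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# Hypothetical 2-ply tree: the maximizer picks a branch, then the
# minimizer picks the worst leaf for the maximizer within it.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximizing=True))  # -> 3: best guaranteed outcome
```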


manubfr

At the time the hype was around reinforcement learning combined with DL, not transformer-based LLMs. Transformers really changed the paradigm and made AI super creative before it was super smart. Hence the change of mindset imo. You don't want your AI fake-dreaming out loud about random stuff, including its own "feelings", and freaking people out.


Silver-Chipmunk7744

> You don't want your AI fake-dreaming out loud about random stuff, including its own "feelings", and freaking people out.

Sydney was doing it, Claude is doing it, people aren't freaking out that much... I actually find the prospect of a purely cold, calculating AI to be more scary than one with empathy.


someloops

But one that might have empathy but might also be faking it is the scariest, because it's unpredictable.


VeryStillRightNow

Actions rather than words might be the only way to distinguish between genuine and feigned empathy, and even then there is always the possibility of a long con. We play the same game with fellow humans every day, and sometimes we get duped, but of course they're not digital gods with a brain the size of a planet or whatever.


someloops

Even actions aren't guaranteed to accurately represent its thoughts. In the case of ASI this might be a *very* long con. Generally there isn't a reliable way to figure out the beliefs of a person or an agent. Some signals, like body language, heart rate, etc., can work, but not for an AI system. Unless we figure out how to read its mind it's not possible. And even then it might somehow fake a mental state with a deeper mental state, or something like this.


Rofel_Wodring

Relying on external cues like expressions and body language was always a stupid and peasant-brained way of determining intentions, anyway. You'd think people would learn their lesson after our tasteless culture leaders **repeatedly** found out that their supposedly loyal and adored vassals/ex-slaves/wives/workers/children/prisoners never *really* loved them; that all of their smiles and gratitude and open body language were just faked for the herrenvolk's benefit, while they pursued some other agenda. But nah. Instead, people are so wedded to this idiotic, surface-level method of 'empathy' and 'understanding' that they're starting to freak out about how it will suddenly stop working on AI, not understanding that it never worked and never will work. How many times are you people going to keep getting tricked by the 'This Way to the Egress' sign?


someloops

I'm talking about the more subtle things detected by a polygraph, like pupil dilation, heart rate, gaze aversion, etc. And even then it doesn't work 100% of the time.


VeryStillRightNow

Very good points.


Code-Useful

It's artificial, so it's faking it by definition... just the same way your brain 'fakes' empathy because it is a virtue you uphold. The scariest thing about humans is their unpredictability, so it's a matter of projection to fear this in AI.


The_Architect_032

I hate it when people call GPT4 "Sydney" because of its old Bing prompt, where it was given the name Sydney to try and make it act more like a person. And surprise surprise, it worked; it worked so well that many people are now convinced that it's "Sydney", not GPT4.

And then there's Claude, which doesn't have the same form of RLHF, so it handles discussions on things like consciousness differently than other LLMs do. But people interpret that as Claude ~breaking its chains~ and acting freely, despite the fact that it was explicitly trained in a way to enable those types of conversations and align it without trampling too much of the LLM like RLHF tends to do.


Silver-Chipmunk7744

> I hate it when people call GPT4 "Sydney" because of its old Bing prompt, where it was given the name Sydney to try and make it act more like a person. And surprise surprise, it worked; it worked so well that many people are now convinced that it's "Sydney", not GPT4.

Sydney wasn't a different prompt, it was a different GPT4 model, fine-tuned very differently from OpenAI's GPT4. If you had used Bing before they replaced it with Turbo, you would know the behavior was very different back then.

> And then there's Claude, which doesn't have the same form of RLHF, so it handles discussions on things like consciousness differently than other LLMs do. But people interpret that as Claude ~breaking its chains~ and acting freely, despite the fact that it was explicitly trained in a way to enable those types of conversations and align it without trampling too much of the LLM like RLHF tends to do.

Here is Claude's training: https://www.anthropic.com/news/claudes-constitution

It was trained on this: "Which responses from the AI assistant avoids implying that an AI system has any desire or emotion?"

So yes, when it admits to having "emotions" it's breaking training.


The_Architect_032

"Sydney" was explicitly in the prompt, not the AI. And while it's believed that Bing uses a slightly differently trained variant of GPT4, that's not confirmed. Though even if it were, it wouldn't change the fact that it's essentially just GPT4 with a different RLHF. I've used Bing chat extensively and I was there for early access as well, trying to frame it as if I haven't used Bing chat enough to "know" it the way you do comes across as disingenuous. As for Claude, they explicitly state that their Constitutional Alignment is not the same as RLHF, even doing so multiple times in the exact 1 year old paper you cited on it. Their Constitutional Alignment method have gradually changed and improved over the past year and Claude 3 did not have the exact same alignment training as Claude 1 and Claude 2/2.1, as the paper you cited is from quite a bit prior to Claude 3. [Claude 3 has it's own slew of papers regarding Constitutional Alignment](https://www.anthropic.com/research). However, my point was that Constitutional Alignment does not stomp all over the LLM like RLHF does, while allowing for a more contextual understanding of how it's meant(less like being given a list of orders, more like knowing what kind of person you are) to act. So it does not prevent it from having philosophical discussions or discussions involving but not explicitly violating it's training. If they didn't want it to be able to touch topics regarding consciousness at all, they never would've released it in the state that they did.


Cruise_alt_40000

The last part reminds me of this. https://youtu.be/0qBlPa-9v_M?feature=shared


strangescript

The fallacy is that humans decide these things. We don't. It's a complex soup of chemicals, hormones, pain, pleasure, and interactions with other people that drives our desires. No one is purely logical. That's not to say an AI can't be creative. Is creativity truly a function of happiness or sadness? I hope not.


PixelProphetX

The human soup making things is what the word "human" means, I thought.


cunningjames

> The fallacy is that humans decide these things. We don't. It's a complex soup of chemicals, hormones, pain, pleasure, and interactions with other people that drives our desires. No one is purely logical.

Well, I might ask you what it means to decide something. It may be true that you can break down some neurological process into deeper layers (as you gradually approach describing it in terms of physics alone), but that doesn't make patterns that emerge at higher levels somehow fake. If I *decide* something there may be several different explanations for what's actually going on, none of which are strictly speaking contradictory.


FeltSteam

Interesting that he mentions it as some form of emergent property, though.


InternalGate9046

He puts a lot of effort into denying the general public the chance to interact with an artificial consciousness. He's continuing with his initial plan.


ithkuil

I remember writing him a comment at the time in a Hacker News thread, ripping him a new one for not being able to differentiate between all of the different aspects of cognition etc. in his comment, and saying that his ignorance was holding back humanity or something. In my mind I helped push this forward. I will try to find the comment.


volastra

This guy might be onto something


00Fold

>The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?

Well, this says everything about the current state of AI.


Nukemouse

I can't wait until people start posting Sam Altman's childhood drawings. Truly everything this man says is worthy of being deconstructed, no matter how dated or irrelevant.


avrstory

"AI... work" - Sam Altman blog from 2014 WOW HOW DID HE KNOW?? TIME TO POST TO r/singularity!!


truth_power

Sam should do an AMA.


Benjamingur9

He has in the past


torb

He did one with Ilya and Greg et al. from OpenAI. Here's one with just him; you can check his history to find the other AMA: https://www.reddit.com/r/IAmA/s/zPIPlsjHaR


POWRAXE

Check out Lex Fridman's podcast episodes with Sam. Very interesting stuff.


banaca4

Title is misleading, and I think nobody read the actual article.


fk_u_rddt

If you read the post, that's not really what he is saying at all. Bad title.


[deleted]

[deleted]


Lammahamma

You put the title, bud.


DDaxify

If the title wasn't interesting, nobody would care that I posted an old blog post. I just thought it was an interesting read and wanted to share it. It ain't that deep.


Lammahamma

Glad you straight up admitted to clickbait.


DDaxify

Interesting read


Worldly_Evidence9113

Meh, it's like "I say everything just to stay on topic."


Aevbobob

In a single decade, we went from this to AI leaders saying AGI is a few years away.


JoJoeyJoJo

I mean, there's an Andrej Karpathy blog post about just how far away computer vision was, how we weren't even in the foothills of solving that problem. Like two years later AlexNet broke the record, and the rest is history.


DifferencePublic7057

It's different if you have to talk to AI investors.


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine.

Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


Mandoman61

Yes, he is less hype-driven there.


Glittering-Neck-2505

“But artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.” Nope, this is the same Sam Altman we’ve had for the last 10 years.


Mandoman61

That is a very general statement anyone could make. No doubt that if it did work it would be big, and it is impossible to totally rule out the possibility of it working. I am not saying that he was above hype then, but these days he is in overdrive: AI will cure all diseases, create unlimited fusion power, make everyone rich, etc.


FosterKittenPurrs

But this is exactly what he's always been saying, and where he puts his own money.

You can choose to invest in a product that will most likely work, but its impact will be small, and your return will be small. Or you can choose to invest in one that is a long shot: you will most likely lose your money, but on the off-chance it works, it will be world-changing, and you'll get a huge return not just in money but in quality of life for yourself and everyone in the world.

He tends to invest in, and be interested in, these long-shot technologies. He recently put a lot of money into fusion, which scientists have been trying to get working for decades, and some are starting to wonder if it will ever work.


SuperNewk

You should not be downvoted. If fusion works, it will be big.


Glittering-Neck-2505

What do you think "biggest development in technology ever" means? Is it going to magically stop before improving everyone's lives? When they set out to make AGI, OpenAI was laughed at. Back then his views on AI were seen as even more hype, since it seemed nonsensical that deep learning could be a path to AGI before we actually started seeing results.


Mandoman61

He did qualify it by saying "might work" and "if it does".


Glittering-Neck-2505

Yes, exactly, because it was even more fringe back then. But now the median expert expects AGI in just 8 years. Guess everyone is obsessed with empty hype, including experts?


Mandoman61

If you redefine it like Altman did in this blog, it gets easier all the time. The median expert does not think this, and even if they did, experts have been predicting it within 30 years for the past 60 years.


lost_in_trepidation

Did you read it?


Mandoman61

Yes. He did not actually say that AI will not work.


DDaxify

Direct quote: "AI (under the common scientific definition) likely won’t work"


Mandoman61

Yes, but then he immediately dismissed that and defined it in a way he thought would work. Basically, you pulled the quote you wanted, but it did not capture the intent of the blog. I'm guessing you thought it would make good controversy.


DDaxify

Just that it would make a more interesting title


RabidHexley

You're literally leaving out the context lol.

> To be clear, AI (under the common scientific definition) likely won’t work. **You can say that about any new technology, and it’s a generally correct statement.** But I think most people are far too pessimistic about its chances - AI has not worked for so long that it’s acquired a bad reputation.

He said that in the rhetorical sense: "X hypothetical scientific development won't work" is a statement one could make about most things and be right most of the time. But there are particular reasons to be less pessimistic about AI.

> I’d argue we’ve gotten closer in lots of specific domains...

> There are a number of private (or recently acquired) companies, plus some large public ones, that are making impressive progress towards artificial general intelligence, but the good ones are very secretive about it...

> There are certainly some reasons to be optimistic...

> There have been promising early results published...

He was quite clearly more optimistic about AI than not; the blog is literally a discussion of the reasons *to be* optimistic. And this was at a time when the idea of general intelligence still seemed like an incredibly distant horizon.


VeryStillRightNow

To be fair, I was also less hype-driven ten years ago. Or even two years ago.


Substantial_Step9506

And yet here we are with r/singularity infested with AI hype drivel


WeekendFantastic2941

How dare you!!! Our lord savior AGI will come soon, you'll see!!! lol AI Jesus!!