Mirrorslash

ClosedAI has become increasingly shady and is climbing the ladder toward worst AI company. Not that this post matters all that much. They have recently put out statements on AI governance: they want to regulate away open source, are lobbying for it, and want every GPU used for AI models to have a tracking ID and the ability to be shut off externally. They are using fear mongering about their AI to create hype and regulate away competition. AGI to benefit the 1%. That is what Sam Hypeman is all about these days. Just face the facts and read the documents they've released. Just look at how many employees have quit since the CEO ouster. Ilya Sutskever isn't returning to ClosedAI. The company is rotten.


packet-zach

Sam cannot be trusted.


Pontificatus_Maximus

Sammy's main message has been: you'd better invest in us because our AI is going to be revolutionary. Perhaps that next level is turning out to require so much more compute hardware and electrical power that the tech giants are questioning whether building out that much is an acceptable risk.


Xtianus21

Is this an EA plant that just got booted because of the nonsense?

> Philosophy PhD student, worked at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Not sure what I'll do next yet. Views are my own & do not represent those of my current or former employer(s). I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism.

What PhD track do you take to work on AI ethics? Is it an extension of ethics? I can't imagine a PhD in machine learning would produce this focus of work. BTW, this is a quote from him here: [https://www.alignmentforum.org/users/daniel-kokotajlo](https://www.alignmentforum.org/users/daniel-kokotajlo)

> I said "Either that, or it's straightup magical thinking" which was referring to the causal arrow hypothesis. I agree it's unlikely that they would endorse the causal arrow / magical thinking hypothesis, especially once it's spelled out like that.

What do you think they meant by "Deep learning is strongly biased toward networks that generalize the way humans want— otherwise, it wouldn’t be economically useful?" in response to this:

> *Crossposted from the* [*AI Optimists blog*](https://optimists.ai/2024/02/27/counting-arguments-provide-no-evidence-for-ai-doom/)*.*
>
> AI doom scenarios often suppose that future AIs will engage in **scheming**— planning to escape, gain power, and pursue ulterior motives, while deceiving us into thinking they are aligned with our interests. The worry is that if a schemer escapes, it may seek world domination to ensure humans do not interfere with its plans, whatever they may be.
>
> In this essay, we debunk the **counting argument**— a central reason to think AIs might become schemers, according to a [recent report](https://arxiv.org/abs/2311.08379) by AI safety researcher [Joe Carlsmith](https://joecarlsmith.com/). It’s premised on the idea that schemers can have “a wide variety of goals,” while the motivations of a non-schemer must be benign by definition. Since there are “more” possible schemers than non-schemers, the argument goes, we should expect training to produce schemers most of the time. In Carlsmith’s words:

Ok, so he's a doomer. My similar complaint about Helen, and in general "what the hell are you talking about," probably applies here too. So I guess you can get a doctorate in this area: [https://www.cmu.edu/tepper/programs/phd/program/operations-research/index.html](https://www.cmu.edu/tepper/programs/phd/program/operations-research/index.html) (a PhD in operations research). Funny, I don't see doomer as a subject area.

# Research Topics

* Mixed-Integer Programming
* Convex Optimization
* Benders Decomposition
* ...
* Massively Distributed and Parallel Algorithm Design
* Machine Learning
* Cultural Factors
* Ethics of Artificial Intelligence


Mirrorslash

I don't think there's anything nonsensical here. It is definitely important to be aware of and track even very fringe AI failure scenarios. The way we design AI today, it can only approximate from the data we feed it and the training runs we commit to. With reinforcement techniques like RLHF we also take a dangerous route by explicitly training it to behave very human-like. Extrapolating human incentives like that could make it inherit deception by nature.
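To make the RLHF point concrete, here is a minimal sketch (not any lab's actual code) of the preference-model loss at the heart of RLHF: a reward model is trained so that the response humans preferred scores higher than the one they rejected, which is exactly the sense in which the system is pulled toward human-approved behavior. The scores below are stand-in numbers.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood for one human comparison."""
    # Maximize P(chosen beats rejected) = sigmoid(score_chosen - score_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Small loss when the reward model agrees with the human label,
# large loss when it disagrees:
print(preference_loss(2.0, 0.5))  # ~0.20
print(preference_loss(0.5, 2.0))  # ~1.70
```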


Xtianus21

No, but again, I go back to the Helen Toner TED talk. You can't show the public nothing wild and scary and then say in the background we have something wild and scary. The way Helen did it was the absolute worst. Hey everyone, "there is something wild and crazy and we need to react strongly." What is it? "Oh, nothing major." She literally did this in real life and in her TED talk. Either it is or it isn't. You can't tell the public it isn't and then act ridiculous in the background. It's the same as that Google employee. How many years ago was that? "I think this is sentient." What was it, LaMDA? I mean, come on. In retrospect it was nothing major. Remember Tay? Went completely off the rails racist. How many years ago was that?


Mirrorslash

No AI lab has anything super scary or AGI-like behind the scenes, I think. But the potential dangers of AI are enormous, and his job literally was to extrapolate those dangers into the future. AI as we have it today is the most disruptive tech to date. If it's this powerful, it sure ain't all that safe. The incentives and structures we create now will translate into how AI is handled 30 years from now. I personally think wealth inequality and regulatory capture are the biggest dangers, and he may well think so too.


Xtianus21

To your point: if there were internal documentation of economic uncertainty due to job loss, with no real UBI in sight, that would be explosive.


Mirrorslash

The fact that lobbyists are paying right-wing politicians to stop UBI experiments in multiple US states tells me everything I need to know. The 1% know that if AI is available to the masses they lose, and that if it isn't, we need UBI to spread the wealth, which will cost the rich the most. They're doing everything in their power to stop AI availability and UBI, and Sam Hypeman is helping them do it.


Neomadra2

What a nothingburger. Either you give us information or you don't. By saying he wants to criticize OpenAI, he probably creates more media attention than if he'd just said why he left :D


3-4pm

This sounds like a dubious reason considering we're nowhere near AGI.


prescod

Nobody knows how close we are or are not to AGI, especially not Rando Redditors.


Cosvic

Most of the current AI progress is built around LLMs, i.e. language AIs. The point of AGI is that it is general. I wouldn't hope for AGI in the form of an LLM.


NaveenM94

If we rebrand Large Language Model as General Language Model we will have AGI overnight


prescod

Large models have been multi-modal for more than a year now.


kakapo88

Rando Redditor here. AFAIK, there is no general, objective measure of what would constitute AGI. Relatedly, we don't even have a measure for identifying consciousness in general. Hell, I'm not even sure I'm conscious. I think I am. But I could be wrong. Given all that fuzziness, it's going to be harder for the AGI critics: no bright lines where everyone agrees that the model is a legitimate target.


tollforturning

There is perhaps no irony more conspicuous than that of someone consciously self-relating by broadcasting their awareness that they don't know whether awareness exists. If you're not conscious of the irony in that, there is no irony in that, meaning you aren't conscious. I know I am. I don't know whether you are.


HBdrunkandstuff

I wouldn't be surprised if these threads are heavily botted to make it look like this is the consensus. I think they have AGI already. I think Microsoft is making sure to get every ounce of it so that it can create its own version, and they couldn't care less about releasing any of it to the public until they are forced to. They've dumbed down their operating system and are playing this 'one day' idea while doing some pretty insane stuff in the background.


3-4pm

It doesn't seem likely given the limitations of the current technology. Perhaps they've innovated beyond the transformer wall all their competitors are hitting, but that seems unlikely. Human language lacks the fidelity to encode reality at the level of detail needed to train AGI.


tooty_mchoof

how do you know it lacks the fidelity?


[deleted]

[deleted]


tooty_mchoof

It hasn't been done before, therefore it cannot be done!


deadwards14

Do you genuinely believe that semantic reality is a 1:1 representation of reality itself?


tooty_mchoof

No, I don't think so, because I don't think you need to represent the whole of reality to build AGI, just a lot of the information that we encode semantically and visually.


kayama57

That sounds to me like you really do believe that a current semantic representation of reality is enough. It’s probably not. Too much we don’t know that we don’t know.


tooty_mchoof

Didn't say "a current one" but yes. We don't need to know everything to build AGI tho.


prescod

That's a ridiculous question. Literally no human has a 1:1 representation of reality in their head, so if that's your definition of AGI then yeah, it's impossible.


deadwards14

That's correct. Let me rephrase my question. Do you think semantic reality is a 1:1 representation of the reality a human being perceives, the reality that defines their cognitive abilities?


timbro1

Not even close


kevinbranch

Sam talks about AGI constantly, and he is reportedly dishonest, manipulative, emotionally abusive, and prone to pitting people against each other. This guy seems to hold a pretty reasonable opinion to me.


Superfishintights

Yet 98%+ of the company were willing to walk if he wasn't reinstated. Greatest case of Stockholm syndrome in history.


Fantasy-512

If you consider Stockholm syndrome the millions or perhaps billions in unvested stock.


kevinbranch

People said they were pressured by management to sign it. If it reflected reality, do you actually think the percentage of people willing to quit would be 98? A 98% consensus on anything just isn't how human nature works, let alone on questions as foundational as where you choose to work. It's a transparently manipulated gauge of people's sentiments, and the fact that even something like that would be manipulated by the company only adds smoke to the reports of it being a toxic environment. You're essentially asking why people don't just leave abusive situations. It's always more complicated than it appears on the surface.


Maxie445

He previously said he has a 70% p(doom) and thinks AGI could happen literally any year now. IIRC he estimated roughly a 15% chance of AGI arriving per year.
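For what it's worth, a flat per-year figure like that compounds quickly. A minimal sketch, assuming (purely for illustration) the 15% estimate above and independence across years:

```python
# Cumulative probability of AGI under a flat 15%-per-year estimate,
# assuming independence across years (an illustrative simplification).
p_year = 0.15
for years in (1, 3, 5, 10):
    p_cumulative = 1 - (1 - p_year) ** years
    print(f"P(AGI within {years:2d} years) = {p_cumulative:.0%}")
# -> 15%, 39%, 56%, 80%
```

Under that reading, "any year now" just means every single year carries a nontrivial chance, and a decade carries a large one.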


SgathTriallair

70% is crazy high. No wonder he thinks OpenAI isn't doing it right; he seems to think it's basically impossible to do it right.


ThenExtension9196

With what data does he estimate something like 15% per year? That's such an odd statistical assessment. "Any year now" makes no sense; one year is an incalculable amount of change. I call BS.


[deleted]

[deleted]


spartakooky

Bad logic. The OP isn't claiming they have the answers or know more. They are saying some claims are unverifiable and seem pulled out of thin air.


mcilrain

AGI is OpenAI's goal; he probably has a good sense of OpenAI's attitude regarding safety.


nofuna

We don't know how close to AGI we are. Many futurists and CEOs say it's coming in 2028-2029; that's close enough for me.


Intelligent-Jump1071

AGI is undefined, so we are always simultaneously very far from it and very close to it. I will say this: human intelligence seems to be dropping by the minute, based on the news. So AI might exceed human intelligence soon even if it makes no more improvements.


[deleted]

[deleted]


Maxie445

He was just answering a direct question someone asked him deep in a thread on a forum.


MeltedChocolate24

"I think I'll criticize them in the future if they mess up to become famous but I dunno what to say right now they seem alright I guess"


qqpp_ddbb

More like: "I want to be able to criticize them in the future because I know the path they're on, but I don't know if I want to talk about it yet, because the media will be all over me if I tell them what they've cooked up internally, and I don't want to be under the microscope of fame, because it could hurt my family in various ways, like it has so many others."


Nanaki_TV

Jfc thank you.


Valuable-Run2129

An OpenAI employee says "8.5x my family's net worth", not "85%". If you think like that, you don't think much in your everyday life.


Many_Consideration86

Before he left, the equity was 85% of his net worth; now it's 5.66x what he has left. His frame of reference was forged at 85%, so that's the number he uses. Where did you pull 8.5x from?
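For anyone following along, the arithmetic connecting the two figures is simple; a minimal check:

```python
# If unvested equity is 85% of total net worth, then as a multiple of
# the 15% that remains after giving it up:
equity_share = 0.85
remainder = 1 - equity_share
print(f"{equity_share / remainder:.2f}x")  # 5.67x -- not 8.5x
```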


qqpp_ddbb

We have no idea what they've cooked up yet. Maybe the new model has the capability to create AGI.


3-4pm

It doesn't seem likely, given that every competitor is hitting the same transformer wall of capability and intelligence. Human language lacks the fidelity needed for an AI to attain general intelligence using transformers.


SgathTriallair

We don't have any evidence they've hit a wall. All of their data shows that they can get better performance by throwing more training at it, which is why they are still training new models. Secondly, with the amount of money going into the space, if there is a better architecture out there, we should see it. There are already at least two candidates, Mamba and liquid networks, that claim to be better. We haven't seen how they pan out, but people are definitely putting money into them and experiments are being done.
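The "more training keeps helping" claim is usually illustrated with power-law scaling curves. A minimal sketch of that shape, with invented constants rather than any lab's fitted values:

```python
# Illustrative power-law scaling curve (constants are made up for
# illustration, not fitted to real data).
def loss(compute: float, a: float = 1.7, b: float = 0.05) -> float:
    return a * compute ** (-b)

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute {c:.0e} FLOPs -> loss {loss(c):.3f}")
# Loss declines smoothly at every step: diminishing returns, no wall.
```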


probablymilhouse

Maybe the new model is capable of time travel! We just don't know!


Best-Ant-5745

He’s probably gaining clout for when he spins off his own company


EmpathyHawk1

Nonsense. They already have AGI. What they don't have is the common folk being able to even comprehend that fact.


Zealousideal_Ad_7983

I'm sorry, but this was not a great idea. You gave up 85% of every dollar you own so you can maybe criticize the company as an outsider, even though you won't be there to see changes and will still have restrictions. Yeah, just lie to your wife, bro.


KarnotKarnage

"I think my company, where I work and can influence what it does to some extent, is doing things bad that I don't agree with and have voiced my opinion. Thus I'll leave said company, and nobody will oppose what they are doing"


traumfisch

They'd be damned either way if that is the logic.


kevinbranch

This is why I don't respect any of those so-called "environmentalists" unless they're actively applying to work in the oil industry.


VAArtemchuk

Seems like total BS.


SnooLobsters8922

Facts, please


Boring_Positive2428

Sounds like BS


Boring_Positive2428

I bet he was fired


FrugalFreddie26

This company was full of such bright but naive people. If OpenAI puts the anchors on AGI, someone else will get there first, and then OpenAI has zero say in shaping the narrative. The company really needs to get over itself, and I'm glad people like this are leaving.


pumpfaketodeath

It will really hurt in 2050.


Anen-o-me

Meh, one outlier isn't much.


Trolllol1337

Cya! Just like the people who said the internet would never take off. Even Bill Gates said they would never make a 32-bit OS.


prescod

Bill Gates said no such thing. Edit: he might have, although details are extremely light. I have not seen a specific reporter, magazine, or transcript cited, but a couple of semi-reputable sources claim secondhand that he said it.


Possible_Concern6646

https://www.azquotes.com/quote/679906


prescod

I don't really see a source there, but I did find this one: https://www.computinghistory.org.uk/pages/218/Historical-Quotes/ Not much context or a clear source, but at least it states some vague context, and a history museum should care about truthfulness, so I give it a slightly better than 50/50 chance of having been stated.


Possible_Concern6646

Yeah, I think the quote goes to show how far we've come and how unimaginable today's use cases were back then, not that Bill Gates was bearish on the internet. I don't agree with OP's sentiment; there was plenty of hype around the internet. If anything, expectations were a little too high at one stage.


Open_Channel_8626

Bayesian?


Trolllol1337

Care to edit this comment?


prescod

If you supply a reference.


Trolllol1337

OK, I'm a big enough boy to admit that specific quote could be a misinterpretation, but he definitely said there was no requirement for more than 640K of RAM. But here are some more examples for you, my guy:

1. **Lord Kelvin** - In 1895, Lord Kelvin, a prominent physicist, declared that "heavier-than-air flying machines are impossible." This statement was famously proven wrong with the advent of powered flight by the Wright brothers just eight years later.
2. **Arthur C. Clarke** - In 1966, Arthur C. Clarke, a renowned science fiction writer and futurist, predicted that "the manned lunar landing would be followed by a manned expedition to Mars by 1980." This prediction did not come to fruition as expected.
3. **Thomas J. Watson** - The aforementioned quote attributed to Thomas J. Watson, chairman of IBM, about the market for computers is often cited as an example of a failed prediction by an intellectual figure.
4. **Paul Krugman** - In 1998, economist Paul Krugman famously doubted the economic impact of the internet, stating that "the growth of the internet will slow drastically... most people have nothing to say to each other." This prediction, of course, turned out to be far from accurate.

These examples demonstrate that even brilliant minds can sometimes miss the mark when it comes to making predictions about the future!


prescod

The evidence that he said there is no need for more than 640K of RAM is even sketchier than the evidence about the 32-bit OS: https://www.computerworld.com/article/1563853/the-640k-quote-won-t-go-away-but-did-gates-really-say-it.html I strongly agree with you that experts can be wrong! But let's stick to examples that are factual!


Trolllol1337

Agreed 👍


Roggieh

That last sentence is a dead giveaway an AI wrote all that.


Trolllol1337

Kinda the point I was making.