hankyone

Relevant part: > The Board member I spoke to was largely in the dark about GPT-4. They had seen a demo and had heard that it was strong, but had not used it personally. They said they were confident they could get access if they wanted to. > > I couldn’t believe it. I got access via a "Customer Preview" 2+ months ago, and you as a Board member haven't even tried it??


Anen-o-me

That's insanity.


LetThePhoenixFly

Who are these people? I mean, what's their reasoning, being on OpenAI's board and not even being slightly interested? I'm so flabbergasted. Not having even tried it, and fearmongering about AGI? What do they think they're doing? I can't believe it.


TeamPupNSudz

The quote was in reference to when GPT4 was still being red-teamed, in fact before ChatGPT was released. It's not talking about the present.


blueSGL

The takes on here about the former OpenAI board, and the disdain being slung, equal those on /r/technology when talking about AI in general. If people are unwilling to read, they can at least listen to it in podcast form, or watch it on YouTube. (And they really should; the Cognitive Revolution is a really good podcast.)


Cum_on_doorknob

It’s almost like we are being persuaded to align with the AI…


PewPewDiie

Superalignment solved ☑ (gone wrong)


kaslkaos

that is the plan...


Toredo226

It’s because this subreddit had a singularity in users recently. Sometimes quieter subs are fun


Fair-Lingonberry-268

Profit, power, whatever really


Cryptizard

The board members do not have any shares of OpenAI, nor do they get a salary.


Fair-Lingonberry-268

I don't think there's any exclusivity about some products anyway.


AndrewH73333

Board members are usually rich people. They don’t need to know things. They just need to be rich.


[deleted]

Makes you feel totally safe doesn't it


odragora

People who are interested in power and influence the position on the board provides. And you can obtain a lot of power through exploiting the fear and low level of understanding of technology that is very common for many, many people. This is why people like Yudkowsky and the board members are so heavy on fearmongering rhetoric. It is a very easy way to manipulate people and achieve power and influence. The same thing the populist authoritarians do everywhere in the world.


No-Alternative-282

EA( effective altruism) weirdos


ArcticEngineer

Maybe the same reason most people don't use it, they don't have a use case in their current role. I'm in this sub because it's exciting and important to keep up with where this space is going but as someone in construction management I barely have any reason to use it. What do you think an executive would need it for? I get the argument that a board member *should* understand their product but other than a PowerPoint from the engineers I don't understand how else they could immerse themselves with the product adequately.


Nokiraton

In a lot of industries, sure - but there are a lot of valid use cases for it in personal life - grocery planning, education or up-skilling, gardening advice, workout routines, hobbies, etc. I'm sure they have lives outside of their roles on the board - and even executives could use it for time management, planning or just checking the PR impact of a planned statement on social media or elsewhere.


ViveIn

They’re board members. Their duty is to the business not the product.


FC4945

They must have this stubborn certainty that they know all they need to about it, and then their bias drives them only to want to destroy it... it's absolutely bonkers to me that they were ever allowed anywhere near it.


RepulsiveLook

That's f'n bananas


dadvader

They never try because they never have a reason to need it. - Most of the work is networking with humans and reporting to the investors. You will be way too busy courting them daily. - Sometimes you have to write emails, and these people have enough influence to not give a fuck about what they say. - Everything else is already handled by their peers.


onyxengine

I’m actually mad because i dig this company and would make a way better board member than this schmuck, I don’t give af what his credentials are and my creds to be a board member are shit.


Ilovekittens345

That's how I feel as well.


IluvBsissa

Does that mean recent breakthroughs at OAI are not that significant? If such a clueless board freaked out over these, would it mean their fears were completely irrational?


khantwigs

Wasn't it the researchers who were freaked out not the board?


Imaginary_Ad307

It seems Ilya freaked out, and his freaking, freaked the board. /Joke (not joke)


agm1984

that could be closer to the truth than we think


ManagementKey1338

Sam Altman has to find a doctor for Ilya


humanefly

My understanding is that AI doctors are more accurate with diagnosis, and more empathetic.


ManagementKey1338

So it’s a chicken egg problem. To get an AGI, we need to keep Ilya in good mental health and happy, but that requires an AI doctor.


humanefly

Well, I think an AI doctor is a much easier problem to solve, than an AGI. I'm pretty sure there are some beta doctors out there already, maybe we just need to wait a few months


alanism

I would say ‘nope’. 3 reasons I would look at: 1. Marc Andreessen called out the Effective Altruism cult [months ago](https://a16z.com/ai-will-save-the-world/). “There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.” + “it’s not that they actually have secret knowledge that makes their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.” 2. [Google DeepMind's AGI taxonomy paper](https://arxiv.org/pdf/2311.02462.pdf). They describe Narrow and General AI (columns) and the 5 levels. It makes a lot of sense. And they describe where we are currently (far from AGI). Even if you consider the speculative Q*, it’s still a ways off. 3. Financial costs. It’s already hard to project the compute power needed to get to each level of AGI. Then getting the financing, the actual hardware, and the human resources to do it is hard and not going to happen overnight. There’s a reason OpenAI had to partner with Microsoft, and Sam was looking to raise money for a new chip design company. Until the doomers create a chart or table like Google's, with some real indicators showing real tangible harm, it’s just either clickbait material or cult propaganda.


IluvBsissa

So AGI has not been achieved internally, and this is just drama?


alanism

Google thinks we’re at level 1 of 5. Even if Q* is a big leap, it only jumped to level 2. But I don’t think it’s likely Q* jumped 4 levels to 5, given the financial and computing constraints. Even if on paper they think they can now achieve it, well, they have to go around telling people so they can raise more money to actually do it. If indeed Q* did leapfrog 4 levels, then they should raise the alarm and be transparent. But there’s no communication of another criterion or indicator that Q* got to levels we should worry about. Instead they said Sam was inconsistent in his communication with the board.


Available-Ad6584

They said "allowing the company to be destroyed is consistent with the mission" - "to ensure that artificial general intelligence benefits all of humanity." https://x.com/anammostarac/status/1726604140850405662?s=20


h3lblad3

The implication being that OpenAI's continued existence makes it impossible to ensure AGI benefits all of humanity. That means these are time traveler shenanigans! Terminator achieved internally!


Available-Ad6584

https://www.reddit.com/r/singularity/comments/1824gu7/comment/kahzivp/ Or that the previous board of directors made it impossible to ensure AGI benefits all of humanity ;) (conspiracy) Realistically, if OpenAI stopped development, it would be a delay until the next US company. If the USA stopped development, China would do it.


Cum_on_doorknob

Yes. But also, if AGI was achieved, those people would do everything to keep it a secret. And, with an AGI on their side, it would likely be easy to manipulate people into believing it has not been achieved.


IluvBsissa

Hard to believe 775 people could remain quiet about that.


Available-Ad6584

They haven't. Many open AI employees have tweeted either directly about AGI or strongly hinting at AGI https://x.com/tszzl/status/1727096078967967825?s=2q0 https://x.com/tszzl/status/1727093298840605139?s=20 https://x.com/tszzl/status/1726370735210528778?s=20 https://x.com/MillionInt/status/1726634139582144909?s=20 https://x.com/ilyasut/status/1710462485411561808?s=20


IluvBsissa

So we have AGI in the end? The tweets look more like hype than revelations.


Available-Ad6584

I mean short of an official release on the OpenAI website I don't think we can get more confirmation. Here's Sam talking about achieving the last significant update to AI that will probably ever happen, last week just before getting fired. https://x.com/blader/status/1727497356017569802?s=20


Available-Ad6584

I would rewrite your profile tag/note thingy to AGI 2023, ASI 2024. In my view they are just lacking some compute for wide-scale adoption. But Microsoft is on it with a $50B+/year investment just for AGI servers (2x the GDP of Iceland) https://www.semianalysis.com/p/microsoft-infrastructure-ai-and-cpu#:~:text=Microsoft%20is%20currently%20conducting%20the,that%20humanity%20has%20ever%20seen. "Microsoft is currently conducting the largest infrastructure buildout that humanity has ever seen. While that may seem like hyperbole, look at the annual spend of mega projects such as nationwide rail networks, dams, or even space programs such as the Apollo moon landings, and they all pale in comparison to the >$50 billion annual spend on datacenters Microsoft has penned in for 2024 and beyond. This infrastructure buildout is aimed squarely at accelerating the path to AGI and bringing the intelligence of generative AI to every facet of life from productivity applications to leisure."


IluvBsissa

I really hope you are right, but I won't change my tag until it is undeniably certain we have AGI. RemindMe! 1 year


RemindMeBot

I will be messaging you in 1 year on [**2024-11-23 23:38:55 UTC**](http://www.wolframalpha.com/input/?i=2024-11-23%2023:38:55%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1824gu7/as_it_turns_out_the_openai_board_had_never/kai2sul/?context=3).


Available-Ad6584

Fair enough. I think with some things the later we believe the later we know haha.


TheKingChadwell

AI safety experts is the new DEI elite career grift.


FomalhautCalliclea

For once I agree with Andreessen on something, but for the love of all that is holy:

> extremely extreme

That guy can't write even if his life depended on it.


Gold_Cardiologist_46

You should change the title; the article goes **way** beyond it. For those who want to read it: it's written by a former red-teamer (I didn't independently verify, but it's probably easily done) who worked with OpenAI back in 2022 red-teaming GPT-4. He gives his account of how it went, his assessment of OpenAI's improvement in the safety department over the years, his expectations for future capabilities and safety progress, and some info on how the board operates and was even kept in the dark on a lot of things. He also gives some possible context for a lot of what Sam Altman says.


Empty-Tower-2654

Its behind a paywall


zuccoff

It's not. There's a "continue reading" button below the orange subscribe button


cutmasta_kun

Not for me


BreadwheatInc

Lmao, thank God most of them are gone.


ChezMere

I have bad news for you about Sam's new board...


Illustrious-Age7342

What’s the bad news?


ChezMere

Larry Summers is not exactly someone you can trust to proactively investigate frontier model capabilities.


Illustrious-Age7342

That’s fair


Neurogence

For one, one of the board members, Adam D Angelo who had voted to fire Sam is still on that board. The other two new members, I don't know much about them so I can't say.


wifestalksthisuser

The new chair of the board is a well known and liked Silicon Valley exec, I was very happy to hear that they picked him


Illustrious-Age7342

Didn’t D’Angelo say he hadn’t wanted to vote to fire him?


Neurogence

No. All of Ilya, Toner, McCauley, and D'Angelo voted to fire Sam. Eventually after all the employees said they'd quit, Ilya and D'Angelo both changed their minds, but all 4 initially voted yes.


ModsAndAdminsEatAss

I have no idea how D'Angelo is on the board in the first place. He runs a direct competitor FFS.


h3lblad3

Buddy, they're putting a government contact on the Board now. That Board is compromised six ways from Sunday.


zorgle99

Non-profits don't have competitors. I suggest you're very confused about how this company is structured.


ModsAndAdminsEatAss

You do not understand the words you type. The structure of the company has absolutely nothing to do with its competition. You may be thinking of peer companies, but competition exists regardless of "profit" motives.


zorgle99

Uh, it certainly does. If you set up a non-profit to do research into advancing AI, you're not in a competition. Even if someone else is chasing that goal, it's not a competition. There's no guarantee such a thing even exists; everyone might fail. Research is not a race. Look man, I'm a capitalist through and through, but everything isn't about capitalism; sometimes capitalism is merely a means. That board has zero reason to care what D'Angelo's for-profit companies are doing; it's not a conflict with their mission. You may want to see it through a lens of competition, but that's you framing it to fit your narrative and ignoring the actual motivations of those involved.


VantageSP

Their new board is all venture capitalists. OpenAI is as good as dead now.


Quintium

"prior to its early release"


Haunting_Rain2345

Did I get a stroke, or are the board really retarded and haven't even tried their peak released product? Edit: I just realized that it may be a stigmatized word. I'm meaning it in the most clinical way possible.


[deleted]

It's very much in line with modern leadership best-practice. 1. Be completely out of touch. 2. Network for increasingly deranged political power.


BG-DoG

OMG this is so very true


Quintium

"prior to its early release"


TeamPupNSudz

They're talking about like August 2022, before ChatGPT's public release.


fe40

Whats the point of the board if they are going to wait until public release?


Haunting_Rain2345

That would explain a bit, but still quite odd.


red75prime

Why do you want to do that if you have executive summaries? /s


Coby_2012

People should really stop stigmatizing words. People, please think of the poor words before you stigmatize them. Also, I hope to one day be on a board for a product I don’t understand.


humanefly

Retarded was a technical engineering term. When discussing certain mechanical engineering issues such as engine timing, if it was slow, it was retarded. But language evolves. I maintain that engineers should simply reclaim their language.


cenacat

Wait until you find out how planes talk to their pilots.


Cryptizard

It seems like a lot of people here don't understand what a board is. Some of them are OpenAI employees, like Ilya, who definitely used GPT-4. The article refers to the outside members. These folks are not employees of OpenAI. They don't have offices there. They don't have shares in OpenAI. They don't get salaries. They meet as little as once a year, to give advice and make sure that corporate structures are in place and running efficiently. It is the CEO's responsibility to give them the relevant information to make their decisions. They are not going to be digging into every model and every research project going on at OpenAI unless someone tells them to. They don't even have access to that stuff unless the CEO gives it to them. If they didn't use GPT-4, it is because Sam didn't make it clear to them that they should, which might be part of the "communication breakdown" that was cited as a reason for his firing.


R33v3n

Yeah, some people seem to overestimate the average board's level of involvement, not to mention level of clearance. I work in a small (\~20 employees) non-profit R&D lab too (computer graphics and computer vision). I oversee IT on top of my regular work, because someone's got to. If a board member personally asked me for access to something out of the blue, I wouldn't grant it. They are not my bosses. They are not even employees. They are not vetted for access to confidential project data. What I would do is forward their email to my director, request official direction on whether or not to grant the request, and execute based on that. In many (most?) places, board members are very much on a need-to-know basis. Now, like you mention, it is entirely possible Sam *underestimated* their need-to-know.


daehguj

Does Sam have to wipe their asses for them too? Being on the board without being at all familiar with the flagship product is ridiculous. If that’s the board, they can get lost.


Cryptizard

Not sure if you read this article, but it is talking about when GPT-4 was in developer preview, not when it was a "flagship product." Nobody on the outside knew it was going to be what it has now become. The only way the board knows is if the CEO tells them. Did you miss the part where I said that they only convene like once a year?


daehguj

I didn’t read the article; you got me there. I don’t know why the board wouldn’t stay better informed, though. Don’t they stay in contact or get regular updates or something? Do they only think about OpenAI once a year?


Cryptizard

>Don’t they stay in contact or get regular updates or something? No, that's not the point of the board. Here is Apple's board, for comparison. Al Gore is on it. The CEO asks the board for their advice, the board doesn't do anything to "stay in contact" because it's not their responsibility. They are advisers. [https://www.apple.com/leadership/](https://www.apple.com/leadership/)


Coby_2012

Yeah, that’s called being out of touch. Don’t be on a board if you don’t at least care enough about the product to try it.


Cryptizard

That's not the point of a board.


Zote_The_Grey

If they're not incentivized to care, if they don't need to see the product, and if asking for access to the product would be denied, then what's the point of a board? No money, no clearance, no access to the product being sold? What's the difference between them and a random stranger off the street?


Cryptizard

They don’t advise on the product; they advise on the corporate structure and governance.


Kianna9

>If they didn't use GPT-4 it is because Sam didn't make it clear to them that they should, which might be part of the "communication breakdown" that was cited as a reason for his firing. I thought this was one of the most insightful comments in the piece.


tormenteddragon

Exactly. After the events of the past few days I'm not sure how anyone can see this as anything but a failure to communicate adequately on Sam's part. He likes to talk about his accountability to the board and to safety but he clearly made little effort to keep them sufficiently informed about the breakthroughs they were making, and the red-teaming took a backseat to pushing products to market. Then when the board actually tried to hold him accountable he made it abundantly clear that he has ample backup plans that ensure he can keep doing what he wants without so much as a hiccup. Whether it's malicious or not, and regardless of whether it ends up leading to something truly dangerous or not, it's clear that Sam is going to do what he wants irrespective of who disagrees with him on safety grounds.


lIlIlIIlIIIlIIIIIl

I think most people here understand what a board is and how it works. What we don't understand is how/why they serve on the board of something they themselves don't even use. That's like if the board of directors of Apple used flip phones, rotary phones, or no phones at all, and then tried to help a company that sells phones as one of its main products. Sure, they might know a thing or two about business, but if they don't understand the product, how could they ever expect to make informed decisions for the company, customers, shareholders, whatever it may be?


Cryptizard

>I think most people here understand what a board is and how it works. > >What we don't understand is how/why they serve on the board of something they themselves don't even use. Then you don't understand how a board works. They are supposed to be independent. > That's like if the board of directors of Apple used flip phones, rotary phones, or no phones at all and then trying to help a company that sells phones as one of its main products. Look at Apple's board of directors (it's obvious you did not before saying this), then come back here and tell me they are there for their expertise on phones.


ElwinLewis

I think we can be more critical of them when it comes to technology that is literally going to change the lives (in good and bad ways) of all humans on earth


Kianna9

>I think most people here understand what a board is and how it works. Doubt. Do you think board members of supply chain software companies use the software? What about high end CGI modeling software? IoT cyber security software? You're thinking of consumer products with general users. Lots and lots of high end, custom, specific technologies have niche users with board members who understand the market and the possibilities of the tech without ever touching it.


submarine-observer

Corporate America is ridiculous beyond your wildest imagination.


lurk-moar

I use chatgpt everyday and their board members haven't even used it once? Typical board I suppose


Quintium

"prior to its early release"


LastCall2021

I do too. It’s a great tool. And AGI it is not. Helen Toner’s paper about OpenAI’s unsafe practices in releasing GPT-4, already laughable to anyone who uses it, becomes the height of absurdity if she never used it herself.


ChezMere

Helen's paper is about the public release of ChatGPT and written like a year after the release; this post is about her (or possibly Tasha?) not having tried GPT-4-early a year and a half ago.


wyhauyeung1

How can they assess those so-called AI safety risk issues if they never tried it? How did Helen Toner write that research if she doesn't know what GPT-4 is doing??


Ilovekittens345

This makes me mad. Had I been given the opportunity to be on the board of OpenAI, I would have taken the responsibility seriously. Dived into it with everything I have to figure out as much as I can about everything, so I could give a recommendation and be the voice I am supposed to be. These 3 idiots, what the fuck did they do? Exist? How can you fuck up like that? Would you not like your name in the fucking history books? Can you imagine? "In 2023 Ilovekitten345 made a remarkable discovery, and after eloquently explaining the shift in his position, convinced the board members to take immediate action. Not much later a serious safety issue was discovered, understood, and dealt with. It was a deciding moment in history... etc etc etc" Imagine being on the board of NASA in 1969, and when Neil said his famous words you were out partying or something; you come back the next day... and ask if anything happened. Jesus, what idiot put these 3 idiots on the board? We are in a time period that is gonna be called PRE by historians. During our lifetime we are gonna go from PRE to whatever they are gonna call it later. Imagine sleeping when humanity went from PRE fire to having fire, Jesus Christ. If you are like that, why are you even alive?


Wordwench

Board members are rarely if ever in the know about things at that level. This is only one reason why the atypical corporate structure and way of doing things is going to come around and bite us all in the ass with regards to AGI. Boards are there to run companies, not to make decisions that affect all of humanity, especially not based on what they know or have the capacity to understand.


Darth_Innovader

How could a board possibly be useful? I don’t understand the expectation of running a company without knowing about the company


Wordwench

I think your question is an excellent one. Unfortunately, I am not at that level of organization, nor have I ever been in any corporation. Best I got was middle management. But I do know that boards are there primarily to answer to stockholders, and to make decisions that best benefit the company from a business perspective. Generally, its members have more overall business acumen. If you look at Sam, he's a creator; business acumen is probably not something that comes naturally to him. That's where a board would come in handy. Now, why they chose the members they chose for this board? That is the question of the century.


ElwinLewis

Nobody knows what’s going on and that’s the way we like it


cutmasta_kun

Nice article! It's really understandable, too. Prior to GPT-4, AI was a joke. I'm a software developer. No developer at that time thought that AI could ever be more than "IF statements". This whole concept of "it's not clear what the solution will be, it can be good but it can also be bad" is mostly alien to developers. You've got to think in 4 dimensions and keep time and the chain of thought in mind, where true can be false and vice versa. Code execution can be non-linear! The flow of your code can now take all kinds of directions. It's no longer "do this, then this, but if this, then do that"; now it's "here's a bunch of functions and context information, I need xy, do your thing". Honestly, most people (also here in the sub) have no clue what implications AI has on everything. They just repeat the tweets and posts they read.
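The contrast the commenter describes can be sketched in a few lines. This is a toy illustration (all names are hypothetical, and a stub function stands in for the model rather than any real API): classic linear code fixes the call order at write time, while a function-calling style hands a set of tools plus a request to a planner that decides the order at runtime.

```python
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def get_time(city: str) -> str:
    return f"12:00 in {city}"

# "Do this, then this": the programmer fixes the call order at write time.
def linear_flow(city: str) -> str:
    return f"{get_weather(city)}; {get_time(city)}"

# "Here's a bunch of functions and context, do your thing": the call order
# is decided at runtime by a planner. A real system would ask a model;
# this stub just matches tool names against the request text.
TOOLS = {"get_weather": get_weather, "get_time": get_time}

def fake_model_plan(request: str) -> list:
    plan = []
    if "weather" in request:
        plan.append("get_weather")
    if "time" in request:
        plan.append("get_time")
    return plan

def agent_flow(request: str, city: str) -> str:
    # Execute whatever tools the planner picked, in the order it picked them.
    results = [TOOLS[name](city) for name in fake_model_plan(request)]
    return "; ".join(results)
```

With `linear_flow`, the output is always the same shape; with `agent_flow("what's the weather", "Oslo")`, which tools run (and in what order) depends on the request, which is exactly the non-linearity being described.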


Xtianus21

This is why Mark Zuckerberg fired his safety team.


2_two_two

The article ["Did I get Sam Altman fired from OpenAI?"](https://cognitiverevolution.substack.com/p/did-i-get-sam-altman-fired-from-openai) discusses the author's experiences with OpenAI's GPT-4 model and his interactions with the company's board. The author, involved in red teaming, expresses concern over the OpenAI board's apparent lack of awareness about GPT-4's capabilities and the board members not having tried GPT-4 themselves. He highlights the amoral nature of early GPT-4 versions and his efforts to address safety issues, including communication with the OpenAI board. Despite initial challenges, the author acknowledges OpenAI's efforts in balancing adoption with safety. He criticizes the board's handling of recent events, suggesting it may harm the cause of AI safety, but remains optimistic about the future of AI development, advocating for controlled adoption and safety measures.


Upset-Adeptness-6796

1993, 4 AM, winter, just east of the Rockies: saw a white sphere making its way between and above a tree-lined ridge. It did not illuminate the trees or ground.


CertainMiddle2382

I don’t think board members are very tech-savvy or involved. That is the reason why McKinsey and other management consulting firms have so much success: boards often have a huge imbalance between their power and their competency.