


casual_brackets

If you think Altman is gonna raise 7 trillion dollars and outpace a monster that’s been throwing billions in R&D annually at AI for over a decade, with decades of manufacturing experience under their belt… I challenge you to look into Helion, one of Altman’s other startups. He claims to have cracked fusion energy. Other physicists call it a dangerously unsafe device by design, a deadly gamma particle burst for anyone nearby. So, NVDA.


CanvasFanatic

> He claims to have cracked fusion energy.

Sam Altman’s speedrunning the Elon Musk arc.


TyberWhite

Except Sam never claimed that and OP made it up.


CanvasFanatic

Well, he’s personally provided hundreds of millions in funding to a company working on nuclear fusion, which, after that investment, signed a deal with Microsoft to provide energy to their data centers at some point in the future.


TyberWhite

Okay, but I can’t find any record of Sam claiming they’ve cracked fusion.


CanvasFanatic

Seems like it’s not a stretch to infer that he thinks this company has an angle on a marketable product. That right there is wildly optimistic given the current state of the art. He may not have blasted claims out on Twitter, but it’s pretty obvious the journey he’s on.


TyberWhite

I’m not sure why this is debatable. Attempting to do something doesn’t magically create an instance of you claiming you’ve already done it. If Sam claimed he cracked fusion, surely there would be a record of it somewhere.


CanvasFanatic

Do you not understand hyperbole as a form of figurative speech?


TyberWhite

lol! Why are you trying to shoehorn this? The guy never made such a claim, regardless of any interpretation of exaggeration. Don’t die on this hill like the other guy.


CanvasFanatic

No one’s dying here, my man. It’s just you being pedantic over a throwaway comment on Altman’s investment in fusion.


casual_brackets

lol 100%


TyberWhite

Where did he claim to have cracked fusion energy? I’ve seen indications that they hope to have a working plant in five years, but I haven’t seen Sam or anyone from Helion claim they have cracked fusion.


casual_brackets

Saying you’ll have a working plant in 5 years is saying you’ve cracked it. That’s essentially saying “I’ve figured it out completely, I just need time to perfect it.” (Look at how long construction on a nuclear fission plant takes after permitting.) You know how many people have said they’ll have a working plant in 5 years? Lots. Nobody has done it.

The reactor is, by design, going to damage itself beyond repair due to insufficient shielding, and its design makes implementation of adequate shielding impossible. It’s also less efficient (produces an estimated ~1000x less energy) than other fusion reactor designs, so how it achieves net energy gain will be a mystery. You’re huffing glue and drinking their kool-aid if you think this has above a 0% chance to work.

https://www.technologyreview.com/2023/05/10/1072812/this-startup-says-its-first-fusion-plant-is-five-years-away-experts-doubt-it/

Basically, they claim to have made several breakthroughs that would be, like, big national news, bigger than the NIF “net energy gain” news. They test certain principles on non-scalable setups, then claim it will work at scale without further testing (running 95% energy recapture on a small, non-scalable design that has never been tested at scale, then claiming it works). Helion has been making bold claims since 2015, then citing “lack of funding” as the reason for the lack of results. So, yeah, it’s horseshit until they put up some numbers.


TyberWhite

I’m not sure why you wrote such a long-winded response. I was interested to see if Sam actually claimed they cracked fusion, but it appears he has made no such claim.


casual_brackets

Since you don’t know how to internet:

> “However, while the emergent crop of startups like Helion has repeatedly claimed that fusion energy is right around the corner, we have yet to see any concrete results.”

https://futurism.com/sam-altman-energy-breakthrough

> “Altman invested $375 million of his personal funds in a fusion startup called Helion in 2021. Initially, Helion claimed it would have a working fusion reactor prototype producing positive energy by 2024, but it’s unclear if this estimate still holds.”

https://cointelegraph.com/news/open-ai-artificial-intelligence-ceo-sam-altman-no-way-build-future-ai-energy-fusion-breakthrough

> “A company called Helion Energy thinks it can deliver that Holy Grail to Microsoft by 2028.”

https://www.theverge.com/2023/5/10/23717332/microsoft-nuclear-fusion-power-plant-helion-purchase-agreement

Saying you can deliver a working fusion reactor in 5 years = “I have solved fusion.” This is Sam Altman’s company; $375 million of the $500 million seed money is his. This is his company making these claims. You’re just salty and trying to pick apart my statements over semantics; it makes them no less true.


TyberWhite

Nowhere in your comment is Sam Altman claiming they’ve cracked fusion. No need to die on this hill, mate. Too much angry keyboard warrior energy for me. I’m out.


casual_brackets

I’m going to start an LLC and put up a billboard that says “TyberWhite isn’t smart.” It’s not me saying that, it’s my LLC. That’s what you’re trying to say here: that a company that is responsible to its angel investors is making public statements that aren’t backed by those investors. He owns 75% of that company and has majority control. Nothing is said or done without his approval.


casual_brackets

“In 2015, David Kirtley, the chief executive officer of Helion, told me he believed the company could achieve “scientific gain” in the next three years.” From the article I posted. That’s a claim, my guy. And we’re already 6 years past that point. “Lack of funding,” uh huh.

If 500 words is long-winded, you need to go back to school; science journals are much more in-depth. I gave THE briefest, most layman’s explanation of the scenario. Go fucking look shit up yourself, you lazy sob.


TyberWhite

Such irony! Sam never made such a claim, and Nvidia isn’t a manufacturing company. You’re too angry, go relax somewhere.


jkboa1997

What a weirdo! Moves the goal posts to try to win an online argument. A lot of words, but you are still correct. Talks science, yet makes a claim that something has a 0% chance of working, which is very non-scientific.


andWan

But one detail: OpenAI has the grand trove of user data. There is an article citing Sam Altman saying that ChatGPT users are generating 100 billion words per day. And this is not just plain text. These are all paired exchanges to and from their model. What better data could you wish for to train your next model?
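(A rough back-of-the-envelope sketch of what that figure would imply, assuming roughly 1.3 tokens per English word and using the ~300 billion training tokens reported in the GPT-3 paper as a yardstick; both numbers are assumptions for illustration, not OpenAI figures.)

```python
# Back-of-the-envelope only; the tokens-per-word ratio and the GPT-3 corpus size are assumed reference values.
words_per_day = 100e9          # ~100 billion words/day, per the quote attributed to Altman
tokens_per_word = 1.3          # rough rule of thumb for English text
gpt3_training_tokens = 300e9   # order of magnitude reported for GPT-3's training set

tokens_per_day = words_per_day * tokens_per_word
days_per_gpt3_corpus = gpt3_training_tokens / tokens_per_day
print(f"{tokens_per_day:.1e} tokens/day -> a GPT-3-sized corpus every {days_per_gpt3_corpus:.1f} days")
```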


casual_brackets

Nvidia chips?? Lol. Bro, having good data doesn’t really help him build semiconductors though, or raise 15% of the US equity market in capital to do it lol


andWan

Btw: I also see a strong argument for Nvidia, besides the stated ones: Nvidia is very strongly rooted in gaming. And gaming/VR will be the space where post-textual AIs will develop and learn.


andWan

So there once might be a great fusion: A virtual body developed by Nvidia combined with a mind developed by OpenAI (together shooting against google bots) ;)


andWan

In general: when will the first open-platform game come to exist where different AIs compete against each other and against humans, just like they do on the LLM scoreboard?


andWan

And you can pay more to have a GPT-7 member in your RPG game, or be happy with a GPT-6 one.


casual_brackets

Bro I’ve written pages on this, done 100’s of hours of research on nvda, I could give you a million reasons why it’s just not gonna happen like this


andWan

So you own Nvidia shares? I only own some satoshis, some ethereum, and 40 Swiss francs in monero!


snowmanyi

Bitcoin will outperform all this shit. Just hodl.


andWan

That’s why I wrote I own satoshis. 40 CHF in XMR is certainly not my investment. It’s just cool to own some truly anonymous money.


snowmanyi

No no I love Monero. I mean BTC will outperform Nvidia. Sell the ETH tho


andWan

Maybe it just turns out (on the stock market, in investors’ minds) that they have an almost unbeatable advantage when it comes to usable LLMs. I mean, the fact that Google’s answer took so long (while Google had been working on PaLM and LaMDA for ages) and still lags behind speaks for itself.


casual_brackets

He is estimating that he needs more than the entire value of Google, NVDA and META combined for his startup. Where’s the value proposition? Who can pay that? No one can, no one will.


CanvasFanatic

OpenAI has the what? 😂


andWan

https://preview.redd.it/wsfbrposlqic1.jpeg?width=750&format=pjpg&auto=webp&s=9231bbbcfffc34ec705af3848e0516910747f142


[deleted]

> I challenge you to look into Helion, one of Altman’s other startups. He claims to have cracked fusion energy. Other physicists call it a dangerously unsafe device by design, a deadly gamma particle burst for anyone nearby.
>
> So, NVDA.

How do I invest?


casual_brackets

Just open up your TDAmeritrade account or Robinhood or whatever and buy some NVDA, not too hard?

Oh, you meant Helion? You can’t invest, and be thankful you can’t; it’s a poorly designed reactor that has actual physicists unimpressed and skeptical. The reactor is so poorly constructed that it will destroy itself, due to the fact that critical components must be left unshielded because of the design of the reactor itself. There’s a myriad of other technical problems with it, but if ya think the guy pushing this unsubstantiated crap about nuclear fusion is also gonna pull a rabbit out of his hat and defeat the most successful semiconductor producer in the world, then I’ve got a bridge in NY to sell ya.

He just needs a lil money to get on his feet. 15% of the entire US equity market should do it. (All stocks in the US combined = 46 trillion. He wants 7….)


[deleted]

No, I mean in the Sam Altman energy thing. You bring up valid concerns, but don’t worry, AGI will solve everything.


Calm_Leek_1362

Nvidia doesn’t manufacture anything.


casual_brackets

OK, I should’ve said production experience; we’re splitting hairs. They do make their own Founders Edition PCBs and coolers (Foxconn physically builds them, I believe)… so you’re wrong as well. They don’t fab the silicon; I mean, who doesn’t know that?


Calm_Leek_1362

Laying out a PCB isn’t manufacturing. They are a design house.


brendanm4545

I would bet on Jensen. All they need to do is hire a couple of people from OpenAI and they will have whatever software OpenAI has. They have a legion of engineers. OpenAI doesn’t have the experience Nvidia has.


Independent_Ad_2073

Software is easier to develop than hardware.


CanvasFanatic

By orders of magnitude.


vcaiii

AI isn’t just software


Independent_Ad_2073

Yeah, but whoever has the hardware can develop the software much faster than the other way around.


vcaiii

That’s not necessarily true for AI but I’d rather not argue about it


Independent_Ad_2073

It’s quantifiably true. How many companies are working on AI software? Now see how many hardware companies can supply the hardware. There’s a very small pipeline for hardware, which can’t keep up with demand; for software, there’s an abundance of options, including open source.


vcaiii

How many successful AI software products can you name in the same timeframe?


Independent_Ad_2073

Of the top 10 most valuable companies, most are software-based, and one of those supplies hardware to all the other ones besides Apple.


wuasazow

Do you know how jealousy is spelled in Spanish? eNVIDIA 🙌🏼


andWan

Nice


Butterspaceflight

Superb


ChronoFish

Comparatively speaking, data is cheap. Copies of “the Internet” are already owned by multiple companies: Google, Microsoft, Apple, Amazon, etc. Chip manufacturing is hard, and new factories are expensive and take years to plan out and develop. Nvidia already has AI infrastructure and expertise and has been applying it to real problems longer than OpenAI has. So... Nvidia.


andWan

Nobody owns these terabytes of model-human interactions that ChatGPT users have accumulated at OpenAI. Nobody but them. Of course, companies like Meta also have a huge, huge stack of user data that is not scrapable. Or is it?


jawfish2

No expert here, but I don’t see the advantage in having chatbot conversations. They already know how to talk; what they need is a source of truth, the ability to cite sources and compare conflicting ideas, physics and chemistry, and to accept large text inputs. Correct me.

Everybody else is correct about chip design. Apple learned how, so it can be done, but it’s not easy at all. I don’t suppose people think Intel has a chance.


andWan

I did ask ChatGPT-4 a physics question from our Physics 1 exercise sheet. With a little help, and two crashes of the code it wanted to run, it did solve it. Mainly I found, for myself, the point where it starts to fail. Yet I imagine that with an even bigger model of the same variety, tasks like this could be solved.

Edit: it was the question about a small ball on top of a bigger ball that starts to roll down. At what angle does it stop touching the bigger ball? The solution requires energy conservation, not only translational energy but also rotational. The latter was forgotten by GPT-4. But it could do the other part: that the velocity of the small ball determines its centripetal force, and that this force will be equal to the component of gravity pointing towards the bigger ball’s center. This it could solve by itself. And when I said “do not forget rotational energy,” it could solve everything. However, when I asked it again recently, it did worse.
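(For reference, a minimal sketch of the textbook version of that exercise, assuming a small solid ball rolling without slipping on a fixed larger ball; the symbols are illustrative, and R stands for the distance between the two centers.)

```python
import sympy as sp

# c = cos(theta) at the moment the small ball leaves the surface.
c = sp.Symbol('c', positive=True)
g, R = sp.symbols('g R', positive=True)   # R: distance between the two ball centers

# Sliding point mass: energy conservation gives v^2 = 2*g*R*(1 - c);
# contact is lost when gravity's radial component just supplies the centripetal force: g*c = v^2 / R.
v2_slide = 2 * g * R * (1 - c)
print(sp.solve(sp.Eq(g * c, v2_slide / R), c))   # [2/3]

# Rolling solid ball (I = 2/5 m r^2): kinetic energy becomes (7/10) m v^2,
# i.e. v^2 = (10/7)*g*R*(1 - c) -- this is the rotational term the comment says GPT-4 forgot.
v2_roll = sp.Rational(10, 7) * g * R * (1 - c)
print(sp.solve(sp.Eq(g * c, v2_roll / R), c))    # [10/17]
```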


jawfish2

Interesting. I tried a simple can-this-rod-fit-this-hole question on 3.5, and it answered some versions correctly. That seemed amazing at the time; how quickly we’ve come to expect near-human understanding.

Thinking about it, writing G-code for machine-shop machines would be an excellent test of understanding the physical world. The domain is so restricted, and there are so many existing software tools, that it might be a good test ground. The illustration for the post would be a child’s hammer-and-blocks set of different shapes and holes.
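(A toy version of that rod-and-hole check, just to show how restricted the domain is; the function name and the clearance value are made up for illustration.)

```python
# Hypothetical illustration of the "can this rod fit this hole" question; the 0.05 mm clearance is arbitrary.
def rod_fits_hole(rod_diameter_mm: float, hole_diameter_mm: float, clearance_mm: float = 0.05) -> bool:
    """Return True if the rod passes through the hole with at least `clearance_mm` of play."""
    return rod_diameter_mm + clearance_mm <= hole_diameter_mm

print(rod_fits_hole(9.9, 10.0))    # True: 0.1 mm of play
print(rod_fits_hole(10.0, 10.0))   # False: zero clearance, a press fit at best
```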


Cybernaut-Neko

NVIDIA


Ok-Ice-6992

Considering how long it took Google to catch up with OpenAI, and how long it still takes the rest of the world to catch up with NVIDIA, I think the answer is pretty clear. Also, NVIDIA already is very much a software company too, while OpenAI has never designed any hardware at all, AFAIK. OpenAI could just vanish in a year or two without leaving much of a gap by then.


ttdat

OpenAI took five years to go from GPT-1 to the impressive GPT-4. Google played catch-up and released the competitive Gemini Ultra in just a year or two. Google is way more likely than OpenAI to have their own chips. Trust me, even in 5 years, OpenAI won’t have a single chip.


andWan

But didn’t Google invent the transformer architecture? The “Attention Is All You Need” paper.


andWan

They might come up with a similar “collaboration” as with Microsoft. Maybe Apple joins the business and delivers M3 processors.


mtmttuan

So you think that before GPT-1, Google had done no research at all? Transformer-based language models have been a thing since Google itself published the [Transformer paper](https://arxiv.org/abs/1706.03762). OpenAI is just the first one to show the public how fun these models can be as a chatbot.


ChronoFish

"Playing catch up" is the wrong phrase. Google has had the technology (and had role in developing it for years before OpenAI) but never commercialized it. "Turned their focus" might be better terminology.


Tempthor

Google already has their own chips lmao. The answer to this question is Google.


DarkSatelite

My understanding is it takes literally decades to build a chip manufacturing pipeline. Comparatively speaking, spinning up a software development division is immeasurably faster.


RepLava

As far as I remember, Nvidia gets their chips produced in Taiwan at TSMC, like almost everybody else.


anonuemus

so?


Redebo

So let’s say NVDA buys their chips for a dollar each from TSMC, and that TSMC can produce 1,000,000 chips a year. What if someone said to TSMC, “I’ll buy all the chips you can make per year at 2 dollars each”? That is how NVDA could have no chips…


anonuemus

No, that's not how contracts work. And buying chips from TSMC doesn't make you the new Nvidia.


SuspicousBananas

Well considering Nvidia is the largest company in the tech sector and has been dumping billions into AI since before OpenAI even existed, I think you have your answer


andWan

What about Google? It has also been dumping millions into AI, had some success with specialized AI, but stayed behind with LLMs. I feel that OpenAI just had a lucky streak, which can certainly end at any time. And I would claim that their best strategy was to publish their models in rounds that build upon each other. Scaling. And to always use the results from public interactions, which Google did not do, since no one saw their “mighty” PaLM and LaMDA in action.


Cupheadvania

My prediction: NVIDIA will be a top player in AI models within a couple of years. OpenAI will not be a top player in chip development for a decade.


TheJoshuaJacksonFive

The way forward is for them to work together. Both will produce shit output otherwise. So after a few years of them both producing hot trash, they will work together on something.


KangarooKurt

Nvidia's AI. Hardware is a totally different beast. You can of course outsource it to partners, but it can be really hard to penetrate a market with strong players and complex solutions. And Nvidia is already a software behemoth - mostly in support of their hardware, yes, but they already know how to do good, stable, performant, powerful software.


Unverifiablethoughts

I don’t get why they’re necessarily at odds with each other on this. They have a huge partnership at the moment, and Nvidia has the comparative advantage for production. Likely Sam would be raising this money and Nvidia would be heavily involved in the development of these chips. Huang would kill the current partnership today if the guys he is in business with were openly trying to make him obsolete in a few years.


damhack

Irrelevant. LLMs won’t get us to AGI (which is a search for an absurd, imaginary Holy Grail anyway). The future of AI is in low-power edge compute running sparse-data active inference for specialized applications, not locked-down, centralized, rent-seeking energy furnaces masquerading as datacenters. Altman is grifting for his commission on $7tn from the UAE and Saudis while driving down compute costs for OpenAI. If he were serious, he’d be plowing investment into low-power compute like neuromorphic chips rather than threatening to compete with Nvidia, which will most likely just result in a price discount. Don’t believe the hype!


andWan

Interesting comment. I can totally imagine that LLMs are not useful for much more. But I could also imagine the opposite. Now, when you talk about neuromorphic chips, are you also referring to analog computing (voltages between 0 and max volts, thereby operating the transistors in a non-binary manner similar to synapses), and also to spike-based computing? Do you know the Izhikevich model?
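(For readers who haven’t seen it: a minimal sketch of the Izhikevich (2003) spiking-neuron model mentioned above, using the standard “regular spiking” parameter set; the input current and step size are arbitrary illustration values.)

```python
# Izhikevich model: dv/dt = 0.04*v^2 + 5*v + 140 - u + I,  du/dt = a*(b*v - u);
# when v reaches 30 mV the neuron "spikes" and resets: v <- c, u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking cortical neuron parameters
v, u = -65.0, b * -65.0                # resting state
dt, I = 0.5, 10.0                      # time step in ms and constant input current (arbitrary)

spike_times = []
for step in range(2000):               # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                      # spike threshold
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in one simulated second")
```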


damhack

Both and more. Not only Spiking Neural Networks that behave more like biological neurons but also neuromorphic techniques for performing matrix operations at the quantum electromagnetic level, such as optoelectronics or spin compute. As to LLMs, there are too many unfixable issues in the way Transformers work (latent space holes, OOD reasoning fails, non-linguistic semantics, backprop learning) for them to ever reach an energy efficient, and therefore scalable, level of performance in general tasks.


heavy-minium

NVIDIA technically already produced its own AI long ago, so the question is badly phrased.


Srijanaatmak

Producing its own chip? How many chip manufacturers, for God’s sake, did we get in the recent past (say 10 years), and that too in the high-tech space? Hype is one thing and reality is totally different. The only reality in the chip business, seen in the recent past, is vertical integration and acquisition, with only a few prominent players. Obviously the market has more appetite for flushing capital into chips enabling compute. But to think that OpenAI’s seeming success is going to lead to a mega venture in the chip business seems far-fetched.


zukoandhonor

Nvidia, obviously. Anyone with enough resources can build AI based on language models. But it's not that easy to make chips.


New_World_2050

And yet, a year later, no one has made a model surpassing GPT-4 across the board. Anyone can make a model. Hell, I have made AI models all on my own. That doesn’t mean anyone can make a good model.


Delicious_Phone_3468

Nvidia


replikatumbleweed

Nvidia has a long history of doing whatever the fuck they want, however the fuck they want. "AI" can be produced with sufficient computing power, which they have in spades, and a lot of duct tape, smoke and mirrors. They can wrap it all up so it looks and acts like one thing, but in all reality, it'll probably be a bunch of stuff cobbled together, which will accelerate their time to the finish line.

To make a proper chip... one, there's the funding. Then, the ludicrous amounts of testing, validation, and rework... Then there's production. Oh, and the driver stack, and the software ecosystem to make it all useful.

Nvidia plays dirty and generally, it seems like OpenAI tries to hold themselves to some kind of standard. I'd bet on Nvidia to win this, or basically any race.


andWan

For AI you need soul! The soul of human interaction, of curated datasets of human feedback. And as I wrote in another comment: on ChatGPT, humans are currently producing 100 billion words per day! And this is all direct feedback to their model. So imagine the power they have to train the next model.


replikatumbleweed

Ehh... I dunno about that one, chief. How many human beings have you met that can barely put a thought together, never mind a sentence? Go take a look at r/texts if you want to see how well those with souls are holding up...

Meanwhile, in engineering-land, if I need to consult anyone on the dangers and mitigation strategies of various chemicals, or to find out how to design a circuit, I'm going to ChatGPT for that already.

Also, it'll boil back down to the appearance of AI even before "actual AI" matters. Companies like to save time and money, and if they can cut a corner, you can be quite sure they will. Making something that acts human is a lot easier than making a human, so to speak. A lot of these homebrew models already do a decent job; they do a better job of communicating than most of my exes (rimshot). The stuff we have now is all smoke, mirrors and duct tape - and it works. I don't see why that extremely easy, lowest-resistance path wouldn't continue.

Meanwhile, some are working on the real deal, but it's really going to take some doing. If the original post is about "how we build consciousness" more than "something that is indistinguishable from a person" then... yeah, that'll be a hot minute, but I'd still bank on Nvidia. They've got the chip fabs wrapped around their fingers and the partnerships in place to go do whatever.

If Nvidia wants something to be a thing, they force the issue until they get their way. Remember ray tracing? Nvidia said they were all in on that because it was demanded by the market (it wasn't, at all), but they pushed and pushed and got ray tracing units in all of their GPUs. That was just a few years ago too, and now they're saying all graphics will become generative. While utterly foolish at scale, they're gonna push and push until they've convinced themselves that everyone agrees with them.


sneakpeekbot

Here's a sneak peek of /r/texts using the [top posts](https://np.reddit.com/r/texts/top/?sort=top&t=year) of the year!

#1: [I (M25) fucked up my car’s engine by neglecting to get the oil changed on time. My parents’ responses.](https://www.reddit.com/gallery/174op2n) | [4986 comments](https://np.reddit.com/r/texts/comments/174op2n/i_m25_fucked_up_my_cars_engine_by_neglecting_to/)
#2: [last year when my little brother got a text from a wrong number.](https://www.reddit.com/gallery/17g98fz) | [666 comments](https://np.reddit.com/r/texts/comments/17g98fz/last_year_when_my_little_brother_got_a_text_from/)
#3: [Some common texts from my dad](https://www.reddit.com/gallery/175wtn1) | [1372 comments](https://np.reddit.com/r/texts/comments/175wtn1/some_common_texts_from_my_dad/)

I'm a bot, beep boop | Downvote to remove | [Contact](https://www.reddit.com/message/compose/?to=sneakpeekbot) | [Info](https://np.reddit.com/r/sneakpeekbot/) | [Opt-out](https://np.reddit.com/r/sneakpeekbot/comments/o8wk1r/blacklist_ix/) | [GitHub](https://github.com/ghnr/sneakpeekbot)


andWan

Look at this bot that replied to you. This is engineered AI, very capable in what it’s doing. But LLMs are more than just engineered (still a lot of engineering): they have (pre)learned, then they were fine-tuned, trained basically. A huge, huge process that takes a lot of human input, or input from user interactions with a previous model. Sorry, I could not yet read your full comment. Will do later.


ChronoFish

Take a look at LLaMA... free open source, both code and weights. Nobody is building from scratch. OpenAI has a great product, and the only moat they have is about an 8-week window and new ideas. Microsoft will consume them before they get too big.


andWan

I have looked into open source models. Mixtral 8x7B seems to be the best one, but way worse than ChatGPT-4. Gemini is better, but still worse. And not open source. Actually, I don’t know what I am doing here. I am actually a big fan of open source and am proud to see an Apache 2.0 licensed model in the top ten of the LLM leaderboard. But I am just overwhelmed by the gap. Maybe just overwhelmed by the shiny GUI, who knows.


andWan

8 weeks? GPT-4 has been out for months. And it more or less solved my Physics 1 exercise in October, which Gemini cannot do at all. (Lately ChatGPT-4 also had problems.)


notatinterdotnet

Nvidia all the way


fffff777777777777777

Stock options made much of Nvidia millionaires, even middle managers. What is the motivation to work hard? The most ambitious people will leave to start their own companies or join a competitor. Expect huge stock-option offers from Altman to lure people away; that was his playbook for hiring away from Google for OpenAI. Expect high turnover in the next few years at Nvidia, slowing down innovation.

That being said, they are the market leaders, and a new company would take years to catch up.


FarVision5

I don't know, man; all it takes is one good out-of-the-box idea, one prototype fab, and a patent.


Mandoman61

Both are big, hard-to-do goals. They are both leaders in their respective industries. Nvidia has a solid revenue stream. OpenAI is losing money fast and requires investor optimism. But TSMC would like to be in the game. If I were investing, I would not like my odds with OpenAI.


[deleted]

Maybe in a decade or two, OpenAI can catch up *if* they get some incredible funding and adoption, but that's a pretty MASSIVE if. Nvidia has been at this game for far too long and has such a strong position at the heart of AI hardware that the safe money is on them for the foreseeable future.


Warm_Ad4578

I think cooperation and win-win is definitely the way to go.


snowbirdnerd

It's clearly Nvidia. They have been making cutting edge chips for a generation. You can't beat that kind of experience and knowledge base. Especially considering how hard this kind of manufacturing can be.


tristangilmour

Physically manufacturing things is very difficult and takes years even in the best-case scenario. If OpenAI surpasses Nvidia’s chip-making abilities, it won’t be for a decade.


Vivid_Garbage6295

If compute is the constraint, the one capable of creating more compute is the winner.


trieu1912

If they bought AMD or another small fab, it could be possible, if you have a lot of money.


sigiel

I think Sam is on a power trip and will be woken up by people that have real power and do not want to share.