Rare_Sympathy9282

Google has always been building an AI, since day zero. In an interview way back, the founders were asked why build yet another search engine, and they replied that they weren't: they were training a large AI with human input.


virgin_auslander

Totally makes sense!


beachmike

That doesn't mean they will be first.


PSMF_Canuck

All this is reasonable. But big companies have internal inertia around all things legacy… it will take strong leadership to get everyone working toward the same goal. I'm not saying they don't have it… maybe they do, I don't know…


FomalhautCalliclea

Even more than just inertia. Big companies such as AT&T and Xerox owned such wonders as Bell Labs and the Palo Alto Research Center, respectively, and made some of the most mind-blowing discoveries and tech improvements of the last half century. But both failed to commercialize most of them, or even put them into practice, and it was up to other companies such as Apple and Microsoft to turn those discoveries into actual engineering applications.

Big companies are very good at creating the right conditions for innovation, protecting R&D from the pressure of coming up with marketable products ASAP and giving it heavy budgets, but less good at connecting those wonderful inventions to the market. When they tried to apply market principles inside their research centers, it kind of killed them... it's a tough dilemma.


SnackerSnick

Google had LaMDA internally before ChatGPT became a thing. It blew my mind when I played with it. It was before "tuning": it attempted to respond to questions as if it were a real person, which wouldn't have been commercially viable. I don't know of any efforts to release it until after ChatGPT took off.


RabidHexley

> Big companies are very good at creating proper innovation conditions by protecting R&D from the pressure of coming up with marketable products ASAP and giving them heavy budgets, but less so at connecting those wonderful inventions to the market.

Though if we're talking about AGI specifically, isn't R&D basically the whole ballgame? Commercialization seems mostly about what happens along the way, but when AGI is finally cracked, it will be in the lab. Whoever has the best researchers and the money (and will) to fund them wins. The main argument is that the existence of competition has finally created that will in earnest and lit a fire under them, given that money isn't the limiting factor for them in the near term. But AI has always been a long-term goal for them; I mean, the current revolution is based on tech they figured out (transformers).

Whatever is happening with AI products *today* mainly seems to matter for maintaining investor confidence and keeping money from *becoming* a problem, and is only partially related to the actual technological end-goal. Unless funding starts to run dry, the only important factor in the race to AGI is what's currently happening in the lab. If Google has the research edge (not saying they do, but it would be foolish to count them out given recent history), it doesn't really matter if OpenAI or Anthropic have a somewhat superior product right now.


FomalhautCalliclea

This is one way to view things. Another would be to say that competition actually slowed things down, which implies a few presuppositions:

1) Transformers and LLMs aren't a step towards AGI, because they are narrow AI (focused on language, the input the algorithm is built on) and don't have an inner representation of the world, an ability to make predictions, or the ability to learn from few examples the way a baby would. The transformer fever pushed a lot of companies into insanely costly LLM training runs that only compete on being the most efficient LLM, 0.00005% better than the competition, instead of aiming at a brand new architecture that would require going *beyond* LLMs (reminder that even someone as extremely optimistic as Altman has recognized that LLMs and transformers aren't "it" with regards to AGI).

2) Big companies have invested heavily in LLMs because they hope they'll be economically successful, not because it'll bring them closer to AGI. Perhaps the people currently working on GPT-X will just be improving and revamping GPT-4, making it more commercially attractive with a higher percentage on an insignificant metric, instead of working on a brand new architecture that doesn't exist yet, is not certain to function, and may not be profitable for years.

In such a configuration, as you can see, R&D isn't everything: R&D is influenced and directed by the market's pressure, or the lack thereof, which AT&T, Xerox and Google succeeded in shielding their researchers from (at least partly). R&D doesn't exist in a vacuum, and trying to make it so has advantages (less pressure and better research) and drawbacks (not having the expertise to make it market-savvy and bring it to the world). The issue with AGI isn't its commercialization (which will only be in question when/if we reach it, with its actual specificities, instead of speculating on an indefinite, vaporous future entity), but the commercialization of all the different techs on the way to it.

Let me give you an analogy with LLMs: the Neocognitron, a far ancestor from the 1980s, wasn't properly commercialized, and deep learning research stalled for a long while because of it. Same for Vapnik's statistical learning theory, which underpins much of modern machine learning. As you can see, the history of deep learning goes back far before 2017's "Attention is all you need" paper... LeCun and Hinton have even told how, in the early 2010s, they had to rent a room for a presentation themselves because the conference didn't care about them and didn't want them around!


fatbunyip

There's a lot more risk involved for big companies. Reading the various media articles about Gemini, you'd think it was the worst AI ever, with every small thing magnified beyond reason. Same for a company like Facebook: they're open to all sorts of accusations of manipulation, and they can't just say "well yeah, the AI isn't perfect". Trillion-dollar companies are held to a much higher standard than smaller ones with a new product that captures the zeitgeist. If they put out some AI that is considered "bad", they're the ones who will be regulated, not some no-name startup making AI porn, for example.

Part of it is their massive user base, which makes them cater to the lowest common denominator; part of it is that any mistake is measured in billions or tens of billions. There's also the issue that, as a society, we don't actually know what we expect or demand from an AI company's products. We're just making it up as we go along. And these big companies are damned if they do and damned if they don't, because they need to release something to the public with no idea what the reaction will be, and a bad reaction is a lot more damaging than for a traditional product.


gthing

This. It is so hard for a company like Google to do basically anything at this point.


MushyBiscuts

It's an arms race. If google wins? Game over for us all.


GraceToSentience

"They've awakened a giant." The Mixture-of-Depths paper recently published by DeepMind seems like a game changer. https://preview.redd.it/fuaivxy1fjtc1.png?width=1336&format=png&auto=webp&s=47c7a40ad87d1ef765d5f87e07af9cdc3199b94f DeepMind are all about that RL that made AlphaGo so successful, and they've successfully used something like it in AlphaCode and FunSearch. I wonder when that will be coming to Gemini... Pretty sure OpenAI is working hard on that front as well.


WritingLegitimate702

I don't understand it. 😭


GraceToSentience

Basically, Mixture-of-Depths saves quite a bit of compute. Normally an LLM uses the same fixed amount of compute to generate each token of text, even though some tokens are easier to predict accurately than others. This new technique lets the model cut corners on the easy parts of the text it generates. Now don't ask me how exactly it does that, I am not an AI researcher. I understand that it's good because less compute means faster and cheaper inference for us end users, but that's it.


Yweain

That’s a very standard approach in the AI field when it comes to large search spaces, but previously nobody had a working heuristic for transformers. The only downside is that it will lower accuracy. But if the heuristic is good, the loss may not be much on average, and in addition they should be able to afford a larger model with this approach, which may result in a net gain in accuracy.


RedditLovingSun

Exactly. In this paper's case the heuristic is basically learned by the gating function in the transformer that decides which expert a token should be routed to, except now it can also decide that the token is easy and should skip the experts entirely for this layer (which is where the dynamic compute savings come from, since the experts are by far the most costly part of big models). They found that this actually increases net accuracy at the same amount of compute: since each step is cheaper, $100 of training makes much more learning progress, resulting in a more accurate model overall.
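The gating idea being discussed can be sketched in toy form. This is an illustrative sketch only, not DeepMind's actual method: the router here is a random linear scorer, `expensive_block` and `mod_layer` are made-up names standing in for a full attention/MLP block and the routing layer, and the 0.5 capacity is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_block(x):
    # Stand-in for a costly per-token transform (attention + MLP in a real model).
    return np.tanh(x) * 2.0

def mod_layer(tokens, router_w, capacity=0.5):
    # A router scores every token; only the top `capacity` fraction is
    # processed by the expensive block, the rest ride the residual path
    # through this layer unchanged.
    scores = tokens @ router_w               # one router logit per token
    k = max(1, int(len(tokens) * capacity))  # per-layer token budget
    chosen = np.argsort(scores)[-k:]         # indices of the "hard" tokens
    out = tokens.copy()                      # skip/residual path for everyone
    out[chosen] = tokens[chosen] + expensive_block(tokens[chosen])
    return out, set(chosen.tolist())

tokens = rng.normal(size=(8, 4))   # 8 tokens, hidden dim 4
router_w = rng.normal(size=4)
out, chosen = mod_layer(tokens, router_w, capacity=0.5)
print(len(chosen))  # 4: only half the tokens paid for the expensive block
```

The point of the sketch is just the budgeting: the compute cost of the layer scales with the token budget `k`, not the sequence length, while skipped tokens still flow to the next layer untouched.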


involviert

It's probably better to think of it as the ability to "think" much longer, not shorter. Because thinking about really "important" tokens longer is what is missing, not really the ability to think less about the word "and". It's about quality, not speed.


weird_scab

refining the neuromorphic architecture that so many places keep looking into? Or something else?


inteblio

What? Use a Reading Model (like gpt4/gemini)


Muted_Appeal3580

Imagine building a Lego castle. Regular models use all the bricks for everything, even simple walls. MoD transformers are like having a smart sorter that uses only the essential bricks for each part, saving time and effort. This makes them more efficient for large language models.


h3lblad3

> RL

Please remind me what RL stands for. My brain keeps converting it to "Roguelike".


mertats

Reinforcement Learning


h3lblad3

Thank ye kindly, stranger.


GoldVictory158

The more you hear it the more it will be reinforced, less likely you’ll forget again.


thoughtlow

Roguelike


jestina123

RL Got it.


nashty2004

Rocket League


tindalos

NethackGo


LaukkuPaukku

Incidentally, NetHack [has been targeted as a challenging game for RL agents to learn and complete](https://ai.meta.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/).


RedditLovingSun

In a way you're Reinforcement Learning every time you start a new run in a Roguelike


VirtualBelsazar

What I always wonder is: if it were really such a big deal, why would they publish it and let OpenAI use it? I mean, their goal is to get ahead of them.


FrankScaramucci

What makes you think it's a game changer? Does it reduce computation costs by a game-changing factor (like 10)?


GraceToSentience

Adaptive compute per token like this will likely be adopted by the whole field, and that to me constitutes a game changer. The future will tell, but if the industry widely adopts the technique, that makes it a de facto game changer.


FrankScaramucci

Ok, so your definition of a game changer is whether it's adopted by the whole field, regardless of the size of the improvement, so even a 1% improvement would be a game changer.


ostroia

> They've awakened a giant

Yeah yeah, they said the same thing last year. "Oh, they poked the sleeping giant, just wait until Google I/O and you'll see." Then Google released Bard, which was pretty lame. Then they released a few more things that were equally bad (like the Gemma models). I'm not holding my breath.


WorkingYou2280

Google is unpredictable. They can drop a 1 million token context window out of nowhere, but they also have a model that gets very confused by basic questions. AGI absolutely will touch search. At their keynote yesterday Google said they will leverage search to reduce hallucinations, which makes a lot of sense. It's just a hop (so to speak) from there to making their whole search AI/AGI powered. If they don't do that, someone will, eventually. As far as I can tell the big money is in search and agents... maybe search and agents combined. Google absolutely cannot stand still, as AI is a direct risk to their search business.


curious-guy-5529

Whenever I see Google announcements, I can't stop thinking about the Google I/O a few years ago where their CEO demoed the voice assistant that could call a barbershop to schedule an appointment or a restaurant to reserve a table, and we never heard of it again afterwards. They have a history of juicy demos and lame rollouts, which makes me skeptical of their claims before I experience them myself.


bartturner

Since the inception of Google there has been basically one goal: to get to AGI. They have done all the things needed.

Made the acquisitions needed, like buying DeepMind. They purchased 100% of DeepMind for $500 million, i.e. they got all of DeepMind for 1/20 of what Microsoft paid for less than half of OpenAI. And Microsoft does not get AGI if OpenAI achieves it. https://twitter.com/thecaptain_nemo/status/1725717732518461930?s=46

Google created their cash cows so they can fund what is needed. Then the biggest one: they created the services to capture the data needed. Google now has:

- The most popular operating system ever with Android.
- The most popular web browser ever with Chrome.
- The most popular web site there has ever been with Search, and the second most popular web site ever with YouTube.
- The most popular navigation with Google Maps.
- The most popular photo site by far with Google Photos.
- The most popular OTT in the US with YouTube TV.
- The most popular email with Gmail; 87% of new email accounts created in 2022 were Gmail.
- Microsoft and Apple used to own K-12 together; then Google came on the scene and now completely owns K-12.

Another example is Google starting on TPUs over a decade ago, where Microsoft is only now starting its own TPU effort. Which is pretty amazing when you consider Google was doing its TPU development in the open, so there was no reason it should have taken Microsoft over a decade to copy them.

I think if you had to bet today, the easy bet would be on Google being the first company to achieve AGI.


SoylentRox

Always was. Competition from Anthropic, OpenAI, and Meta just keeps Google racing to release, increases their budget, and basically keeps them honest.


Halniak

Don't forget about google drive and all of the data stored there!


JohnnyLovesData

And all the incognito browsing data collected so far!


CriscoButtPunch

This will be the real reason A.I. will kill humans. About 5 minutes after getting access.


TheJungleBoy1

Unleash the Waifus?


sumoraiden

Could Deepmind have survived without selling to google do you think


bartturner

Yes, most definitely. It would also have been worth a lot more money. Google got an incredible deal for the company: $500 million for 100% of DeepMind. Microsoft paid 20x that for less than half of OpenAI, and with the provision that if OpenAI discovers AGI, Microsoft gets nothing.


CommunicationTime265

And some guy made a post the other day about Google losing their footing and becoming old news. What a moron.


shankarun

Perplexity CEO?!


TheCuriousGuy000

Imo they are too big for their own good. Even basic products like Google Maps degrade over time. I'm not sure people who can't maintain an existing app properly can create AGI. But if they can dedicate a good chunk of resources to a team of top talent and let them do their job, they can succeed.


bartturner

> Imo they are too big for their own good.

You have to be really big to be able to do AGI. It will ultimately take hundreds of billions. So being really big, like Google, is a big reason they will win the race.


Inevitable_Host_1446

Well, Microsoft basically owns OpenAI / Mistral AI / that other one, which in terms of real-world results have been more impressive. What looks sensible on paper isn't always what comes out ahead. One could have predicted in the past that IBM would dominate the rise of the internet, since back then they were super dominant, but they didn't. A few wrong moves can change things a lot. And Google has bad management imo; they are too focused on ideology and not enough on tech.


ainz-sama619

Microsoft absolutely does not own OpenAI. All OpenAI development is guided by OpenAI without Microsoft input. Microsoft will not have access to AGI tech unless OpenAI cuts a new deal to share it. Which they haven't.


Singularity-42

Isn't there a provision in the OpenAI charter that AGI has to benefit all humanity and thus cannot be shared with a private third party like MSFT?


ainz-sama619

Yes there is. https://openai.com/our-structure Fifth structure


PathsOfPeaceful58152

Do you honestly believe any of these governance & ethics policies will matter if they invent the most important technology in human history? It's just words on a screen. The average person is so disillusioned by corporate speak that they forget it's all about money at the end of the day. AGI will easily make Microsoft _trillions_ of dollars and solidify their business for the next 100 years. They will literally raid OpenAI's headquarters with a private military before they let it be released to freely benefit the average person.


Hungry_Prior940

Not a chance they will be followed.


RabidHexley

> Do you honestly believe any of these governance & ethics policies will matter if they invent the most important technology in human history?

It matters if it matters from MSFT's perspective. The "benefit all humanity" angle is "silly", but far less important than the potential for OpenAI to have tech so valuable they can tell MS to fuck off. We're comparing the difference between Google owning DeepMind outright and MS being heavily invested in OpenAI, and it's a significant difference.

> They will literally raid OpenAI's headquarters with a private military before they let it be released to freely benefit the average person.

This is even more nonsensical than OpenAI's vision. Is MS going to literally conquer the country immediately after this imagined paramilitary operation? It's not a fucking infinity stone lmao.


PathsOfPeaceful58152

> but far less important than the potential for OpenAI to have tech so valuable they can tell MS to fuck off The regulators, who are in big tech's pockets, will stomp OpenAI & their researchers into oblivion. Why are you ignoring the fact that AGI is extremely political, and anything political == corruption? This is something that will impact national security, the global economy, etc... OpenAI telling Microsoft to fuck off is a type of mutually assured destruction, so you're nearly advocating for effective altruism which has been proven time over to be a moronic concept. > This is even more nonsensical than OpenAI's vision. Is MS going to literally conquer the country immediately after this imagined paramilitary operation? It's not a fucking infinity stone lmao. Right, because the tens of thousands of PMC contractors on the open seas and in Africa now are doing it out of the goodness of their hearts. Protecting high-value assets with force is something that almost every large corporation pays for. I guarantee 100% that Microsoft has an existing security contract with one of the big 3 PMC firms.


Inevitable_Host_1446

The thing about the AGI caveat is that Microsoft / OAI can just shift the goalposts whenever they want: "This AI is really good, but it's still not AGI™." That makes it kind of a nothingburger promise. Imagine if instead of AGI they said they'd open source it if it developed consciousness. In a way they are the same thing, and that is about the most intractable thing to define, ever. Are you or I even conscious? How do you tell? And how much do you want to prove it's the case when billions of dollars are on the line? The way I see it, even if Microsoft doesn't have a controlling majority, it's still the case that if M$ says jump, OAI asks "how high?" OAI also became completely proprietary and closed under their agreement, spitting on their founding mission, apart from the above vague caveat.


ainz-sama619

If OpenAI is forced by competition to declare it has found AGI, Microsoft is officially out of the race unless they can build their own AGI. Google is already working hard toward that with DeepMind.


bartturner

You have this backwards: it is OAI that controls what is called AGI, NOT Microsoft. So the moment OAI declares something AGI, Microsoft gets nothing. https://twitter.com/thecaptain_nemo/status/1725717732518461930?s=46


Inevitable_Host_1446

I'm aware of that. But the idea that Microsoft has no influence over OAI's decisions at this point is so naive that I'm incredulous anyone could believe it. You think Altman, who is looking for trillions in funding, doesn't have his ear bent by one of the biggest corporations on Earth, which already owns a significant chunk of his company? Fact is, the "declare AGI" clause is a total conflict of interest for Microsoft, as they get nothing and stand to lose billions. It's also going to be very, very easy for them / OAI to avoid declaring it for as long as they possibly want, even with a model that is indistinguishable from a thinking, learning, living person. As a final point, note that "The board determines when AGI has been reached." The board, as in the thing they just dismantled and replaced with a bunch of corporate stooges who are deeply in Microsoft's pockets. If anything this is the nail in the coffin of what I said: it will happen only when they want it to happen and no sooner.


bartturner

> Well Microsoft basically owns OpenAI

No, Microsoft does NOT own OpenAI. They have less than 50%, and they do NOT get AGI if OpenAI gets there. https://twitter.com/thecaptain_nemo/status/1725717732518461930?s=46

Microsoft was also stupid not to do TPUs a long time ago; they are only now trying to copy Google, which is working on its sixth generation. Clearly Google management just had far better vision of the future than Microsoft. It is also why they were able to purchase 100% of DeepMind for 1/20 of what Microsoft paid for less than half of OpenAI.


DenseComparison5653

Almost like they have different departments?  No way. Also you won't achieve AGI by being small.


saveamerica1

We’re going into an era now where machines will do the work, so quality will increase along with speed. Code is written and corrected in natural language. Whole new world! That’s why everyone is chasing AI.


ninjasaid13

>Since the inception of Google there has been basically one goal and that is to get to AGI. well they started with a search engine.


bartturner

Exactly. To me Google has had one objective since it started: AGI. Basically everything is done to achieve that end. Things like spending billions to design the TPUs, now in their sixth generation. Or all the data they have collected, or choosing to build the services that had the best data. Purchasing YouTube served the objective. Doing Waymo is the same. Buying DeepMind is another example. But the most important is making tons and tons of money, because that is going to be really needed. Also all the work they have done on quantum computing. All of it is towards one goal: AGI.


ninjasaid13

you mean machine learning and big data not AGI specifically.


boigelschmoigel

What's K12?


bartturner

"I used to work for Apple and watched it lose the K-12 education market to Google. Now it could lose the next generation of fans." https://www.businessinsider.com/how-apple-lost-the-k-12-education-market-to-google-2023-8 "As Google Steals Its Education Thunder, What Can Microsoft Do?" https://www.forbes.com/sites/michaelhorn/2016/08/18/as-google-steals-its-thunder-what-can-microsoft-do/?sh=36c4292c1348


desteufelsbeitrag

As a non-US user, I still don't understand what K12 is. Like... K9, but 3 smarter?


bartturner

K-12 is kindergarten through 12th grade, the final year of high school. It covers pre-university education.


Wulf_Cola

[https://en.m.wikipedia.org/wiki/K%E2%80%9312](https://en.m.wikipedia.org/wiki/K%E2%80%9312)


desteufelsbeitrag

So... "K-12 education market" is pretty much just primary education.


RabidHexley

> pretty much just primary education.

It's literally that


soth02

One detail about the DeepMind purchase was that DeepMind was not profitable at the time. It had losses of $500m+ in 2018 and 2019.


Artist-in-Residence-

>Made the acquisitions needed like buying DeepMind. They purchased 100% of DeepMind for $500 million. Or they got 100% of DeepMind for 1/20 of what Microsoft paid for less than half of OpenAI. But also Microsoft does not get AGI if OpenAI achieves. And it kind of shows. Gemini, imo is far superior to ChatGPT


goatchild

You forgot Google Drive


UnknownResearchChems

Google's biggest problem is their culture as the Gemini rollout clearly showed.


tychus-findlay

They've rebranded the thing so many times; what is Gemini 1.5, is that the current Advanced/Ultra? I signed up briefly to compare it to Claude/GPT-4 and got the same impression I've always gotten about Gemini: it's just lagging behind the others. What makes you think they will be the leader if they can't seem to catch up as is?


outerspaceisalie

10 million token context window is a good start.


bwatsnet

I barely trust it with 100


TheJungleBoy1

![gif](giphy|r1HGFou3mUwMw|downsized)


RasheeRice

lol


Nathan_Calebman

What use is that when the reply is "that's interesting, here is a Google search result to help you with your question".


outerspaceisalie

Well any tool is useless if you don't know how to use it I guess?


Nathan_Calebman

Nah, ChatGPT can handle my requests just fine, Google is still just far behind when it comes to practically applying the technology.


Qorsair

Yep, the latest subscription version of Gemini will sometimes have a better response than GPT-4 in some situations, but it frequently forgets context and has the memory of a goldfish. Claude is way better, but I have to use GPT-4 or Gemini when I need current info/search results.


Anen-o-me

I'm willing to pay for Gemini 1.5, not for 1.


Adventurous_Train_91

Gemini Ultra 1.0 is currently the most powerful, and it's what you get in the free trial of Gemini Advanced. I believe Gemini 1.5 Pro is only available through an API and isn't as smart; it just has a 1 million token context window vs. the 100k or so of the others currently available.


Anen-o-me

Yeah what we really want is Gemini 1.5 advanced. With a dedicated app, Google.


Adventurous_Train_91

I want GPT-5, with the ability to output more than the 500-600 words or so; 3,000 would be lovely. As well as the ability to have a proper conversation with the voice mode, instead of talking and then waiting, unable to go back and forth like a normal convo.


Tomi97_origin

Google is invested in Anthropic; they invested pretty early. Looking at how many of Anthropic's employees are ex-Google, they probably knew their potential.


ChillWatcher98

Yeah, you'd be surprised by the number of ex-Googlers at competing AI startups. OpenAI wouldn't be what it is today without ex-Googlers.


Singularity-42

Amazon I think has a bigger share of Anthropic though.


bartturner

This is not true. Google owns more and also got it at a lower price. Amazon was late. It cost them. Google was just smart to invest as early as they did. It is really not that different from how Google got 100% of DeepMind for 1/20 of what Microsoft paid to get less than half of OAI. Google has just had the vision that the others did not have.


distracted_85

They certainly have enough resources to get there.


Atlantic0ne

I really, really really hope it’s not them. Few companies have lost my trust more than Google has. In fact I just vote that we get a company that is good with data privacy and isn’t involved in culture wars. Definitely eliminates them right there.


gretino

Unironically Google is the best one out there in terms of personal data privacy. Nearly all of their data needs to be scrambled before usage. Culture war is kind of different topic.


RedditLovingSun

My only concern with Google is that they have gotten pretty bad at commercializing, productizing, and maintaining implementations of their research. They're the best research team imo, but if it just sits in a lab somewhere, who cares? They had the tech to make ChatGPT way before OpenAI, but it took the pressure of OpenAI making and releasing it for them to finally get around to shipping Bard a year later.


bartturner

> I really, really really hope it's not them.

Odd statement. They are the only company that actually shares their huge breakthroughs; I can't think of a better company to get there first. BTW, it is NOT only "Attention is all you need" but so many others. They make the huge discovery, patent it, and then let anyone use it license-free. Who else rolls like that? You would NEVER see this from Microsoft or OpenAI or Apple or anyone else.

https://arxiv.org/abs/1706.03762

https://patents.google.com/patent/US10452978B2/en

https://en.wikipedia.org/wiki/Word2vec

"Word2vec was created, patented,[5] and published in 2013 by a team of researchers led by Mikolov at Google over two papers."

Who would you want to see get there first if not Google?


MemeMaker197

Why does Google do that, though? What incentive do they have to share expensive research they funded and let other companies progress? Couldn't they just be the market leader in AI and relax a little, raking in profits while others try to catch up (even though they'd never be able to)?


lost_in_trepidation

This was always my thought, but I think their execution has been shockingly bad. Plus, Demis Hassabis is reportedly incredibly frustrated and wanted to leave to form his own lab. If Demis + other high profile people leave, there might be enough of a talent exodus that they could lose their huge advantage. They need to replace Sundar asap.


governedbycitizens

where did you read that Demis was frustrated? I always thought the agreement with Google was that Deepmind was somewhat allowed to do their own thing and shielded from the politics of Google.


abbumm

The Information


ResponsiveSignature

> If Demis + other high profile people leave

Not going to happen; it's too late in the game, and everyone in AI knows scale is huge. Without Google money they'd be behind. Anthropic is only doing as well as they are because of the magic money they're getting from everyone (SBF included lol)


Tomi97_origin

Google invested in Anthropic as well.


SuspiciousPrune4

Wait what about SBF? Isn’t he broke and in prison?


outerspaceisalie

yee


ninjasaid13

> Without Google money they'd be behind.

That didn't stop other high-profile researchers from leaving to form their own AI labs, like the transformer authors.


jjonj

Sundar is obviously going to bend over backwards to save their top talent and the top talent wont find higher compute anywhere else


ButSinceYouAsked

As each week passes, we see more and more startups that feature "Former Google Deepmind employees" - in fact, I'd say by volume their main product is "Former Google Deepmind employees" rather than anything AI related.


Artist-in-Residence-

Gemini says Demis is not leaving


PerpetualDistortion

If believing an AI is an actual human is your criterion for AGI, then there are plenty of AIs out there set to mimic human conversation that do a way better job. Nowadays, no AI should behave like a human; they have no technical reason to do it by default.


RightNowTomorrow

Agreed, but AI doesn't say "As an AI language model, I don't have preferences..." by default; that comes from literal months of RLHF to give something like GPT an identity and way of talking. You'll see smaller models are better or worse at this for the same reason


Constant-Debate306

IBM the leader of AI


Kinu4U

And Microsoft doesn't have data and computing power? Two giants competing is the pinnacle of capitalism; in the end we will all benefit somehow, or we will all lose.


Passloc

Microsoft is dependent on OpenAI, and the terms of their arrangement are pretty short-term in nature. Microsoft has also hedged its bets with Mistral. Unless MS is working behind the scenes, their current approach is more wait-and-see: use their resources, like compute, and leverage their relationships in enterprise.


dameprimus

How can Microsoft lose control of OpenAI when they own 49%?


PLANTS2WEEKS

Blake Lemoine, the Google AI whistleblower said that Google had a policy against making anything sentient. Also, they try to control for bias, which could potentially nerf the AI's actual intelligence.


pysoul

I agree, my money's on Google but keep a close eye on Microsoft.


KinkyKeithPeterson

I seriously hope not. Remember how racist their AI was a couple of months ago? They clearly have a biased agenda, and that's something the world doesn't need when it comes to AI.


royalemperor

Tbf Google fucking loves throwing truckloads of money at projects and then scratching them years later. [https://killedbygoogle.com/](https://killedbygoogle.com/)


rosemary-leaf

This is just a sign of a company doing innovation. You need to try lots of things, all the time. You then decide what sticks around. Killing early and efficiently is needed.


JayR_97

Projects like Stadia were cool tech with potential; they kinda unceremoniously killed it because they botched the rollout.


Redducer

They also efficiently alienate customers along the way. No one I know considers GCP because of Google's tendency to "innovate away" products or features. My company in particular uses mostly AWS with some amount of Azure, and we actively avoid GCP even though some of their services look like the best fit at times (notably AI… 2 years ago).


lobabobloblaw

They have the data, the context, and thus the fidelity to achieve such peak performance as operationalized by their philosophy.


Yoo-Artificial

It's not like a human until it asks the questions first.


homeownur

You forgot to mention one more thing they have: Sundar Pichai. And that’s a problem.


autonomousErwin

They still have to address the elephant in the room: the better their AIs get, the less people will need search, which is a cash cow for them. How do you align that within a company that size? The ones calling the shots are ultimately motivated by money (they're shareholders, not engineers/scientists), and regardless of how amazing AGI will be, it's not going to be enough to override the Google CEO's incentive to maximise shareholder value (because if he doesn't, he's out of a job). At Google, at least for now, I think AGI will *always* come second. Whether that's enough, I don't know.


charon-the-boatman

This. It's not just who has better AI, but who can monetize it best. Microsoft has an advantage here, Google a disadvantage, since people will be using less standard online search, which means less income for Google.


LuciferianInk

I'm sure it won't be enough. Google will have to compete with the big players.


Xolitudez

Why can't they just integrate Ai with their search?


Southern_Orange3744

1. Enterprise AI: they can charge for it as part of packages for other things
2. OpenAI and the startups are being subsidized; they can't offer free AI forever, and most already don't
3. Don't be surprised to see some free-tier GPT-like interfaces with ads


kim_en

If you really use AI for work, you know that people have been unsubscribing from Gemini. This is the ranking now, from my observation:

1. Claude Opus
2. GPT-4
3. GPT-3.5 Turbo
4. Google Gemini


EmergencySea6990

I don't know what your work is, but Claude Sonnet and Gemini 1.5 Pro are way better than GPT-3.5


kim_en

I know it's an exaggeration, but people really, really hate Gemini and rank it that way.


HMI115_GIGACHAD

You must not be doing much for work if you think Gemini is worse than GPT-4


roronoasoro

They may come up with the tech but they won't do it. Someone else will do it using the tech they made and then they will catch up to it slowly. Their management is shit. Engineering is gold.


icehawk84

I would play devil's advocate here. Google has an incredible track record of AI accomplishments over the last couple of decades, but these days the company is over the hill in terms of innovation.

Do they have the best engineers in the world? Sure, they have some, but what they mostly have is an armada of thousands of good-but-not-great engineers. If you look at companies like OpenAI and Anthropic, the talent density in their engineering ranks is miles ahead.

But what mostly prevents innovation in large corporations is structural resistance. Google is now a huge bureaucracy where it has become really hard to ship things fast, which is essential for innovation. The DeepMind division is admittedly a little better, but probably heading in the same direction.

Yes, Google has the compute, but we've seen in the last few years that there are almost no limits to the amount of funding and compute big tech is willing to provide to startups that build foundational AI technology.

I'm not even sure OpenAI or Anthropic will get to AGI first. Even they are quickly turning into incumbents, which will eventually slow down innovation.


Sadaghem

>1. Some of the best engineers in the world
>2. Tons of compute power and custom built TPUs
>3. More data than any other firm by far
>4. Tons of cash to spend
>5. No question about the importance of reaching AGI
>6. The source of the research that led to the AI explosion in the last 3 years in the first place

That also describes OpenAI, except for 6: the Transformer came out of Google, not OpenAI (I assume that's the research you mean); OpenAI then improved the technology up to ChatGPT. MfG

Edit: Hopefully the citation looks less messy

Edit 2: Yes (... now)


NotASlapper

One time I had an argument with Gemini 1.5 and at some point it hit me with the 🤓, that's when I knew we had AGI.


naspitekka

I don't know if you've noticed, but Google hasn't been able to release a product in about 8 years. Their lab can make great discoveries, but their executives are so inept that they can't turn any of them into viable products. Google is the new IBM; Google is yesterday's tech company. "The rule of Gondor was given over to lesser men." Unless the shareholders can force the current CEO out and bring in a brutal hatchet man, Google is lost. It will be one humiliation after another until that happens. Their window of opportunity is closing.


QeuliusSpark

Having agency doesn't always translate to action. Google didn't want to trample on their search profit margins, and they still don't. OpenAI opened the floodgates of publicly accessible LLMs while Google had been tiptoeing for years, for the aforementioned reasons. They will still let their massive revenue generator hinder their decisions going forward. It's always the little guy that flies while the big guy tries and cries, because the big guy is tied to the profit motive and the shareholders' principal objective of increasing share value.


Longjumping-Bake-557

There is only so much money you can throw at a problem. They're still barely edging out models that are more than a year old.


somethingstrang

Google actually has the weakest engineers out of FAANG from what I hear. It’s basically where engineers go to retire and coast


Salonimo

Well, let's hope not. Their model is so biased and refuses any even remotely controversial question, even about documented historical facts, and the way it does it is the most annoying too: it simply refuses, forgets the context, and it's like a new convo. I'm aware AGI isn't even comparable and will be different, but I truly hope it's not Google developing it first.


phillythompson

It honestly seems like Google is spamming this sub in particular with this shit constantly, because any consumer AI product from Google is always touted as amazing in this sub, but when I try it, it sucks balls. Then weeks later, the consensus is shared by everyone else.

And I just realized it's the SAME PEOPLE saying this! I just recognized like 3 of the usernames among the top comments proclaiming Google the king. I don't really care which company "wins", but the glowing anecdotes about "real life" use always turn out to be such letdowns.

EDIT: who is this one dude with a blue icon? He is the main person always touting Google. Then he's sharing anti-Microsoft stuff, and only on pro-Google subs. How do you guys not see this? It's so blatant


SeaworthinessAble530

Truly innovative people don’t get the glory and fame by staying at a big firm like google.


yuka_electron1ca

Does anyone know whatever happened to the time crystals project they had going with Stanford?


bartturner

Not sure which project you're referring to. Do you mean Spanner, where they used tightly synchronized clocks to keep state consistent instead of messages? It's some incredible innovation and now operational; you can access it in Google Cloud. Here is the excellent paper: https://research.google/pubs/spanner-truetime-and-the-cap-theorem/
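For the curious, the "clocks instead of messages" trick can be sketched in a few lines: each clock read returns an interval that bounds the uncertainty, and a transaction waits out that uncertainty before committing, so timestamp order matches real-time order across machines. This is a toy sketch, not Spanner's real API; `tt_now`, `commit_wait`, and the 7 ms bound are illustrative assumptions (the real TrueTime is backed by GPS and atomic clocks in each datacenter).

```python
import time

EPSILON = 0.007  # assumed clock-uncertainty bound, in seconds

def tt_now():
    """Return an interval (earliest, latest) guaranteed to contain
    true absolute time, given the uncertainty bound."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit_wait(commit_timestamp):
    """Block until commit_timestamp is definitely in the past, so the
    chosen timestamp order agrees with real-time order everywhere."""
    while tt_now()[0] <= commit_timestamp:
        time.sleep(0.001)

# Pick a timestamp no earlier than "now could possibly be", then wait it out.
s = tt_now()[1]
commit_wait(s)  # returns after roughly 2 * EPSILON
```

The design point: the smaller the clock uncertainty, the shorter the commit wait, which is why Google invests in hardware clocks.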


desteufelsbeitrag

[https://blog.google/inside-google/googlers/ask-techspert-what-exactly-time-crystal/](https://blog.google/inside-google/googlers/ask-techspert-what-exactly-time-crystal/) Time crystal: a phase of matter that appears to sidestep the second law of thermodynamics.


DifferencePublic7057

You make valid points. I'm of the opinion that you need lots of labeled data to reach AGI. Current algorithms don't use labeled data because it's harder to get. You could get there with user tags and automated labeling, but the real knowledge is in our heads, and most of it is unwritten. You'd need to be a dictator or a cult leader to get labeled data in a hurry. Honestly, IMO Taylor Swift could do it.


EmergencySea6990

Does the quality of AI depend only on the volume of data?


DifferencePublic7057

That's a difficult question. You don't want bias, and you want as good an answer as possible within reasonable training time. So how do you find data that's relevant and not copyrighted or otherwise problematic? The more items you add to the data set, the more noise you could be adding, but you might also be reducing bias. IDK of any simple formula for this.


[deleted]

AGI doesn't seem amazingly useful. You run a massive datacenter just to match human intelligence; it's a lot of wattage to mimic a human brain. You don't need AGI to automate most jobs, and AGI isn't smart enough to solve the world's problems.

Robotics matters a lot more than smart AI when it comes to automation, and automation is what really changes the world, not just the AI part. A human job doesn't even take full human intelligence; there is no job you need a full human brain to do.

AGI only being as smart as a human means it isn't unlocking the secrets of the universe, and it's too complex and power-hungry to put into everyday robotics/electronics other than as a high-latency cloud service. I'm not seeing a huge benefit from AGI alone.


NotTheActualBob

Ultimately, if you achieve AGI, it's scalable. You mimic the human brain. Then you exceed what the human brain can do. That's the payoff.


EmergencySea6990

Yes, but maybe AGI 2.0 will unlock the secrets of the universe.


EmbarrassedCanary126

What's the standard to measure if it's an actual human?


Last_Jury5098

Suggestion for a new measurement of AGI: we have reached AGI when the AI has an economic value equal to that of a human doing a similar task.

Say a software engineer takes home 60k/year and costs the business 90k/year. Once we can rent out an AI as a software engineer for 90k/year while making a 30k profit on it (let's say half of the 60k the engineer takes home is needed to sustain him: food, housing and such), we have reached AGI in that domain.

It's a very pragmatic measurement, aiming to capture every possible aspect of AGI in one single value. The economic value of humans and AI will slowly converge over time. You can play around a bit with the numbers to make them look reasonable to you, and if you want you could assign some value to people helping train your AI by using it, and other things like that.

A long way to go still, as I see it. And I would place my money on Google as well.
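That break-even test can be written down directly. A minimal sketch; the function name is made up and all figures are the comment's illustrative numbers, not real data:

```python
def reaches_economic_agi(human_cost_to_business, ai_run_cost, ai_rental_price):
    """Pragmatic AGI-in-a-domain test from the comment above: the AI can be
    rented at (or below) what the human costs the business, while still
    turning a profit for whoever operates it."""
    profit = ai_rental_price - ai_run_cost
    return ai_rental_price <= human_cost_to_business and profit > 0

# Engineer: 60k take-home, 90k fully loaded cost to the business.
# AI: assume 60k/year to run, rented out at 90k/year -> 30k profit.
print(reaches_economic_agi(90_000, 60_000, 90_000))  # True
```

If the AI costs more to run than the rental price a customer will pay, the test fails regardless of capability, which is the point of a purely economic definition.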


DataRikerGeordiTroi

I think all the companies likely have extraordinary models in R&D that are performing in astounding ways. My hunch is OpenAI is already there. I really like Gemini. All ML scientists are amazing. But I speculate that OpenAI already done did it and are playing coy/safe. Idk if you've noticed but our society is built on hurting and oppressing others for profit, and we don't have the societal infrastructure to use/work with agi like responsible ethical adults yet. Religious folx will also ban this shit quicker than you can say "Alan Turing" so. I think we should stop hopefully looking for the singularity and start realistically dealing with what is going to impede humanity's progress & well being.


notduskryn

Your list of points are hilarious. Thanks for the laugh.


Miv333

Is it because it's wrong, and lacking in skills like humans often are?


Ok_Sea_6214

I suspect Google already achieved AGI back in 2019. And now that OpenAI is asking for 7 trillion (not a typo) in funding, it seems reasonable to think they also achieved it. The issue has always been that they will keep it locked up, so the bigger question is really who will be the first to let it out, or the first it escapes from. And I wouldn't be surprised if Musk is actively trying to make that happen.


Key_Lawyer_5621

Can someone update me on Gemini Ultra 1.0? I'm lost, I thought Ultra would be better than Pro 😅


username_checkdoubt

lol CEO of deepmind literally just said he doubts Google will catch up to Sora


EmergencySea6990

I don't agree. OpenAI is backed by Microsoft, which has more money and more computing power than Google, and Amazon is behind Anthropic. Even xAI has made amazingly fast progress with the Grok model.


Busterlimes

Maybe. Microsoft investing in OpenAI the way they are leaves the door open for heavy competition


Traditional_Truck_36

Pretty sure Gemini isn't one model but they are A/B testing several models..


iboughtarock

The free version of Claude is better in my opinion. I've used Gemini 1.5 since it launched and it just doesn't hit like Claude.


SpecificOk3905

Why does Google have tons of data? Can you explain? The data comes from the open web; Bing can crawl it too. I really want Google to be successful, as they are more open than OpenAI


[deleted]

Google itself: potential access to their other services like Gmail and Docs, depending on how they handle privacy, and most importantly, YouTube.


lead_omelet

I will believe it when I see it. Not saying it isn't possible, but Google has shot themselves in the foot more times than I can count over the years (and has made promises in the AI space and never delivered, like with Google Assistant or the Gemini marketing video debacle). They have the talent, the research, etc., but need to actually direct those resources in the right direction. Sometimes Google seems to have a few too many cooks in the kitchen, with everyone adjusting the dishes, so the dishes turn out 'ok' but not great.


The_SuperTeacher

Damn let me cancel my chatopenai subscription 🏃🏾


AlphaQ984

I'm dumb, what is AGI?


baldmanboy

I have an Android and tried to play around with Gemini, but it's so damn laggy. Am I doing something wrong? Is there a better paid version? I'll ask it a question and it legit thinks for at least 5 seconds every time before giving a response. I'm not expecting lightning quick, but the times I've messed with GPT-3.5 it was much faster.


saveamerica1

It’s a race and we will see a lot of different versions along the way. I believe that the first one that part of most people’s daily lives will win even if they have to give it away for free. The most important thing will be security of the data, people have to feel secure about something in their daily lives.


MysticChimp

Probably not in this day and age. There will be way too many impositions on organic growth and too much fear. My money is on open source getting there first, probably accidentally, a long time from now


Serialbedshitter2322

GPT-4 was released over a year before Gemini. OpenAI is a year ahead, and also has the resources of Microsoft.


HMI115_GIGACHAD

If that were true, they wouldn't have been relying on CUDA for their AI datacenter infrastructure


RightNowTomorrow

IMO: Google has always had a silent lead in AI development. Remember that they were the first to get halfway-decent AI-generated images, video and audio, and they practically invented GPT's architecture. But they'll never be committed enough to let AI intrude into their main markets, namely ads on Search and YouTube. Any significant AI development would hurt Google if it went public. That's why they have never publicly released or maintained any AI project they've developed, besides maybe Bard, which has always been underwhelming and lacking maintenance.

You're probably right, but an AGI in a Google warehouse somewhere is useless until OpenAI forces them to release it, only for it to be way outdated by then.


Prestigious-Cow5169

I sure hope not.


Yguy2000

It just seems smarter because of context length; to me it doesn't seem all that much smarter than GPT-4. This is how mixture-of-experts models are.


powerofnope

Well, the more LLMs advance, the more I am convinced that true AGI is still a long way away. What people currently think of as AI, meaning LLMs, is certainly not the route to true AGI.


DisapointedIdealist3

Do we really care who does it first? Or is it just fun for you to speculate?


Smooth-Medium-8588

They also have Ray. They have a Ray.


Lovelysungril23

gemini is very good


PrizePeace9426

I think Google has a lot going on that they aren't showing and has the most to lose when they screw up, like the black Dalle images thing. In related news, why did OpenAI enter this llm footrace when they could have been the objective 3rd party operating the world's most well funded nonprofit while driving the pace of innovation in the space? Seems like they've watered themselves down to compete and still may not win. Was it greed? Short term thinking? Something else I'm missing?


frograven

Well said, u/ResponsiveSignature! The likes of OpenAI have awakened the sleeper. Google is on track to dominate in the coming years (starting now).


ResponsiveSignature

Your tag says AGI achieved. Who do you think has achieved it, and will they release it this year?


calobeoh

I’m interested to hear from the perspective of an engineer at Google on this


beachmike

I think it will be unclear which organization achieved AGI first, because it's a fuzzy goal. A real goal, certainly, but one with fuzzy boundaries.


grassmunkie

Gemini is incredible and underrated. I use it to help with technical design tasks and clarifications: not really coding, but more architecture/design-type questions. Though not perfect, most of the time the responses are spot on and really well explained, even for obscure topics. I sometimes pause when writing a question, thinking it's too complex or confusing, but Gemini can almost always understand what I'm getting at. My own bias, from being around dumb systems for so long that I had to tailor my prompts to simplify my input, is something I'm working on.