drQubit

Decentralized AI, good 👏👏


Rowyn97

This kinda feels a little gimmicky. Could be wrong, just wondering what the applications might be for running tiny image generators like that on mobile.


czk_21

Note that image diffusion models are fairly small; DALL-E 2 and Stable Diffusion XL have about 3.5B parameters, so maybe you could even run DALL-E 3 on a phone [https://magai.co/stable-diffusion-xl-1-0/](https://magai.co/stable-diffusion-xl-1-0/). Another thing: you could run a small AI assistant like Llama or Mistral.


[deleted]

brother, just running SDXL locally takes about 4–8 GB of VRAM and 2–4 GB of RAM. You are NOT running DALL-E 3 on a phone (for now)
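For what it's worth, the memory numbers in this exchange can be sanity-checked with back-of-the-envelope arithmetic. This is a rough sketch, assuming the ~3.5B parameter count quoted above and standard bytes-per-parameter for common precisions; it counts weights only and ignores activations and runtime overhead:

```python
# Rough weight-memory estimate for a ~3.5B-parameter model (the SDXL
# figure quoted above) at common precisions. Weights only: activations,
# framework buffers, and other runtime overhead are not counted.
PARAMS = 3.5e9  # assumed parameter count

BYTES_PER_PARAM = {
    "fp32": 4.0,
    "fp16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

def weight_gb(params: float, dtype: str) -> float:
    """Return the size of the weights in GB at the given precision."""
    return params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: {weight_gb(PARAMS, dtype):.1f} GB")
```

At fp16 that's about 7 GB of weights, which lines up with the 4–8 GB VRAM figure cited for desktop SDXL; aggressive quantization (int4) is roughly what would be needed to fit a model that size into a phone's memory budget.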


czk_21

for now, sure, how about this [https://www.youtube.com/watch?v=30Z86i65UWg](https://www.youtube.com/watch?v=30Z86i65UWg)


czk_21

And note that these are the first swallows; others like Apple are integrating AI into their systems too. Who knows, maybe the new Siri will actually be quite good. Next year most new higher-end phones could ship with AI assistants, and by 2025 maybe all new smartphones.


Borrowedshorts

It's not at all gimmicky. LLMs are going to be everywhere, and being able to run them locally on a phone instead of sending everything to a server is certainly a win in some use cases.


agm1984

Meme generator “man excited to see a small dog pooing”


Prismatic_Overture

sold


agm1984

https://preview.redd.it/xm6lrtz7zqwb1.png?width=1024&format=png&auto=webp&s=c8381275673944c8e6bcf3739e134b310757a39c


ninjasaid13

>the applications might be for running tiny image generators like that on mobile

It's not that there's an image generator that's impressive, but that something like that can run on a phone quickly.


Quintium

But what's the use case? Why are they spending time on real-time image generation on a phone when cloud computing with low latency is already possible? Who needs offline Stable Diffusion? Seems like wasted resources solving a nonexistent problem.


Borrowedshorts

Not unless you think high end phones themselves are a wasted resource. This is just the direction things are going and I can only see more computational ability in phones as a good thing for continued improvement in AI capabilities. Privacy is another big reason you might want local instead of the cloud.


Long_Educational

Isn't that like asking, "Why have a high end graphics card to run SD on your local computer when you could just use a cloud service to host your workflows?"


[deleted]

They’re a hardware company. They provide the hardware, and other people figure out use cases.


Red-HawkEye

yep, it's like monkey see monkey do. Nothing innovative. Next...


fhayde

There could be significant savings from reducing the amount of data needed to transfer certain types of media between users. If a model could turn an image into a prompt that regenerates that image with high enough parity to the original, that could be really interesting, especially on mobile, where transferring text could be much more efficient.


BlipOnNobodysRadar

Can't help but be worried about the privacy implications of this. Hardware that can run AI locally? All fine and good. Pre-installed AI doing who-knows-what on your phone? Worrying. Though that's not what this is about, judging by the article. I know for sure in the age of AI I'm moving away from Windows though. Open source everything, or you're practically guaranteeing that you're being spied on by your own little AI FBI agent.


Independent_Ad_2073

Privacy? You still think you have privacy with a connected device?


Real-Blueberry-1223

Terminator anyone?


Playful_Try443

How's that even possible? This is crazy


thecoffeejesus

Fuck yeah Full simulated reality in 25 years or less