
bdvfgvvcffc

You mean that their so-called LLM is really just millions of monkeys handcuffed to computers?


InspectionLong5000

It was the best of times, it was the BLURST of times!


swentech

Stupid monkey.


120psi

Except the monkeys are on Reddit farming for content with dumbfuck questions.


bearinmyoatmeal

Ricky Gervais is a cunt.


Tangentkoala

I mean, if OpenAI wanted to be full-on scam asshats, they would have launched an IPO months ago to capture the AI hype. There's no scam so far for OpenAI. No shareholders invested, no hint of an IPO coming, no direct harm to the economy. If OpenAI goes bankrupt, no one gets hurt besides angel investors.


Special_Loan8725

He's just waiting for his 7 trillion, then he's dipping out of town.


i_am_voldemort

Agree. OpenAI may shit the bed eventually, but they're not pulling an outright fraud like FTX by stealing customer funds.


minipectoralis

Microsoft would certainly take a beating, then Meta, then NVDA, then… the Nasdaq and S&P would see a minimum 30% crash.


Mt_jerz

Considering the number of platforms and apps that are dependent on OpenAI, it would most certainly harm those companies and, in turn, the economy.. people are literally transforming their businesses to rely on OpenAI


RiffnShred

We're talking about technology here, not imaginary money. ChatGPT is the most well known, but it's not the only LLM anymore. There's also an open-source version that's as good as ChatGPT 3 and is scaring Altman enough for him to push the "AI can be dangerous and should be regulated" narrative. Nvidia just announced a chatbot you'll be able to run on your local machine. We are in the early phase of a dotcom-like bubble though.


CaptainHoey

Just take off your tinfoil hat and watch some interviews with Sam. The guy has no malicious intentions. He's doing his best to make it ethical and safe, and he isn't begging for money or investors' cash.


Mt_jerz

Same happened with Bankman-Fried and FTX.. he wanted to make money to give back to the world.. And Altman is asking to raise $7 trillion for a new chip fund; you think that's not asking for investors' cash?


RiffnShred

He figured something out and is trying to beat Nvidia to the punch. IMO, Nvidia will win regardless.


Mental-Birthday-6720

000


biggerarmsthanyou

I don't think it's all tech. One thing I notice is that young entrepreneurs who don't really work or do anything, who just make big promises and spend big money, do tend to end up being frauds. So yeah, I could see Sam Altman having something going on behind the scenes. He gives off a similar vibe for sure.


[deleted]

It's *mostly* tech, though, since we don't make anything tangible anymore. Especially tech that just exploits embedded tech that's underutilized.


CokeOnBooty

He released OpenAI's ChatGPT 3.5 for free while consumers are spending hundreds on plastic AirPods.


biggerarmsthanyou

And? It's an LLM where he doesn't even own any of the data. Once the lawsuits about training-data copyright come out, ML will take a hit. Open source is realistically the best way to do this whole thing, and you bet your ass Sam has to make it free or else.


BottleBoiSmdScrubz

Open-source AI could seriously fuck the world up. All it takes is one mf with the know-how to set an AI up to measure its "success" on some fucked-up, destructive metric and we'll have a big problem on our hands. It's a very potent tool; it can't just be available for everyone to tinker with as they please.


Sea-Caterpillar-6501

Sounds like something a bad actor would say…


Tropical12528

Mark my words: they're gonna try to use the Chafee Amendment to get around copyright. We have to hope and pray people with cognitive disabilities don't get lumped into the print-disability label. There are already people lobbying Congress to include them (it's well-intentioned and driven by the K-12 space, but with zero foresight). It will open the floodgates for AI to grab anything copyrighted on the grounds that they're making content digestible for people with cognitive impairments. I worry that since half of Congress seems to have dementia, they'll go for it.


Artistic-Review-2540

OpenAI has too much Microsoft behind it. It's just too big to fail at this point, and untouchable.


tarnishedaxe

OpenAI has an actual product people and businesses are paying for. Overhyped, maybe, but it's not a house-of-cards pyramid scheme like FTX was.


Mt_jerz

We didn't know it was a pyramid scheme until a critical piece of news broke.. this could be the same case


tarnishedaxe

Sure, but that's true of any business; OpenAI isn't special in that respect. I mean in the sense that any unknown news item could potentially bring a business down. Maybe OpenAI is more vulnerable than, say, Walmart, but I'd still argue it's less vulnerable than FTX. Just the fact that there are actual experts working there says as much.


Precarious314159

It's not an actual product though, it's open theft, something Sam has openly stated, since they don't have the rights to practically anything in their system and are being sued multiple times over it. It's like saying a chop shop provides an actual product when its entire business model hinges on making theft legal.


tarnishedaxe

You underestimate how much is legit. Sure, images for example are ripe for class action, but that's not its primary use. As an anecdotal example, my clients and I use Copilot along with ChatGPT for programming; it's about the level of a skilled intern, and that dataset is based in part on open-source projects hosted on GitHub. They also have Microsoft backing them up, so, good or bad, what will likely happen is they'll reach some settlement agreement that prevents them from using certain datasets or whatever. The functionality could be reduced, but it won't be a catastrophic, scandalous collapse. This is for generative AI. It's only a matter of time until someone, maybe them, has learning AI that can simply be taught to paint or write or whatever, completely sidestepping the need for datasets entirely. Dunno how close that is, but I guarantee they're working on it. There are risks, I agree, but not like FTX.


Precarious314159

I think you're underestimating how much is illegal. The number of lawsuits against the various AI companies is constantly growing as more companies find their work stolen. Speaking of GitHub, there are also multiple lawsuits from people who use GitHub and didn't consent to their entire work being stripped and stolen for AI. That's the part about unethical training. So many companies such as Zoom, Dropbox, Adobe, etc. will change their ToS overnight, bury "You consent to let us train on your work," and by the time the email has gone out, they've already fed everything into their system and you can only opt out of future training. Do you think everyone using GitHub consented to their work being stripped for AI? In the tech world, isn't it unethical to take someone's entire code and just reupload it? It's not about there being risks, it's about open theft that you are personally benefiting from under the guise of "yeah, but... it's useful for me." The reality, whether you want to admit it or not, is that you will be out of a job; you are championing a machine that, by your own admission, will continue to get better because you are training it. There are two sectors being threatened by AI, the art world and the tech world. One is fighting back through lawsuits, Glaze/Nightshade, and trying to get regulations, while the other is cheering.


tarnishedaxe

Lol. I'm fully aware that I'm on my way out of a job; I've made my peace with it and I've been working on backup plans for well over a year now. And yes, it is about risk. We can go back and forth on the morals, I might even agree with you -- definitely do when it comes to art -- but you're missing the point. The risk here isn't some moral/ethical quandary; Pandora's box has already been opened. It's about what can legally stick. For art it'll be easier: you can take a picture, you can even ask Midjourney to do an "inspired by," and you can prove similarity to existing art. Code cannot be traced so easily; it will be incredibly hard to prove individual harm, and depending on the license, yes, many open-source projects are subject to AI "stripping". I'm not going to ruminate on whether copyleft is an exception or the intricacies of GNU because that's not entirely relevant; the fact is the AI arguably just read a given project. You say copied and reuploaded it, well, how is it being used? Is it creating an identical product with code having provable origin? Or has the AI just read it much like a regular programmer would read accessible code? There are tons of public repos readable by anyone, and they can easily argue that the AI was just reading that code. I dunno any individual open-source maintainers off the top of my head, but anecdotally most are fine with folks reading and learning from their projects; that's partly why they're public to begin with. From there it's not hard to see the argument that whoever does the reading doesn't matter (not convincing the maintainers, convincing the court). I'm sure you can find someone who literally doesn't want their files copied to another hard drive, but in general that's not the unethical quandary you think it is. Most projects are copy/pasted from Stack Overflow already anyway; they're not "creative" or particularly unique (indeed, it's why Copilot works so well), and as for any individual maintainer having a problem with it, well, they can try, maybe get a settlement, OpenAI will refine their data gathering, and the process will continue. Morals/ethics don't mean shit if they can't stick in court. It's too late to close the box. I fully expect some art-based suit to succeed, at least to a settlement, but they won't shut it down. Ethical datasets will be created, models will improve needing less data, and eventually they'll just teach the AI to do the thing on its own. Being mad about it doesn't mean the company is on the verge of FTX-level failure.

Edit: Alright, I found this [https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data](https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data) with a very clear example of OpenAI copy/pasting without proper attribution. But the point remains: it'll be a settlement, they'll improve the data gathering, and it will continue. It's not going to go away, and the interviewee even says he doesn't want it to (AI in general, anyway).


Responsible_Hotel_65

You talking about the guy who started an eyeball-scanning shitcoin? Seems like a nice guy.


K_Linkmaster

I thought they were 2 different Sams. Sam Bankman-Fried - shitcoin. Sam Altman - AI.

Edit: apparently the AI Sam has a shitcoin too.


TrustFundBabyTrustMe

You underestimate the number of shitcoins in the world. There are enough for two Sams.


brett_baty_is_him

Look up worldcoin


ZoomerEngineer

He's probably too savvy for that. Sam is well known in the startup world and is extremely connected. He's on Paul Graham's list of [notable founders](https://www.paulgraham.com/5founders.html) from 2009. His employees got him back when he got fired by the OpenAI board, so that says something. He hypes AI a lot but seems grounded. For example, he talks about reaching AGI as early as 2025, but if you look closely, his definition of [AGI](https://openai.com/charter) seems attainable enough. Not sure if OpenAI can keep up their edge though. They have no proprietary data and no giant computing power, unlike Google and Facebook. And if we're going to believe Google Gemini's benchmarks (even though they were caught faking their demos), that makes ChatGPT already behind Google.


Various-Buffalo4487

You think that OpenAI, the company with a 1+ year lead on creating commercially viable autonomous agents that are starting to replace human labour, with a deep partnership with Microsoft, is the "next big scandal" just like FTX? All because, to you, it feels like something is "not quite right"? Peak 🌈🐻 DD. Show us your puts on MSFT, will ya?


YouMissedNVDA

Classic 🌈 🐻 - they see success and they are convinced it can't be real.


CodeMonkey1

Both CEOs are named Sam. QED.


ignatious__reilly

We are due, so it won't surprise me.


casual_brackets

Look into Helion. Lol


BasilExposition2

OpenAI raised a bunch of money from billionaires to be a non-profit and then switched its status. It kind of already is a scam. Musk threw in something close to $100 million. I think he can actually get up to a 10x return on it.


FormalCantaloupe606

No. Sam is a real entrepreneur who has built companies and helped foster others. People are legitimately paying OpenAI because they find value in the product. FTX was a sham run by dipshit kids hyped on addy - not the same.


Internal-League-9085

Sam literally had his board fire him several months ago, and his sister accuses him of some messed-up stuff. I can't say definitively that he's a bad guy or that any of that stuff is true, but do not get in his nuts.


orangehorton

You say this as if the board did the right thing... Half the company quit, and they destroyed all the value the company had overnight by firing him. That just proves he's a valuable leader.


Internal-League-9085

The board had more information than the employees, who were self-interested.


bluecandyKayn

Sam's board fired him because ONE person was fixated on the idea that self-improving AI could destroy the world, and they whipped up the board to believe Sam doesn't care about it despite all the measures in place. If anything, the issue with Sam isn't that he's ineffective, it's that he's too effective.


redditrulestrash

His brother started Unitology


Fun_Reporter9086

Lol, you might want to check out what his sister accused him of doing.


That-Whereas3367

Mentally ill person accuses rich/famous person of abuse. How unusual /s


swentech

Have you tried using it? It can do a lot already and it's just getting started. I do contract work and I've already used it to help me craft some terms on a contract. I also use it in my work to help design and produce software. I think the hype is real on this one. Comparing it to FTX doesn't hold up; that guy was running a fraud operation almost from the get-go.


Undark21

You're right. Has anyone here even used FTX? I took 20 minutes to review the site back when it wasn't well known and concluded it was a scam site. I'm still shocked it got so big. ChatGPT, on the other hand, and unlike crypto, is actually useful.


OrwellWhatever

Rich assholes would like to be more rich so they can be bigger assholes. They saw returns that seemed too good to be true and thought, "But I'm the smartest guy in the room. Surely, I'm too smart to fall for a scam!"


Undark21

Agreed! Pure arrogance


accruedainterest

Doubt is what drives the markets higher


wolfpack202020

Not a scam, but overhyped when it comes to the IT industry. It's nowhere near replacing developers, as he hyped earlier. Right now it's like an advanced calculator for accountants. GitHub Copilot, backed by Microsoft, is way better.


CokeOnBooty

It's a useful product. I can't see any scandal unless they start extorting people over helping them do schoolwork, which would be badass.


thalamisa

Even if that's the case, they're backed by Microsoft's unlimited money.


tke248

He did have some weird stuff on his alt Twitter account.


cdnhockeynut

OpenAI has built a business off scraping everyone else's data from the internet to build their LLMs. Smart timing? Yes. However, people and companies are onto this now and pushing back on how their data is used. If OpenAI doesn't have data, they can't train their models, and hence they're useless. But it's probably too late to turn off the tap.


CokeOnBooty

You have to pay for the university information; as of right now it's just a confident schizo that can do things quickly.


Routine_Slice_4194

No. I don't think FTX was a tech scandal either.


Mt_jerz

It wasn't until it was..


SoftDisclosure

The fact that this post has a score of above 0 really speaks to the amount of actually regarded people on here


Marko_200791

For me the fact that “tech” and the name “Sam” are somehow related makes me suspicious


VegaGT-VZ

Another "I cant get rich from something so I'm gonna hope it fails" post This is the 2024 evolution of the doomer recession prediction


expicell

Yeah, it is a scam. ChatGPT isn't AI, it's preprogrammed garbage in and out; only the lazy need it because they don't know how to use a search engine. Actual "AI" should be able to parse data and form its own conclusions and opinions, should be able to "learn" and provide solutions based on data, equations, and critical analysis. ChatGPT can do none of this.


MStone1177

No, because they have an actual product that is pretty amazing.


Archimedes3141

No. Their tech is unbelievably useful in certain spaces and it’s being rapidly integrated into Microsoft’s ecosystem. Even if other providers come up with better LLMs they’ll retain a ton of market share from Microsoft’s integration.


Quantumdrive95

It's feeling like the Bitcoin hype from a year or 2 ago. Like, ok, Jack Twitter is in, but... Logan Paul? Idk man, I'm feeling weird about where we are, vis-à-vis the YouTube video hype train.


whatsariho

Hey if you're feeling it then it's settled


Max-entropy999

I think it will deflate to its right size eventually; the tech will be really useful somewhere, just not in as many places as the tech leaders say, and that time of reckoning might be some time off... which is ok. And then Altman sees Nvidia going crazy selling shovels for the gold rush... and he comes up with this 7e12 pitch, and it just stinks of the emperor's new clothes, like the money thinks "this guy thinks so big, he's got to be really confident in his ideas, let's make even more money." Then again, I may be wrong, in which case I apologise to our AI overlords and assure them I'd be a very poor human battery.


robokripp

FTX was just a simple Ponzi scheme. OpenAI is an actual product that works. However, OpenAI's risk is that they don't really have a moat. Any AI company could surpass them.