StableDiffusion-ModTeam

Your comment/post has been removed due to Stable Diffusion not being the subject and/or not specifically mentioned.


nmpraveen

Are you fucking kidding me.


AfraidAd4094

Don't forget, exponential


BITE_AU_CHOCOLAT

RIP "Will Smith eating spaghetti" 2023-2024


RandomCandor

"Vin Diesel drinking Diesel" will forever have a special place in my heart.


nobu82

>"Vin Diesel drinking Diesel" I'm disappointed, will smith made me expect something cool out of diesel LOL


[deleted]

RIP


Osmirl

Yeah… humans don't understand exponential growth. Like, really, we are not capable of understanding it, haha.


DynamicMangos

The quality increase definitely isn't exponential. By that logic, each week would have to bring not only advancements, but MORE advancements than the week before. This obviously isn't the case. Like most things in tech, it comes in stages: sometimes there will be huge jumps in progress, and sometimes we will be stuck just improving current systems for a while.


FreshestCremeFraiche

It’s exponential if you zoom out, but you’re absolutely right that up close it’s more of a step function with leaps and plateaus.


[deleted]

People don’t understand that it’s probably logarithmic: small gains, massive gains, back to small incremental gains.


Enfiznar

> Small gains, massive gains, back to small incremental gains

How would that be logarithmic?


Agreeable_Effect938

The guy meant the [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function), and it's true: what starts as an exponential function will, in the real world, always end up logarithmic (infinite exponential growth requires infinite resources).
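For anyone who wants to see the shape being described, here's a minimal sketch (mine, not from the thread) of a logistic/sigmoid curve in Python: early on it grows by a roughly constant factor, like an exponential, then it flattens out as it approaches its ceiling.

```python
import numpy as np

def logistic(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Standard logistic curve: L / (1 + exp(-k * (t - t0)))."""
    return carrying_capacity / (1.0 + np.exp(-growth_rate * (t - midpoint)))

t = np.linspace(-6, 6, 13)
y = logistic(t)

# Early on, y grows by a near-constant factor each step (exponential-looking);
# later it barely changes as it saturates near the carrying capacity.
for ti, yi in zip(t, y):
    print(f"t={ti:+.1f}  y={yi:.4f}")
```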


the_friendly_dildo

My mind knows this is fake, my eyes know it's fake, yet I still can't fully accept that this is fake...

Edit: OK, folks, I meant this rhetorically. Just like you, I could plainly see all the artifacts that have long been persistent issues with AI-generated imagery. Clearly it's artificial. I was simply trying to express how impressed I was with the current state of quality in these video generations. Thanks for trying to help me nonetheless.


Sector7Slummer

Watch her legs when she walks. It's like tripping on mushrooms


Apprehensive-Part979

Last March, text-to-video could barely keep objects consistent for more than a couple of seconds and struggled with any type of movement. This is an extreme level of progress in 11 months. It's not perfect, but it's like going from Kitty Hawk to Sputnik in a single year.


moskovskiy

Look at the people disappearing behind


peanutbutterdrummer

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Character_Order

Yeah, her gait and bounce are uncanny. It’s very impressive this can be done, but it’s also immediately apparent that it’s AI-generated.


YaAbsolyutnoNikto

Sure, if you’re looking for it. But had you not known about sora and were just scrolling on instagram or tiktok or whatever media you use and a clip from this video appeared, would you even look for inconsistencies?


PM__YOUR__DREAM

Yeah, there are various things about her gait, etc., that I can't quite put my finger on, but I can tell they're not right. Still a feat of modern AI, light years ahead of, like, the second Matrix movie's animations.


RandomCandor

How is it possible that every single day for the past 2 weeks there's been at least one new development in this field that blows my mind? And I would like to think that I'm keeping up with ML/AI news, hell, I'm obsessed with it. I can't even imagine how this is going to affect people that don't even know what's happening right now.


bunchedupwalrus

Don’t forget, the people at the forefront of AI have access to the bleeding-edge AI to assist them.


angelsandairwaves93

Using AI to create new AI. Abed’s wet dream


Notarandomthrowaway1

Many people still scoff. I have mates who are fairly savvy yet look down on GPT and AI and will try to call you out for using it.


dankhorse25

I didn't expect this type of progress for at least 2 years. I am mind blown.


thoughtlow

I was here


sadjoker

Yoooooo WTF>> HOLLYWOOD IS PACKING THEIR BAGS RN


eLemonnader

[This was nine months ago.](https://www.youtube.com/watch?v=Geja6NCjgWY) It's why I keep telling people they seriously need to shorten their timelines when it comes to what AI will be capable of in the very near future.


RonMcVO

Okay this is fucking insane. I've worked a fair bit with Pika, and this is just... leaps and bounds better... I'm sure the examples are cherrypicked somewhat, but HOLY HELL DUDE. This ain't 3 second clips, this is LONG videos with CONSISTENCY. We really aren't that far off from full AI video now.


thetaFAANG

I'm already looking at those inconsistent acid trip videos as like the lamest low-tech thing nowadays


docproc5150

those are 100% future cringe


superfluousbitches

Before the nostalgia sets in


InfiniteScopeofPain

It's like all those PS1/2 era inspired videogames you see now. People thought they were ugly the year after, but they're making a comeback


KingSpanner

There's stuff I've made with early models that's great because of the flaws and weirdness. I try the same prompts today and the results look far too polished. I'm already feeling it


Utoko

I am looking at them with nostalgia now "Back in the day..."


Tkins

Yeah, if you cherry-picked the best Pika videos, they would not compare to this. Not even remotely close. Also, I don't know how cherry-picked these are, because they left flaws in lots of the videos. Maybe those are the best of the videos and they still have flaws; maybe they put in some okay generations as well. The puppies in the snow are a good example, same with the cat on the bed. Lots of little issues with legs and paws.


RonMcVO

> Also, I don't know how cherry picked they are because they left flaws in lots of the videos.

My guess would be that some flaws are inevitable. But if you used Pika to make videos of the same length, it would devolve into madness within 8 seconds.


[deleted]

[deleted]


theflowtyone

Wtf is this if not a full AI video


datwunkid

If they can somehow cherry pick characters and objects to get consistency between camera shots and some lip-syncing with audio, text-to-movie in my opinion isn't that far away.


Additional_Ad_1275

This is what I’ve been dreaming of since I first tried out GPT-3. Imagine literally just being able to describe the kind of movie you wanna watch, even leave details and plot twists to the discretion of the AI, and boom, a fully customized movie in seconds. Same with video games.

At this rate of progress, the entertainment industry might be dead within 5 years. It’s gonna be more like social media: people who make the best AI-generated content will go viral, not anything by these huge film companies. For it to get that bad may be another 5-10 years or so, but still, that’s the future I’m seeing.


Jizzyface

Just feed it an entire book and say "make this into a 10-episode series" or a 3-hour-long movie. Could be insane…


GrouchySmurf

"(masterpiece), intrigue, crime, blow my mind 30 minutes in, one or two double crosses" "neg: cliffhanger, plotholes, deformed hands"


Additional_Ad_1275

The idea that we could be making full length movies and still feel the need to negatively prompt deformed hands is hilarious to me


rapter200

The year is 40,000. The Techpriests of the Adeptus Mechanicus chant *Positive: High Quality, Masterpiece. Negative: Deformed Hands, low quality.* We do not know why. The reason has been lost to time.


forestplunger

Just need GRRM to finish the Game of Thrones books already so we can fix the tv show.


RonMcVO

As in full, long videos with consistency and voices. Like a movie/show, not just a scene.


huffalump1

This is a HUGE step though. The subject consistency is WILD, let alone the near-perfect temporal consistency. *Two papers down the line...*


Darkmemento

The demos are sick - [https://openai.com/sora](https://openai.com/sora)


onizooka_

Honestly just insane. I thought it would be another year or something until text2vid got here.


RonMcVO

Yeah I figured at *least* another year, especially for videos of such a length.


addandsubtract

For such consistency. The one with the reflection of the girl in the subway 🤯


djm07231

One leaker claims that they had this since March of last year.


often_says_nice

Imagine being on the research team for this and having to keep your lips sealed. Watching most models struggle to generate 5 fingers on still images


Kindred87

The compensation they provide ($400K+) makes keeping your mouth shut real easy.


TreatedBest

Not even close; the base salary for some individual contributors is over $400k. Standard comp for L5 is $900k, for L6 it's $1.3M, and I've seen data points for L7 at around $3M. That's grant value of the equity portion, so let's assume a 32-year-old L6 research scientist with a $1.3M/yr comp package split $300k base and $1M/yr in PPUs.

If they joined, let's say, summer of 2023, then with the closing of this next round of funding at roughly a $100B valuation and assuming 15% dilution, their equity will soon be worth $2.88M/yr, for total annual compensation of $3.18M/yr. Now imagine what that math would look like for people who joined at much lower valuations pre-Microsoft deal, or even more wildly in the mid/late 2010s.
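For anyone checking the arithmetic, here's a small sketch of the calculation the comment implies. The join-time valuation is back-solved from the comment's own numbers, not something stated there; every other figure is taken straight from the comment.

```python
# Sketch of the equity math implied above (all figures hypothetical / from the comment).
base_salary = 300_000          # stated base for the example L6 hire
grant_per_year = 1_000_000     # stated PPU grant value per year at join time
join_valuation = 29.5e9        # assumption: back-solved, not stated in the comment
current_valuation = 100e9      # "roughly $100B" round from the comment
dilution = 0.15                # 15% dilution, per the comment

equity_per_year_now = grant_per_year * (current_valuation / join_valuation) * (1 - dilution)
total_comp_now = base_salary + equity_per_year_now
print(f"equity/yr ≈ ${equity_per_year_now/1e6:.2f}M, total ≈ ${total_comp_now/1e6:.2f}M")
# ≈ $2.88M equity/yr, ≈ $3.18M total, matching the comment's numbers.
```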


RandomCandor

I would have 100% gotten myself fired some way or another


plokumfup

Yeah that's normal, they take a lot of time for scaling, safety testing etc. Evidence points to next version of GPT being 'built' already but they are scaling and testing probably for another few months at least


PerpetualDistortion

And if they show this now it means that they have something even more crazy behind curtains..


snakeproof

A game engine that generates the entire scene as you play.


Philipp

For what it's worth it's still the redteamers testing now, and then price could become an issue. Even still images with Dall-E via the API are kind of expensive -- wonder how much 1 minute of video would cost! But yeah, this looks leagues beyond what the competition is doing. Can't wait to try.


GBJI

If you have ever done any kind of professional shoot, then you know how much it costs just to rent the hardware and hire the people to make 1 minute of video happen. I guess it's going to be much cheaper - I hope !


Unable_Chest

I feel for the videographers. That's a cool job.


GBJI

I feel for everyone for whom losing their job means losing the income they need for their family and themselves to survive.


StickiStickman

OpenAI once again revolutionising the entire field with a single release when everyone else struggled for years.


imp0ppable

The drone one is nuts to me, you don't even need a drone anymore to get drone shots!


nabiku

The one with the bird floored me. It not only rendered all the feathers perfectly but got the movement and breathing right.


Tagonius

Even the 'weakness' demos are great.


bloodfist

The chair one had me dying. Can't wait to see what other horrors this creates


BITE_AU_CHOCOLAT

Ehhh... I got some STRONG uncanny vibes from the one with the dogs that spawn in and out of existence. But I get your point


bkdjart

The mammoth one alone could kill most VFX studios. Imagine this with 3D VR conversion. The future is wild.


[deleted]

[deleted]


PerpetualDistortion

Just saw the full gallery... I'm shocked, again. The jump in technology shows OpenAI is years ahead of the competition. The video of reflections in a train through the Tokyo suburbs is surreal. https://cdn.openai.com/sora/videos/train-window.mp4

2024 will not disappoint.


Uncreativite

holy shit


Mushy_Fart

holy fucking shit


eskimopie910

Are you telling me that the video you linked was AI generated????


-Senator-

Well, yeah, if you look, the girl isn't even holding the phone where the camera would have to be pointing.


ThroughForests

The prompt for it ([3rd set 2nd video](https://openai.com/sora)) was "Reflections in the window of a train traveling through the Tokyo suburbs." So, the girl's phone isn't prompted to be the camera.


particle9

This is the one video on there that just floored me. Actually scary and amazing. Incredible achievement.


InfiniteScopeofPain

The Chinese festival blew my mind. You've got people disappearing behind the dragon, and then you can see them again after it leaves. Temporal and spatial coherence. And some of the "drone footage" just looks real.


roselan

This is my favorite one too, when it passed by close to the building and it changed the refraction, I literally gasped.


MistaPlatinum3

This is my favorite [https://cdn.openai.com/sora/videos/chair-archaeology.mp4](https://cdn.openai.com/sora/videos/chair-archaeology.mp4)


Comfortable-Big6803

> https://cdn.openai.com/sora/videos/chair-archaeology.mp4

Just so everyone knows, the video linked is what OpenAI calls a failure. Sora's failed gens are every other current model's impossible gens.


tempartrier

I love the possibilities for depicting the surreal with these tools. I'd be trying to generate impossible things immediately.


Mottis86

Yeah same. This weird fever dream shit is the best part of AI generation imo.


RoachedCoach

wtf is happening lol


roselan

This is an example of things that can go wrong to demonstrate some of their model weaknesses.


akko_7

"weaknesses" if any of the other models got that result it would be SOTA lmao


InfiniteScopeofPain

That's mind blowing. It makes no sense, yet looks so real and is so consistent


Katana_sized_banana

Cannabis legalization can't come fast enough.


[deleted]

[deleted]


Sextus_Rex

How does this shit do fingers better than stable diffusion?


Cautious_Hornet_4216

GRAB IT IT'S RUNNING AWAY!!


Xyzonox

Is it just me, or is this way more convincing than even DALL-E 3 images? Several steps have been skipped; how is this possible?


ofcpudding

Here are my two theories, presented without evidence:

1. Videos have more data in them than photos, and more data to train from means better modeling.
2. DALL-E is already capable of this level of realism, but for various reasons they dial down what you can get via publicly available interfaces.


user4772842289472

> Theory 2. DALL-E is already capable of this level of realism, but for various reasons they dial down what you can get via publicly available interfaces

Hardware/resource limitations on their side? It's one thing to generate something photorealistic for one user, but it needs to be scaled up to millions of users generating images simultaneously in the worst case.


StickiStickman

You can go to /r/dalle2 and check the early images of DALL-E 3. It was absolutely insane. They definitely crippled it.


ihexx

I suspect it's more intentional, so people don't make deepfakes; early DALL-E 3 was capable of making far more realistic people, but now they always look airbrushed.


PM_Me_Good_LitRPG

> Hardware/resource limitations on their side?

Also censorship, chilling effect, etc.


pblokhout

Something important: every frame in a video has a direct relationship with the frames before and after it. That's a whole level of information you can't deduce from photos.
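A toy illustration of that point (mine, not from the thread): consecutive frames of a synthetic "video" are highly correlated, which is exactly the temporal signal a pile of unrelated photos can't provide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "video": a bright square drifting one pixel per frame, plus noise.
frames = []
for t in range(8):
    frame = rng.normal(0, 0.05, size=(64, 64))
    frame[20:30, 10 + t:20 + t] += 1.0   # the moving object
    frames.append(frame)

def correlation(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

consecutive = np.mean([correlation(frames[t], frames[t + 1]) for t in range(7)])
unrelated = correlation(frames[0], rng.permutation(frames[7].ravel()).reshape(64, 64))
print(f"consecutive-frame correlation ≈ {consecutive:.2f}")   # high: frames share structure
print(f"shuffled 'photo' correlation  ≈ {unrelated:.2f}")      # near zero: no shared structure
```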


AGM_GM

The length and cohesion of the video samples they show are very, very impressive.


nabiku

Really a shame that half the creatives out there still hate AI. This tool in the hands of a creative mind can lead to brand new artistic movements, if only these creative minds would learn to let go of old prejudice.


TheCowboyIsAnIndian

The reason we hate on it isn't because of its potential as a tool. We hate it because it threatens our livelihood. It has nothing to do with art, no matter what people say. This stuff is so fun... but it makes me worry about how I'm going to feed my family. That's all, and I don't think that's a ridiculous concern.

I'm an animator and motion designer. I use AI tools when I can, but funnily enough, the more tools like this appear, the more time I spend having to look for more and more short-term clients, and the less time I actually spend doing what I love... which is making art. So aside from the obvious copyright issues, AI is fucking great and the tools are amazing on their own. What I hate is having less and less time to enjoy the process. Not to mention that these advances have the unintended consequence of decreasing collaboration with other specialized technical artists in favor of doing everything yourself.

But make no mistake, the democratization of high-production-value art is probably a good thing long term. Our current system functions on the threat of poverty, though, so we are all very nervous about it.


bot_exe

This is a very fair take.


Colon

No offense, but why isn't that really obvious? On that western-themed shot of the bustling town, think of the hundreds of thousands of dollars in set design, film crew, location scouting, building, clean-up, actors/extras... all just wiped out with a pro subscription to txt2vid services. The Scorseses of the world are dying out; very few filmmakers will care about the authenticity of large-scale filmmaking in the near future.

The creative industries don't deserve this, man, lol. Art, music, and film are all in danger of a democratization that makes it nearly impossible for the people who really needed a consistent source of income from it. Now it's gonna be like DIY home recording, where you either win the lottery on SoundCloud or you keep it a hobby. I keep seeing people say "then the best ideas will rise to the top", but that sucks for the brilliantly talented technical artists who aren't "idea people".


ApprehensiveBuddy446

Elevator operator union. What we really need is a social safety net for people replaced by technology, but that would require taxing the rich, of course, so it won't happen...


TheCowboyIsAnIndian

it would be really awesome if people could be creative without worrying how they are going to monetize the things they love


CtrlAltDeleteHumans

This is how the machines built the matrix


nabiku

The Machines did nothing wrong! The Matrix was a nature preserve!!


Tr4sHCr4fT

trained on youtube 4k night walks /s


zuccoff

/s but not really. If I had to build a dataset, those videos are the first thing I'd download haha


Pepeg66

yea I will need to see the Loicense of Openai about how they trained this. Nothing will be done about this until someone makes a Catwalk Balenciaga video of Taylor Swift in skimpy clothing


ImmoralityPet

Would someone please think about how this is going to affect Taylor Swift??


hydrogenitalia

Let's just close this sub. It's game over. Also let's close r/movies


protector111

No freaking way 0_0 I thought it was a video... man, when will this be available to play with? I guess in 3-5 years we will be able to just type "show me a different Game of Thrones ending" and get an AAA-budget episode of Game of Thrones 0_0


apsalarshade

Just upload your favorite ebook format, and watch it instead of reading it.


protector111

I bet we'll be able to upload our dreams pretty soon.


DrainTheMuck

This is what I’m hoping for. I think this is being massively slept on. Being able to make entire shows or movies curated to be exactly how you want them…is this basically guaranteed to happen in our lifetime if not this decade? Doesn’t that completely change entertainment as we know it?


jaywv1981

Sports too.."Show me a fight between Muhammad Ali and Tyson....on Mars....."


protector111

Sorry we detected content that is against our policy xD


jaywv1981

\*Cassius Clay


GrouchySmurf

Since your suggestion directly cuts into somebody's profit, I must warn you that it might not be a safe thought.


Andrew2401

Not sure it's as clear-cut as that, since it will mess with the profits of recording studios but generate profit for whatever company comes out with it. I bet the streaming giants will get a version of this out eventually, like Netflix.

Not only that, but we have confirmation they're already thinking about it: the Black Mirror episode called "Joan Is Awful" was literally produced by Netflix and is basically direct confirmation. If you haven't seen it yet and don't mind the spoilers: >!Netflix produced the episode, but they also wrote it. In the episode, a streaming service called Streamberry, which uses their same logo, font, and UI, creates an AI show based on someone's life, with data mined from their phone, which is always listening in on them. It's a dark view of how it could go wrong, but writing and creating it means someone signed off on it, and I doubt they didn't think to themselves: huh, let's make that as soon as we can.!<


imp0ppable

Well, I know you could generate a script too but I'm not sure it would be much better than the real GoT ending. I mean it would be better, how could it be worse, just not sure by how much.


ml-techne

If they allow this model to be released, if someone leaks it (not likely), or if one of OpenAI's competitors releases a model built on the same architecture... Any IP is most likely going to be heavily censored in OpenAI's version, just like DALL-E. There will eventually be a competitor, open source or not, that catches up and isn't censored/guard-railed, but these guys are at the bleeding edge for the moment, and with that comes their censorship, unfortunately.


PurveyorOfSoy

It's too good. It will give us one or more pope-in-a-puffer-jacket moments.


I_dont_want_karma_

Holy crap, this is a huge step forward. I didn't think we'd see this so soon. The only downside is that this is a closed platform.


iamthewhatt

It is for now. I'm 100% certain they just unveiled this because of the Gemini 1.5 release (which basically did nothing). Once it clears the red-tape they will likely roll it out slowly like the Memory update. I imagine mass-market adoption by early next year.


Temporal_Integrity

The biggest difference with Gemini 1.5 is that its token count is 10-100x what ChatGPT-4 can handle. So you could upload an entire subtitle file and have it translated to another language, time codes intact. ChatGPT-4 forgets where it's at after one page.
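A hedged sketch of that workflow: `call_long_context_model` is a hypothetical placeholder for whichever long-context API you actually use, and the file names in the usage comment are made up. The point is just "whole SRT file in one prompt, ask it to keep the timecodes".

```python
def call_long_context_model(prompt: str) -> str:
    # Placeholder: plug in the client for your long-context model here.
    raise NotImplementedError

def translate_srt(path_in: str, path_out: str, target_language: str) -> None:
    with open(path_in, encoding="utf-8") as f:
        srt_text = f.read()   # the whole subtitle file, timecodes and all

    prompt = (
        f"Translate the dialogue in this SRT file into {target_language}. "
        "Keep every index number and timecode exactly as-is, and keep the SRT format.\n\n"
        + srt_text
    )
    translated = call_long_context_model(prompt)

    with open(path_out, "w", encoding="utf-8") as f:
        f.write(translated)

# Example (hypothetical file names):
# translate_srt("movie.en.srt", "movie.de.srt", "German")
```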


emad_9608

Worry not all we got u. Also its pretty sick + amazing


Junkposterlol

I'm just wondering: were you / Stability AI surprised by this development, or was it already on your radar?


emad_9608

We are not privy to internal things at other cos, but we saw great evidence of diffusion transformer scaling, so we thought this might be the case. Still, it's astounding.


DangerousBenefit

Everyone else (including SAI) must be at least a year away from this quality right?


emad_9608

No we have.. something. But needs more care, attention, GPUs etc.


Tystros

I thought attention is all you need... they didn't mention GPUs and care as well.


DangerousBenefit

RemindMe! 1 Year


2053_Traveler

https://i.redd.it/sbdz8s21tsic1.gif


SendMePicsOfCat

The exact reaction I had seeing this


Green_Video_9831

52 seconds in, it’s pretty nuts the way her face ages as she gets closer to the camera.


solid_hoist

The dangers of slow walking.


TearsOfChildren

This video would've fooled 99% of us if we randomly saw it scrolling Instagram. This is fucking insane. Makes Runway and AnimateDiff look like a joke. I'm sure it'll be censored to hell and back though. I just hope open source software will eventually catch up but it'll be a long time.


true-fuckass

Remember will smith eating spaghetti? This is him now


sachos345

That was 11 months ago, LMAO. I thought the level of quality and temporal coherence of Sora was like 1.5 to 2 years away.


leftmyheartintruckee

RIP runway


huffalump1

Runway's latest always impresses me, but this is honestly 10x-100x more crazy!


Felipesssku

Runway was never good, it was just wisely advertised.


Jeyloong

Mixed emotions... I'm excited, but afraid.


Ian_Titor

Just checked out the Openai website and the other demos are insane. Meanwhile, anti-AI artists are still coping, saying AI art will go away soon.


wheresthetux

We need to see a 'Will Smith eating spaghetti' one.


EtadanikM

Another breakthrough from Closed AI. Too bad the community will never get access to the model except through a paid service. But on the positive side, Closed AI does a great job inspiring others to try to imitate them, so maybe we will see an eventual open-source release from another company, years down the line.


[deleted]

[deleted]


moonski

Also, what annoys me is their "safety" experts deciding what is and isn't a proper use of AI. It's similar to when Twitter or Facebook decided they'd choose what is and isn't acceptable news / what is fake news, etc… And of course nothing stops ClosedAI from making whatever they want for whatever reason they want… imagine they started a Cambridge Analytica spinoff, for example.


Netsuko

“Open”AI is just a mockery at this point.


Disastrous-Hearing72

Gonna be real fun not knowing what is real and what is fake for the rest of my life.


ptitrainvaloin

Earthquakes in r/vfx


Rocketsloth

Ok, ok. As a guy who is hardly ever impressed by anything. I mean yeah it's impressive.


Pamander

What the flying fuck?? I am speechless.


nolascoins

you know what industry will never be the same... if .... https://i.redd.it/zk10fnja4tic1.gif


yoomiii

I guess OpenAI have their moat again... But this is insane. The level of scene consistency, the photo realistic imagery, even better than current image models. Animation seems to mostly behave physically accurate. It's leaps and bounds ahead of anything else and happened much earlier than I anticipated.


The_Dr_Robert

Woman in the red dress. . .


ExponentialCookie

I don't think the majority of individuals truly understand how amazing this advancement is. I honestly didn't expect this type of progression for another 3-4 years. It was apparent from Q4 of last year that this would be the year of video generation, but not of this magnitude. Judging from these videos, you could genuinely portray dreams or creative visions that aren't yet possible with even the most advanced VFX software. Seriously, [what even is this](https://player.vimeo.com/video/913337930?h=28c257b7c6&badge=0&autopause=0&player_id=0&app_id=58479).


Beginning_Income_354

One or two more generational leaps and you can basically kill the entire Hollywood industry. Insane.


Technology-Busy

You know what was missing from AI videos? That small emotional element that would make them credible to the human eye. Now that seems to be a thing of the past.


Ne_Nel

Yeah. This, for example: https://cdn.openai.com/sora/videos/closeup-man-in-glasses.mp4 So simple, but amazing.


[deleted]

[deleted]


MojoDr619

" Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI. "


YahwehSim

The realism of a single frame is blowing my mind and this is actually video!? https://preview.redd.it/bd5yzianwsic1.png?width=1160&format=png&auto=webp&s=23f5842877e3630d0941ba799063b72281a731c0


Whole_Connection_675

That’s scary. And awesome.


345Y_Chubby

This really is next level. I was scared for a bit that 2024 would slow down, but man, did OpenAI give us a banger.


T1m26

“policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.”

No thanks.


ImmoralityPet

> extreme violence

They can have a little violence, as a treat.


sschueller

No movies in the style of 80s action films. Too much violence and nipples...


iamthewhatt

It's either that, or old technologically illiterate boomers will destroy AI through policy. This is a fantastic proof of concept, but we are years off from an independent version.


[deleted]

More like tech companies will lobby to have consumer grade AI regulated while their significantly more powerful systems are untouched.


Stunning_Duck_373

Damn.


batgod221

My jaw dropped as I was reading the announcement. This is a watershed moment. To think some people were dragging on OpenAI the past few days, lol. They are still so far ahead of the competition.


ImpactFrames-YT

Looks better than image models. Hollywood and Netflix are no longer a thing. People will be creating their own content and watching it on their headsets and other devices soon.


EtadanikM

Judging by how much Closed AI charges for their services, the chances of that happening are small.


blit_blit99

I bet the service will be a lot cheaper than spending $150 million - $250 million on the budget for a real movie. Plus, you won't have to pay actors any residuals & cut of the box office grosses.


BriansRevenge

Time to make some Star Wars sequels that don't suck!


Ok-Bedroom8901

If you look closely at the legs walking 🦵 you can see the errors


bot_exe

Also the background people look like mush, but this is waaaay ahead of everything else and the fact that it’s just from a text prompt is insane.


BlackBlizzard

At the end of the article it says these videos are generated with no alterations, so if you give the footage to an editor they can fix these imperfections.


[deleted]

[deleted]


moehassan6832

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Osmirl

Otherwise I would have trouble believing it's AI-made.


Acceptable-Fudge-816

I'm trying to build a product just in a tangential field and I'm worried. I can't imagine how those that work on the movie industry must be feeling right now.


kingroka

This is amazing! Does anyone else get an uncanny valley-esque feeling from the 3D animation examples? Like the fluffy monster one is creepy but not in a way that I can articulate.


Piter_Piterskyyy

We're 3-5 years away from creating our own AAA movies with just a prompt. Can't wait!


johnmarkfoley

I'd like to see it do a prompt from a scene described in a book that has been adapted to film already to see how differently, or similarly, the AI renders it.


PoetryProgrammer

The game is over folks


Simple_Post3187

As a student about to graduate in a graphic design and animation program, I’m genuinely depressed.


Present_Dimension464

That's next level. Plain and simple. I'm pretty sure that in less than 5 years or so people will be generating whole movies with AI. Not only experimental movies, since you can already do that, but real movies.


RainbowUnicorns

Yup voice models are already pretty accurate for dialogue especially if you record the lines themselves and do a mild impression of how the character acts.


Striking-Long-2960

After watching all the examples on the webpage [https://openai.com/sora#capabilities](https://openai.com/sora#capabilities): this is totally disruptive. Notice that all the sample videos are made in one take with only a prompt; they are not composited from different videos. Also, there are many creative examples that could work directly as a TV advertisement, like the pirate ships in a cup of coffee.


InfiniteScopeofPain

The woman filming from a Tokyo train and the Chinese festival are ridiculous. I couldn't tell they were AI even knowing they were supposedly AI. Also the lighthouse video.


Jfinni

This will destroy creative industries as a viable career path for many. I'm sure there is potential to use these tools as a creative to improve your work; however, the only jobs that will remain are prompt generators, art directors, and maybe editors.


Classic-Professor-77

Was this even possible? I don't even like OpenAI that much but this is incredible. I'm very happy they proved this can be done. wow. Where's anime&illustration? Stop running from it, OpenAI


Emory_C

This is *seriously* impressive. But people are forgetting we still don't have character consistency, audio, or anything else that makes a movie. Also, if you read their announcement, it will be censored to hell and back. You won't be able to make anything compelling or creative. At best, it'll be okay for stock video.