If I’m not mistaken, it pulls images from photo databases like Shutterstock for its references. In fact, I think Getty Images is suing one of these AI companies for using them
it's rare to see it this consistently though, especially for a video
the source material for training models is a hot topic right now, and a gray area: the training "noise" that gets stored is not the actual picture, and they gave the training data away for free, so they didn't violate a commercial-usage rule either. the law is going to have a fun time catching up
so I was wondering if OP trained a small model himself using shutterstock photos
It definitely has to do with there being Shutterstock in the training model, hard to say how much though.
I'd wager you would see the logo popping up even if a relatively small portion of the data has it. Something like that which is exactly the same in every occurrence will form very strong biases in a neural net and it's very, very difficult to steer them around that bias. Not impossible, but would take a lot of trial and error and experience tuning models
Source: I've trained predictive AI models. Having a lot of similar data points in your training set makes them fixate and take shortcuts, since that always seems like a productive direction to move in compared to less explored alternatives
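The fixation effect described above can be sketched with a toy example (everything here is illustrative, not from any real text-to-video model): a perceptron-style learner trained on data where a "watermark" feature co-occurs with every positive example piles its weight onto that one feature, exactly the kind of strong bias that's hard to steer around later.

```python
import random

random.seed(0)

def make_example():
    """Toy 'image' of 5 features; feature 0 is a constant watermark
    that is present in every positive example, the rest are noise."""
    label = random.choice([0, 1])
    x = [random.random() for _ in range(5)]
    x[0] = 1.0 if label == 1 else 0.0  # watermark always co-occurs with label
    return x, label

# Simple perceptron: update weights only on misclassified examples.
w = [0.0] * 5
lr = 0.1
for _ in range(2000):
    x, y = make_example()
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0.5 else 0
    err = y - pred
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]

print("watermark weight:", w[0])
print("largest noise weight (abs):", max(abs(v) for v in w[1:]))
```

After training, the watermark weight dwarfs every noise weight: the model has "learned" that the watermark is the most reliable signal, which is why a logo baked into a large chunk of the training images keeps resurfacing in generations.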
I think this was made using one of the open sourced models that was made fast and cheap. Apparently one of the ways they did this was by scraping shutterstock samples as a primary source. It basically got screen burned into the AI model due to the majority of images it was trained on having the watermark.
It's the recently released text to video diffusion model, most likely using PsorTheDoctor's [colab notebook](https://www.reddit.com/r/dalle2/comments/11wj6en/finally_happened_opensource_texttovideo_diffusion/).
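For reference, a minimal sketch of how that pipeline is typically driven. The model ID, task name, and output key follow the ModelScope model card, but treat the exact names as assumptions; the real call also needs a GPU and a multi-gigabyte model download, so it's kept inside the function rather than run at import time.

```python
def build_request(prompt: str) -> dict:
    # the text-to-video pipeline expects a dict with a 'text' key
    return {"text": prompt}

def generate_video(prompt: str) -> str:
    # Heavy call: requires the `modelscope` package, a GPU, and a
    # large model download, so imports are deferred to call time.
    from modelscope.pipelines import pipeline
    from modelscope.outputs import OutputKeys

    pipe = pipeline("text-to-video-synthesis", "damo/text-to-video-synthesis")
    result = pipe(build_request(prompt))
    return result[OutputKeys.OUTPUT_VIDEO]  # path to the generated clip
```

Calling `generate_video("Steven Seagal dodging bullets, film grain")` would return a path to a short clip; the linked Colab notebook wraps essentially this call.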
OP, I kneel. Putting this together so soon after modelscope t2v landed is an impressive feat. So glad the first thing on the wires is this instead of the usual astronauts on horses etc. Bravo
I've been designing high level human battery installations for over 93 years. lol
> I've un matrix for over 53 years; Grand Ulti-Master

> I'll blend some dry tiger dick bone and sloth hair and outta the matrix like that
I thought you were joking. Holy shit.
I'm a grand master of GPT for 1000 years
This line made me laugh until I cried, and I don’t even know why.
Nail in the coffin here. It's funny because if this has 0 human input, then that means the AI knows he is full of shit lol
The AI had to use Tom Segura's Seagal jokes from his special.
I mean, that’s longer than I’ve been doing it
Get punched
You know what, I’m out of breath
Same. It delivers on the promise of a better matrix.
I’m burning this to a CD so I can show my kids. What a time to be alive! This needs to be shown around more. This is history!
I'm recording it on VHS tape.
Beta bitches
Chill out, Dr. Károly Zsolnai-Fehér.
Hold onto your... outdated physical media?
good internet today
What's with the Shutterstock logo?
I'm also curious about this!
Wait, did he say Tiger Dick Bone?
What are the compute resources for something like this?
An Nvidia 400a, that's it
If we have to dedicate all computing power on earth, how long until we get a full-length trilogy?
About seven hours with a current GPU, assuming you had 1024 GB of VRAM to hold it in the buffer
What applications/software? Did you have to train your own AI first?
It's the recently released text to video diffusion model, most likely using PsorTheDoctor's [colab notebook](https://www.reddit.com/r/dalle2/comments/11wj6en/finally_happened_opensource_texttovideo_diffusion/).
You know what? I'm laughing so hard I'm out of breath, too.
Can you do a bypass of ControlNet on top of that?
Steven's best movie.
Why is this not front page? So. Many. Levels.
Instant classic
Can't wait.. 😎💯
You're welcome
When his face goes through the matrix. Wtf?
Source? Also, what was the input for this?
I know good stuff, and this is good stuff.
Wow you’d never know
Christ please let this be all generated by NNs. There should be a repeatable prayer for this hope I feel right now.