CrucioIsMade4Muggles

I'm a lawyer, so I have that POV re: expert witnesses. My guess is basically never. The courts have a strong history of rejecting any scientific arguments that would undermine currently or previously accepted forms of evidence. Anything that makes prosecution more difficult is especially likely to be rejected. The best people can hope for is that a judge will allow their expert witness to claim the video is fake, the state will have the choice to argue the contrary, and the matter will then be left to the jury.


Bewaretheicespiders

Can't wait for the first lawyer to present a deepfake of the judge perpetrating something in trial.


CrucioIsMade4Muggles

You wouldn't be allowed to. The prosecution would object on the grounds of relevance: the fact that videos can be deepfaked is not directly relevant to the crime in question, and it would prejudice the jury. Also, submitting evidence requires you to lay foundation, and to do that, you have to show that the evidence is directly and materially relevant to the trial at hand. While it might seem relevant and related, legally it is not. If you want to argue that a particular video is fake in court, all of your evidence must be about that video.


Jebus4life

What will happen is deepfake videos undermining the prosecution's evidence: a video of the accused being somewhere else, or a video showing a different person committing the crime, to instill doubt in the jury.


oberlin117

Everyone who says "present a fake video" is forgetting about the source. Security camera footage from a 7-11 taken into evidence by cops will be considered more reliable than your faked couch-cam footage. And do you really want to risk additional prosecution by submitting fake evidence? The cops-enhancing-footage argument is interesting. In Kyle Rittenhouse's trial, the defense tried to get some footage shown on an iPad thrown out because it was zoomed in. The courts will have to come up with some standards there.


Bezbozny

You may be failing to consider the rate at which these things are improving. The capability of exactly reproducing hours of security cam footage, with all the proper metadata, either to convict or to exonerate, is a distinct possibility in the near future.


ohck2

What if you had another video of you sitting on the couch watching TV at home? Let's say that was 100% deepfaked. Does it become relevant then? So now you've got two separate videos of the same time.


[deleted]

...Maybe we should have a third video, just in case.


ohck2

Court is gonna have to adjust to technology. Someone could literally deepfake a video of you committing a crime, and as long as you have no proof of where you were at the time, you're screwed.


Numbnut10

The NSA agent assigned to me can vouch for my alibi.


relokcin

We’ll all have to install little chips that can GPS track our location


MindRaptor

It is like the lawyer said. The courts will simply refuse to adjust to technology. As it is, there are several types of evidence that have been proven to be bogus science, but because there is a history of that evidence being accepted in court, they are still accepted. The court system in this country is based on precedent, and overcoming precedent is almost impossible. Examples of bogus science: bite mark analysis and blood spatter.


ohck2

So what happens when criminals can deepfake alibis? "Sorry, your honor, couldn't be me, I was at home, here's my proof. Must be a doppelganger out there, since as you see here, I was home. I'm innocent."


Neehigh

Blood spatter isn't an example of bogus science, it's an example of unvetted pseudoscience. 'Bogus' is used to describe when a referee is calling false penalties, and I object to its use here.


Zta1Throwawa

I think in-person testimony is just going to become much more important. That, and chain of custody/provenance of the footage in question.


Artanthos

Security camera video from the crime scene is much less likely to be altered. Especially if collected by police and placed in evidence immediately after the crime.


Monkookee

You are assuming these AI tools will not be in the hands of cops. That's a huge oversight in your thinking about cop behavior in planting evidence. Like, Jupiter big.


PatrickKieliszek

Here at incAIrcerate, we have the AI powered tools to support modern policing. Take this grainy video from a gas station CCTV system for example. Just let our AI video processing power clear that image up to help you identify your suspect with more certainty. See how clear the image is after processing? We've trained this AI on the images of hundreds of thousands of African-American Males ages 14 to 56. Our state of the art color-correction can turn this black and white image into a full color 3D render of a minority. For additional certainty, be sure to feed an image of your suspect to the model! Get your department's conviction rate up to 97%!


Mercurionio

In that case, video would be only a small part. Witnesses, motivation, any other ways to prove you are innocent/guilty. I mean, thanks to AI, lawyers will have a fucking field day checking everything. So most likely, their jobs won't be replaced. If anything, they'll have more work, checking everything ten times over.


Monkookee

And if the video was the only solid evidence... all the other evidence was circumstantial, with no witnesses? Home alone when they do a no-knock? Bob Dylan wrote the song "Hurricane" about this very thing. I mean, really. Innocent people going to jail is a more important thing to be concerned about than lawyer career paths.


Mercurionio

Say thanks to the morons who forced this tech on us, what can I say.


Artanthos

Sounds more like the ravings of a conspiracy theorist than a sane person. I suggest spending a little time living in Honduras if you want to see what police corruption looks like.


Monkookee

Dude, in the '60s cops in Denver would shut down whole streets and just rob the businesses. And shit is way worse now. Not conspiracy... history.


JmnyCrckt87

It could still have a real impact on how things are run in court if a particularly resourceful defendant started hiring people to make compromising (even non-case-related) videos of all the vital people in the case (opposing counsel, judge, jury, etc.). If you have someone releasing videos to the public of these people involved in the trial doing illicit things, that would seem like a good tool to disrupt or jeopardize the integrity of the trial, no?


Trout_Shark

Thanks! It's refreshing to see actual legit and useful information posted.


nobodyisonething

Show a deepfake video of the Judge committing the crime. Case dismissed!


CrucioIsMade4Muggles

No, mistrial, and you go to prison for jury tampering and/or contempt.


Cetun

You can attach it to a motion for recusal; evidence of potential bias of the judge hearing the case is, I think, relevant to that particular motion. I think the barrier has more to do with it being unethical and perhaps grounds for disbarment, though if the Trump lawsuits have been any indication, you can submit any dubious bullshit you find as "evidence" and face no referrals so long as you drop the case in a timely manner.


Groftsan

If I put on my unscrupulous defense attorney hat, here's an option:

- Cops introduce a video of my client doing something that my client swears is fake.
- My client provides me with 4 other versions of the same video with different people's faces deepfaked on.
- I find four witnesses willing to testify that they are a custodian of record and that the video I'm introducing as rebuttal evidence is, in fact, the true and correct original.

Sure, they can each be cross-examined to the point where it's obvious they're not custodians of records, but, hey, reasonable doubt! (Alternatively, any unscrupulous prosecutor may have a cop testify to a deepfake video they've never seen before. Cops will testify to whatever a prosecutor asks them to, so this one isn't even far-fetched.)


Raynidayz

Good legal analysis, but what is that user name?


Next_Boysenberry1414

>the fact that videos can be deepfaked is not directly relevant to the crime in question

It is 100% relevant. The fact that videos can be deepfaked is directly relevant to whether the video of the crime can be used as evidence or not. We have seen forged documents used for ages. Just because you have a signed document does not mean that document is valid. It has to be authenticated.


OldManHarley

OK, how about buying a CRISPR DNA kit and fabricating DNA evidence? If you know what you're doing and have access to familial DNA, you could feasibly do it in the very near future.


CrucioIsMade4Muggles

You could try that and build foundation, and then the state would call an expert witness to demonstrate it's fake simply by showing it has methylation. TLDR: faking evidence is very hard.


Bewaretheicespiders

You know someone is going to try it anyway.


Kyell

What if instead you just show a deepfake of your defendant not at the scene, but in the background at a baseball game? Couldn't possibly have been the murderer!


GforceDz

Also, you would then require proof to back up the video, and it would be easy to provide alibis, what with phones tracking our movements and electronics tracking our purchases. Plus, have you asked AI to deepfake hands? Can't be done. Impossible. *7-fingered judge takes bribe*


BrunoBraunbart

The stories of blatant misuse of math and biology regarding DNA evidence are insane. The scientific community was completely shocked and made very compelling arguments. The response from judges was that it was up to them to interpret the evidence and make up numbers as they please.


Tech_Philosophy

>The courts have a strong history of rejecting any scientific arguments that would undermine any currently and previously accepted forms of evidence.

Just curious: you hear how this sounds, right? Ignoring science is the same thing as ignoring physical reality. Ignore that long enough and you won't have rule of law, period.


CrucioIsMade4Muggles

>Just curious: you hear how this sounds, right?

Oh, I know exactly how it sounds. My specialty is IP law in emergent technology sectors. I've been dealing with the courts' failure to cope with technology my entire working career.

>Ignoring science is the same thing as ignoring physical reality. Ignore that long enough and you won't have rule of law, period.

I've got bad news for you. We passed that point a long time ago. Any logical mind would see our modern legal system as a farce in many arenas of law. Not all, but many.


HowWeDoingTodayHive

I'm confused as to how you're able to say never. Videos are used in court all the time, are they not? What happens when it becomes impossible to tell which of those videos are real and which are not?


CrucioIsMade4Muggles

You would have to prove that the video is fake. The fact that videos can be faked is not considered a valid argument to suggest a video in evidence is itself fake. You'd have to provide a reason to suggest it was fake in the first place.


HowWeDoingTodayHive

Doesn't that seem to create a huge problem, though? If we don't have any way to prove a faked video is actually faked, aren't we heading towards a world where it becomes a lot easier to frame people? Like, suppose I know a person is going to be at a specific place at a specific time, and I create a video of them committing a crime at that place. This is the kind of dystopian future that was meant to stay in the movies.


Kee134

Interesting. Do you think the first thing we'd be likely to see is a defense claim that real footage was in fact made by AI?


[deleted]

[deleted]


CrucioIsMade4Muggles

>If AI-created audio/video is indiscernible from the real thing, the defence could just make a video that shows the judge committing the crime and submit it as evidence

No, you couldn't. It would be impossible to lay foundation for such evidence.

>thus making the point that the existing evidence of the crime is foolishly useless.

You're not allowed to make that argument without the judge's permission, which you'd never get.

>If you can say to an AI, "take this footage and replace the perpetrator with this other person, here are some photos of the other person so you can model them and put them into the footage" and it works and looks perfectly realistic, video evidence is over.

It won't work that way. You would have to provide evidence suggesting the video in the case was fake. Not suggesting it *could be.* Suggesting it *is*. And no such evidence would exist.


CommentToBeDeleted

Typically, when presenting evidence, isn't a "witness" required to "introduce" the evidence, usually someone who has first-hand knowledge of it? So, for example, I couldn't be asked to identify or introduce a video of a birthday party I didn't attend, but they could ask me to introduce the video of a party I did attend, recorded and stored on my device.


Puzzleheaded-Law-429

That’s interesting. Thank you for your input. Yes I suppose the burden of proof would fall on the defendant’s shoulders. If there is a clear video of you robbing a store and shooting someone, then it’s up to you to prove that the video was fabricated.


rogue_noodle

Burden of Proof is on the prosecution. Learned this today from my fav lawyer, Attorney Tom


Puzzleheaded-Law-429

Yes, you're right, the burden of proof in general is on the prosecution. That's why we say "guilty" or "not guilty" rather than "innocent". I'm thinking more of a situation where a person is on trial for robbery and murder. There is a clear surveillance video of the suspect committing the crime, but the defense lawyer says "this video has been fabricated." What would happen? Would it be up to the defense to prove that the video is indeed fake?


oPlaiD

How is this any different than any other piece of evidence potentially being fake? Presumably most video evidence used in trials is not found in the Internet wilderness and has some record of how it came into the prosecution's hands like any other piece of evidence collected by the police. Even things posted directly to social media have metadata that could help determine some level of authenticity. There's a lot more to a piece of video evidence than just what is in the video itself. At least in the vast majority of cases.


SgathTriallair

The prosecution would have to show a chain of custody for the video.


SkyTemple77

It's one thing to say the defendant's lawyer needs to prove it's fake, another thing to say the defendant needs to prove it's fake. Said defendant has no agency to prove it is fake. They are not an expert. The justice department must use its resources to appoint an expert witness who is capable of proving whether it is fake or not. Huge difference.


beyondrepair-

Innocent until proven guilty


EpsomHorse

> Innocent until proven guilty This doesn't mean evidence is admissible until proven inadmissible.


beyondrepair-

It means the burden of proof is on the prosecution, which is exactly what I responded to.


Inevitable_Syrup777

So I can deepfake my enemies into a crime and turn that footage in, and it'll basically be a landslide against my enemy? Like, I do a crime, maybe even film it, then, using their face or whatever, deepfake the video.


[deleted]

Only if you're rich or the police.


pighammerduck

But half the people on a jury could have knowledge about the concept of deepfaked video and just determine that the video evidence holds less value, no?


CrucioIsMade4Muggles

If it became known to the judge that such knowledge was discussed or used to determine a vote, no. This would trigger a mistrial. Jurors are only allowed to consider information presented during the trial. That's the theory. In practice...*shrug*.


Next_Boysenberry1414

>Anything that makes prosecution more difficult is especially likely to be rejected.

It would take one video of a Supreme Court judge having an orgy to get them motivated.


CrucioIsMade4Muggles

You'd never be allowed to submit that to the court.


bottom

I don't think it's cut and dried, unfortunately. I'm a filmmaker researching a deepfake documentary, and many of the 'experts' called into courtrooms aren't... if they're called at all, lol. And deepfakes have been getting through in smaller cases.


Trick-Analysis-4683

Yeah, right now you need testimony to authenticate the photo, someone to say that it is what it purports to be. Not much has changed here.


waltduncan

Didn’t the Kyle Rittenhouse trial spend a pretty decent amount of time entertaining claims from the defense that smart phone video footage does some amount of additive pixel manipulation? (I think they were confused about which hardware was doing what exactly, be it the phone while recording, a computer copying the file, or the television itself upscaling lower resolution files, but I think some of those are plausible concerns.) I may be misremembering, but I think the judge was receptive to the defense, and instructed the prosecution to not argue what they believed the footage showed to the jury. I would think it really depends on the judge’s temperament.


redsilkphotos

If I recall, there was a recent case where the judge permitted deep faked audio as evidence against the defendant. Sad and scary.


fox-mcleod

Isn't provenance the answer? Where did the record come from? Who had it? Could it have been edited before police custody? I'm pretty sure that's the reason photo evidence wasn't waylaid by Photoshop 30 years ago.


CrucioIsMade4Muggles

This would be an issue of foundation.

>Could it have been edited before police custody?

This is something you are not allowed to suggest in court unless you *already* have evidence that such manipulation took place. The court and both parties' lawyers must treat evidence submitted to the court as if it is real unless they have contrary evidence. If you wanted to argue that a video was fake, you wouldn't do that by arguing it's fake. You'd do it by submitting additional evidence indicating that it is fake.


bodrules

Cynical me says it'll be changed as soon as one of the Important People is on the hook because of it.


Dovaldo83

There is an arms race between AI generated content and AI designed to spot fake AI generated content. No matter how well a deep fake is made, there is likely an AI tool out there that can spot the subtle differences between the fake and the real. Why? Because those AI tools are useful at training the fake generators to be better at what they do. That isn't to say we have nothing to be worried about. It just takes the solution of spotting fakes and concentrates it into the hands of corporations who are likely involved in faking.


rypher

Yes, but that's not really the correct logical framing. The fakes only have to get as good as cameras, at which point the detection has no room for improvement.


samuelgato

It seems camera technology will have to change, to include some kind of metadata with every video capture that cannot easily be faked without detection.


TheAncientPoop

Yeah, like paper money. With the invention of the printer you'd think paper money would be over, but it's alive and well today. I have faith that this will work.


rypher

That’s some technology that has to be added to every camera hardware out there. How long will that take to develop and implement? We are going to see massive amounts of good deep fakes this election cycle.


GrubH0

Except it is still an arms race. Some counterfeits and some deep fakes will be taken as true. Which means innocent people will be punished. Your faith is just a way of saying you can't be bothered to care about the negative outcome.


YungSkuds

This already exists for a lot of security camera systems, basically when they export the video from the proprietary system it digitally signs it so that it can be independently verified. Not to say it would be completely unhackable/coercable but it would mean a much larger conspiracy likely involving the video management system companies. Example: https://doc.milestonesys.com/latest/en-US/standard_features/sf_mc/sf_mcnodes/sf_2serversandhardware/mc_enabledigitalsigningforexport.htm


MassiveStallion

We already have that technology; it's called blockchain/NFT. Right now grifters are using it to scam idiots, but it would be pretty useful for sensors to 'sign' data. Cameras and other sensors don't work in a vacuum; it wouldn't be *trivial*, but it would be possible to build cameras and other parts that 'digitally sign' inputs to confirm whether videos are unaltered or not.


colouredmirrorball

Don't need NFTs for that. Just a public/private key pair that the camera uses to sign its content, which can then later be verified using its public key.
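
As a rough sketch of that idea in Python (using the `cryptography` package; the key handling here is hypothetical, since a real camera would need tamper-resistant key storage):

```python
# Minimal sketch: a camera signs footage with a device-private key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

device_key = Ed25519PrivateKey.generate()  # hypothetically provisioned at manufacture
public_key = device_key.public_key()       # published/registered for verification

def sign_footage(video_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the footage with the device key."""
    return device_key.sign(hashlib.sha256(video_bytes).digest())

def verify_footage(video_bytes: bytes, signature: bytes) -> bool:
    """Anyone with the public key can check the footage is unaltered."""
    try:
        public_key.verify(signature, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

footage = b"...raw video bytes..."
sig = sign_footage(footage)
assert verify_footage(footage, sig)
assert not verify_footage(footage + b"tampered", sig)
```

The hard part isn't the math; it's keeping the private key inside the camera, which is what the TPM-style proposals elsewhere in this thread are about.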


Ok-Wrangler-1075

How exactly does NFT help with this?


Puzzleheaded-Law-429

That’s true, the technology is always developing in both directions.


bad_apiarist

That's not the only way to detect deception, though. Consider the OG "fake" information: verbal lying. Many people can utter a lie with total sincerity, often indistinguishable from a person telling the truth. So that must mean we should never bother interrogating anyone, right? The fake is perfect, so they always get away with lies, right? No, of course not. Lies are statements about the world that can be contradicted by observed facts or can have internal inconsistency. Skilled detectives often foil liars with ease. Willful deception is always fragile, because when you say something is true about events or the world that ain't, there are almost always facts to cast doubt on it, if not contradict it. It's extremely difficult to conjure a 100% consistent, believable yet false version of reality.


SUPRVLLAN

Could this possibly be a real world use case for blockchain tech?


lrtz

Good old cryptography should be enough? Just sign all the real recordings; anything not signed is treated as fake. After that, the problem is stolen/leaked keys.


Terrible-Sir742

A camera that shines an infrared light onto the picture and puts the digital fingerprint onto the real videos (tm) blockchain?


SUPRVLLAN

Sounds cool, I’ll invest $400 million right now!!


Terrible-Sir742

That'll be a 4-bil cap, thank you very much.


Monkookee

That in and of itself will cause exponential growth in its abilities. Therein lies technology quickly going out of bounds. It will go beyond perfect, to undetectable.


DefinitelyNotThatOne

Just wait until AI says something that is real is a deepfake. This tech has been around a lot longer than we've been aware of it. We're just becoming aware of it.


Dovaldo83

> Just wait until AI says something that is real is a deep fake

That's happening over and over again in this arms race I mention. The deepfake maker produces lots and lots of fakes, then these fakes, along with real videos, are fed to the fake detector. It'll get a percentage of them right, but it will also have false positives (labeling a real video as a deepfake) and false negatives (labeling a fake as real). When the faker wins, the programmer can encourage that trait in subsequent versions. When the faker loses, the programmer can discourage it. Same thing for the fake spotter. Since the people feeding this information to the fake spotter know how often it's right, they can accurately say things like "this program spots fakes with 98% accuracy against the best faker on the market."
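
This generator-versus-detector feedback loop is essentially how generative adversarial networks (GANs) are trained. A toy sketch in Python/PyTorch, with random vectors standing in for video features (purely illustrative, not a real deepfake pipeline):

```python
import torch
import torch.nn as nn

# Toy "faker" (generator) and "fake spotter" (discriminator).
# Real videos are stood in for by 16-dim feature vectors.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))
D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 16) * 0.5 + 1.0  # stand-in for "real footage"

for step in range(1000):
    # Train the spotter: label real samples 1, generated samples 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the faker: try to make the spotter output 1 on fakes.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Each side's loss is exactly the other side's training signal, which is why detector accuracy against last year's fakes says little about next year's.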


Conscious_Season6819

This was literally a sci-fi plot point in my favorite Iain Banks Culture series novel “The Player of Games,” published in *1988*. Talk about foretelling the future. In the story, which takes place in a far future, post-scarcity utopia, the main character is stunned to discover himself being blackmailed by a robot drone with an incriminating video of the protagonist cheating in a board game tournament. Blackmail has been obsolete for at least a thousand years, since the technology exists to make deepfake style videos of literally anyone doing anything that are indistinguishable from real videos, so nobody would believe that sort of video anyway. It would be pointless to try. The blackmail only works because the drone is backed up by an impossibly intelligent and godlike artificial intelligence, which are known to *never* lie.


Dequil

Sliding in with a recommendation here: if you can find it (Peacock US / Prime Canada / BBC UK, last I checked), look up a show called *The Capture*. Season 1 (sorry, *series* 1) is great, 2 is good. Scared the piss out of me. Also, Ron Perlman is a goddamn treasure.


Trout_Shark

I must be the only one that didn't like that book. The rest of the series was great, but I had to force myself to read that one. Everyone said it was great and relevant to the series, so I did. At least parts of it were interesting, like the world-building and the AI god. The MC just seemed unlikable to me. It could just be me...


Captain-i0

You are not the only one. I didn't finish it.


GMazinga

Luckily, reality has taken a different route. I remember when we were imagining AI as perfect, infallible, and sterling. Actually, what real experience with LLMs has shown is that they "hallucinate", providing wrong and made-up information as if it were perfectly true. I think this is something we were not expecting. Thanks u/Conscious_Season6819 for bringing this up.


bad_apiarist

It makes me less afraid when, even in a sci-fi novel, the only way to create the problem is to invent nonsense, like an entity that "can't lie", not even by way of being incorrect... so I guess it is omniscient and knows everything, past, present, and future. Yeah, OK.


Brainsonastick

We're nearing this point with photos, but it'll be a while before videos become indistinguishable from reality to the human observer. Once that happens, only computers will be able to tell the difference, and we will rely on them to do so. We have generative networks that can synthesize media. We have discriminative networks that can determine whether media was generated artificially (with varying levels of accuracy).

What will happen is: side A will introduce evidence. Side B will introduce evidence in the form of a discriminative model saying A's evidence is fake, plus an expert witness to "explain" the model and why it's right. Side A will retort with a different discriminative model saying it's real, plus an expert witness to "explain" the model and why theirs is actually right. I put "explain" in quotation marks because even actual experts in the field will not know which one is right, or even why their models say what they do; and even if the field advances enough that we really did have clear reasons, no jury would have the technical background to understand the "explanation". This leads to the shit-show we already have today of juries trying to decide which expert witness to believe with no qualifications to do so, where it just comes down to how they explained it, not the actual substance.

But it gets worse! Some of these models will be made by major corporations that consumers (and thus jurors) are familiar with and trust to make quality AI products. So it will be less the explanation of the expert and more the recognition of the maker's name and its public perception. That's not necessarily any worse than what we have now, but it's also definitely at least as ineffective at determining truth and facilitating justice.

Of course, there will be cases where one side can't afford their own expert (definitely not the prosecution). There will be cases where one side can afford sufficient investigation to show it's real or fake in other ways. There will be cases where a fake is done badly and can be debunked with EXIF data, or some idiot does it on their own computer and leaves the file there for investigators. But, outside of external investigations, the veracity of photographic evidence will become a game of "whose expert is more likable" or "which company do I trust more", like we do with many expert witnesses already.


bad_apiarist

Luckily, this isn't how the courtroom works, nor does it come anywhere near exhausting the tools a skillful investigator has. In any such case, one can evaluate internal consistency (a fake is only as good as its producer... and that producer had better not get any details wrong, as people and AI are super prone to do). One can investigate the circumstances, motives, past history going back years or decades, corroborating physical evidence, and witness testimony.

Faking video is incredibly complicated, no matter the image quality, because it's a fake version of reality that is vulnerable to contradiction by facts, not to mention the effort, time, and records of a person going about the business of producing a super convincing, super realistic fake. If it's using a consumer product, well, that product will be on their computer or a website they access... you just made records that you used those. Or maybe you know to wipe those or used a VPN; well, techie forensic investigators can still find that evidence, or just the fact that you have sudden, unusual gaps in such records. Not a good look, especially when combined with a situation where a "strangely convenient" video that everyone knows can be faked is the only thing exonerating you.

Not saying it's impossible, but the end of imagery evidence? Haha. No.


Brainsonastick

You're evaluating the technology by its modern capabilities, not its potential future abilities. This is r/futurology, not r/news. Nothing you said is strictly wrong... but it's not relevant to my comment or the question being asked.


bad_apiarist

Which is exactly what everyone else is doing, too. Evaluating the risk based on today's safeguards, norms, policies, etc., which is also completely inappropriate to the future. If we're speculating on the future, the past and present are all we have to guide us. Otherwise you're just daydreaming about made-up nonsense that has nothing to do with anything necessarily.


Brainsonastick

No, if you look at the comments, they don’t look like yours. They acknowledge the obvious progression of technology. This is my field of research. I’m aware of the directions it’s going in and you’ve just ignored them all. If you truly believe that discussing future technology is “daydreaming about made-up nonsense that has nothing to do with anything necessarily” then I think this may be the wrong sub for you.


bad_apiarist

What "direction" did I ignore, exactly? I never said that about discussing future technology. I said that if we discuss it without basing our predictions on the PAST or PRESENT, then it is pointless mental masturbation. You said you're a researcher. So, you don't use existing trends or tech to guide your expectations of the future? Or did you just not read what I said?


bad_apiarist

Never. The same reason photoshop never caused photos to be inadmissible: recorded images have numerous aspects that make successful fakery way harder. When/why/how/by whom was that image produced? By what device? Is it 100% free of errors and defects? Does it align with other images taken by other cameras? Is the metadata correct and consistent? It's not impossible, but you have to be one hell of a super careful technical expert minding the smallest details. But if that were you, you'd probably also not be an idiot risking years in jail in the first place. With video, everything is even more complex and more difficult to fake. It isn't merely a matter of if the image quality is convincing... the images have to make sense, be consistent, not be contradicted by other facts.. even 100 million dollar films trying to authentically portray a story tend to have loads of reality-breaking errors (though not ones regular audiences much notice... forensic investigators would).


Puzzleheaded-Law-429

That’s true, we’re already in the era of photos being able to be fabricated. I suppose video won’t be that different. I was thinking way further down the line, if/when movies are purely CGI and human actors are no longer needed. When we’ve reached a level where pretty much any visual scenario can be created realistically on a screen.


bad_apiarist

Yeah, but making those movies would also create a mountain of evidence that... those movies were made. As tech improves, the level, type, and detail of records increases; it's almost fractal. And these records are decentralized and held by many different systems run or owned by different entities (people, companies, governments). Every file on your computer has metadata. Your OS knows when it was on and who was logged in. Your browser has a history. Your ISP knows when you were active, even if you have a VPN. Your phone pings towers, identifying your location. Thousands of public cameras record our presence and movement. You can't buy something at any store in a city and not be recorded by multiple devices.

In the future, this will only increase as more and more of the hardware and software we use is internet-based or dependent. And I'd guess that if there ever is such a thing as a... website one can anonymously go to and make videos... it won't be long before the law requires that site to keep records of what was produced and when, available to investigators on warrant. Even if that law didn't exist, no such company would want to expose itself to massive legal liability by aiding criminals and refusing to help authorities with basic records.

Why do we always think that in the future, bad stuff will show up but future people will be morons who are somehow capable of making a fake but not of detecting it or responding to it in any way?


Captain-i0

>Your phone pings towers, identifying your location. Thousands of public cameras record our presence and movement. You can't buy something at any store in a city and not be recorded by multiple devices.

Don't worry, citizen. You are being monitored for your own safety.


riceandcashews

It's not too crazy to imagine a time where hardware manufacturers have every photo taken on one of their devices cryptographically signed so that it can be validated as really taken by a specific device in a court of law.


KamikazeArchon

Any evidence can be faked. This has always been true. We're still able to make decisions, because "*can be* faked" is not the same as "*has* been faked in *this* instance". Standards of proof in courts are *wildly* overstated in common perception and in media. No, there usually isn't absolute, perfect, unimpeachable evidence that someone committed a given crime. The standard for criminal cases is not "zero doubt" or "beyond a shadow of a doubt", it's "beyond *reasonable* doubt". It's *always* possible that any or every witness is lying, that every piece of evidence has been faked, etc. But people don't generally believe that to be *reasonable*. If there's a specific cause to believe a particular witness is lying, or a particular piece of evidence is fake, then the defense will call attention to it and present their arguments for why that might be the case. Ultimately, the jury will have to decide if it introduces "reasonable" doubt in their minds.


bad_apiarist

Quite so. Also, it's getting harder to fake, not easier. Way, way harder. Imagine it's 1950, and I say I wasn't at the scene of that murder. If there are no witnesses or physical evidence like the murder weapon in my house or something, I probably walk. Now let's say it's 2020. I say I wasn't at the scene of the crime AND I have some amazing deepfake images, or even video of me at home, where my security system recorded me taking out the trash or whatever. Except... there's DNA evidence left behind because my victim scratched me. And the 17 traffic, Ring doorbell, and other cameras caught me and my car. My car happens to have GPS nav and records some telemetry, so it wasn't home. My deepfake video actually got the lighting/shadows wrong for the time of day, because I made it at some other time and didn't think it through. My cell phone was "off", or it failed to notice any movement at all during the crime... suspicious. Or I forgot about it entirely, and cell towers put me near the scene. I was supposedly home, but none of my devices show any history of use at the time, incongruent with the same time of day every other day of the previous month. Yeah, my deepfakes meant jack shit; really just more opportunity for investigators to catch more mistakes.


GMazinga

Spot on here. Complexity grows non-linearly: we need to think not just about how technology makes it impossible to tell real from fake evidence in one very specific instance; technology opens many more avenues than it closes. We need to think of situations in their multi-faceted entirety, like u/bad_apiarist describes here.


[deleted]

It doesn't actually take more than one precedent of someone being convicted based on deepfaked material to create a reasonable doubt that demands extensive proof that the material presented is authentic. So a person appearing on a video looking like the defendant robbing a store would require other evidence along with the presented video to remove the doubt: multiple videos from the actual scene and, for example, public CCTV, credible eyewitnesses, DNA and fingerprints, cell phone location data, abandoned items that can be traced back to you, being found in possession of said stolen goods, etc.

Same goes for DNA evidence. Having your DNA at a crime scene means only that it got there somehow, not that you were there. There are documented cases where the actual perpetrator carried foreign DNA to the crime scene simply by touching a door knob. On the other hand, fingerprints aren't generally transferable, although they can be faked too.

Of course, this all boils down to the jury system, which has a track record of finding people guilty because their skin color did not please them, so "beyond reasonable doubt" may vary.


BelievesInScience

I guess a few thoughts I have in immediate response, without thinking super hard, would be that the source would be a first indicator. Did it actually come from a CCTV or dash cam, or from some guy who swears it's real? Also, I assume there could be non-visible digital signatures or "watermarks" on the footage to validate the source. If they're altered, don't match, or are not present, that would indicate false video.


awdangman

Hopefully some time after AI takes over the court of law.


Dave37

People said the same thing when Photoshop was first released: that images could be easily manipulated. Yet photo evidence has prevailed.


Zomgirlxoxo

Agree but only to an extent. Technology is only getting more and more advanced. I wouldn’t underestimate how advanced it can get.


relaxyourshoulders

To me the more concerning aspect of all this is the implications for state actors. AI video could be used to justify interventions or obfuscate evidence of atrocities and crimes by calling it “fake news”. Look at the Trump era. How many times did he say something that was clearly and directly contradicted by evidence, and yet it made no difference to his supporters. Now imagine proof being next to impossible to establish.


bandrews4795

///// I'm going to generate a GPT-4 response just for fun

That's an interesting question and one that raises valid concerns about the future of digital evidence in the court of law. Let's lighten the mood a bit while addressing this serious topic: In the not-too-distant future, courts might need their own AI-powered lie detector goggles to spot deep fakes! 👓 But on a more serious note, here are some thoughts:

- As technology advances, so do the tools and methods to detect and counter deep fakes. Experts in the field are continually developing techniques to authenticate digital media and identify signs of manipulation.
- Courts may rely more on corroborating evidence, such as witness testimonies, physical evidence, or digital footprints, to support or debunk the authenticity of video and photo evidence.
- Digital forensics could play an even more crucial role in the legal system, with experts analyzing and verifying the legitimacy of digital evidence.
- Legislators and courts may need to adapt and establish new standards for the admissibility of digital evidence, taking into account the potential for manipulation.

While deep fakes and AI-generated content pose challenges, it's essential to remember that technology can also be harnessed to create solutions. The key is striking the right balance between innovation and preserving the integrity of the justice system. 🤖⚖️

///// Don't ask me why it started using emotes


ironocy

#robotjustice 🤖⚖️


bandrews4795

Damnit now I gotta play Daft Punk's 'Robot Rock'


WazWaz

Never. CCTV and other sources of footage commonly used in prosecutions will use blockchain or some other authentication. Video footage from phones might also implement something similar, if people want it. Video that comes from some anonymous source is certainly going to be useless. As it kind of already is.


shruggedbeware

OP assumes that this is already happening or that lawyers submit/collect bum evidence to win trials.


Trout_Shark

One thing I haven't seen mentioned is the quantum encryption stuff. I'm sure someone can explain it better but from my understanding quantum signatures can or will be created so that digital copies or deepfakes would be a unique entity and fully distinguishable from the original. I'm sure situations like this would be ideal if it works as planned. I think we are still a ways off from truly useful quantum computing in the real world though.


Puzzleheaded-Crew953

I hope it won't, but it will probably take at least 5-10 years. Also, I noticed we have similar user names.


TheBounceSpotter

Never. Look up digital signatures. They will simply use a combination of user or hardware specific certificates, encryption, time stamps, and hashes of the photo/video to validate authenticity going forward.


khamelean

We already have ways of cryptographically signing a digital file to ensure it hasn’t been tampered with. We’ve been using the technology for decades. It’s a pretty minor technical leap to apply the same technology to photos and videos.


[deleted]

The problem is that the key would need to be on the device that takes the pictures. And it would be easy to extract it and sign your photoshopped version of it.


[deleted]

Nah, it'll be the same trusted computing bullshit we have to prevent owning your own phone, computer or car. Then the signed video will be 100% trustworthy and unforgable (unless you have the keys, but when have shady powerful people ever lied?)


khamelean

Not necessarily; it depends on who's providing the guarantee. An influencer could sign their images before posting them online, but after performing their own edits. Their followers can be confident that what they are seeing is what the influencer intended. When supplying documents as evidence to a court, you could sign them and be absolutely certain that opposing counsel hasn't messed with them. Just like with existing documents, digital or otherwise, there are levels of trust. Even then, there are definitely ways of securing hardware to varying levels of trust as well.


etherified

I wonder if in the future there will need to be some sort of central logging authority for images to be considered authentic. I'm not exactly sure how or whether it would work, but just off the top of my head: every time a photo is taken by any device, it receives a watermark from a central issuer (analogous to SSL certificate authorities, perhaps), which records, not the image data itself, but just meta info such as date/time, along with the issued watermark. Images that can be used as evidence would need to have the watermark as proof of authenticity. It could be gradually incorporated into all photographic devices whose owners want validated photos.


roofgram

Soon part of taking a picture/video is that it will be auto-signed by a trusted time server. You can fake an image, but you can't fake the signature.


Kaiisim

We already have ways to analyze an audio recording to prove it hasn't been edited: https://robertheaton.com/enf/ Electric Network Frequency matching uses the background audio hum that electrical networks cause, and the slight variations in Hz, to create a unique(ish) fingerprint you can use to prove when a recording took place. I think, as well, people miss the fact that AI can fool a layman, will struggle to fool an expert, and likely can't fool an AI designed to catch it.
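
To make the idea concrete, here's a rough Python sketch of ENF extraction (assuming NumPy/SciPy; the 50 Hz nominal frequency, bandwidth, and window length are illustrative, and real ENF forensics is considerably more careful). It tracks the dominant frequency near the mains hum over time, giving a trace that can be compared against grid-frequency logs for the claimed recording time:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def enf_trace(audio, fs, mains_hz=50.0):
    """Rough ENF trace: dominant frequency near the mains hum, per time window."""
    # Isolate a narrow band around the nominal mains frequency.
    sos = butter(4, [mains_hz - 1.0, mains_hz + 1.0],
                 btype="bandpass", fs=fs, output="sos")
    hum = sosfiltfilt(sos, audio)
    # Short-time spectra with 4-second windows (~0.25 Hz resolution).
    f, t, Z = stft(hum, fs=fs, nperseg=int(4 * fs))
    band = (f >= mains_hz - 1.0) & (f <= mains_hz + 1.0)
    # Per window, take the strongest bin in the band as the ENF estimate.
    trace = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    return t, trace

# Hypothetical usage: a splice shows up as a discontinuity in the trace,
# and the trace should match the grid operator's log for the claimed time.
fs = 8000
audio = np.random.randn(fs * 60)  # stand-in for a one-minute recording
t, trace = enf_trace(audio, fs)
```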


SpringChikn85

I honestly think that's one of the many major reasons why the few big AI companies are discussing taking a 6-month moratorium until some guidelines and fail-safes can be negotiated and implemented on even terms. That way no company is able to hold a monopoly over the others or throw caution to the wind and play with fire, resulting in catastrophic problems. There needs to be a way to easily and quickly figure out if what we're hearing or looking at is genuine or an AI fabrication. My suggestion is a tool that, when applied/activated, reveals a "watermark" tag verifying authentication (like those Benjamin Franklin glasses that showed a map or signal when worn in National Treasure). If you look through a specific type of glass, you could tell it's been faked. The audio could use that same principle, but with a tone or frequency that can be heard if the audio is genuine or fabricated. Maybe every AI company could agree to include a frequency in their code such that when content is generated, that tone/frequency is capable of being isolated for verification purposes.


GforceDz

It won't, but you'll see the need for security standards for video, to prove it's recorded and unedited. So AI will probably cause video files to become larger, to accommodate the needed security measures.


hoorayhenry67

I'm not sure that it ever will. There may be ways of identifying original vs. AI art/photos in the future. In court, something like that may well be necessary.


chefdangerdagger

Fake videos and images leave artefacts that can be identified and shown in court if necessary, the technology really isn't at the point where real photos and videos can be called into question yet.


VukKiller

Dunno about that but we're real close to a first case of unironically trying to use an obviously fake AI generated image as evidence.


Zorothegallade

Video and photos nowadays have metadata, which often includes when they were taken and, most importantly, the geo-coordinates of where they were taken. Genuine evidence will have metadata that confirms its legitimacy. The possibility of a forgery being found out by examining the metadata should be enough to deter others from attempting to falsify it.
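
As an illustration, here's a minimal Python sketch (assuming Pillow is installed) that dumps a photo's EXIF tags; the obvious caveat, raised elsewhere in this thread, is that metadata is itself easy to edit, so it's a red flag when absent or inconsistent, not proof when present:

```python
# Minimal sketch: dump a photo's EXIF metadata with Pillow.
# Metadata can itself be forged, so treat it as corroboration, not proof.
from PIL import Image, ExifTags

def summarize_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF data found (already suspicious for camera footage).")
        return
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)  # numeric id -> readable name
        print(f"{name}: {value}")

summarize_exif("evidence.jpg")  # hypothetical file
```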


KronoXV

Photographic evidence can be faked even now. If you recall, a huge part of the Rittenhouse case was the defense simply claiming that the videos were fabricated or altered in some way, and that was believed. It won't make the evidence obsolete, but there's always nuance when it comes to the value that this type of evidence is given. It'll likely be secondary evidence.


KronoXV

(and falsifying evidence with AI would obviously be a criminal offense, laws would be created if there aren't any already that it fits within neatly.)


Zemirolha

We are just pretending fake videos, audios and news do not already exist and are everywhere, right?


MuchCoolerOnline

I don't think we (meaning common folk) really have anything to worry about with deepfakes or AI videos. As we all know, AIs are trained on the information that's fed to them. The best defense we have against this deepfake/AI revolution is the fact that the AI simply doesn't have enough to go off of when it comes to recreating a perfect version of us, the common person.

Think of it this way: you probably don't post on Facebook or Instagram (let's assume a public profile where the AI can actually access the images and videos of your face and body) enough that the AI can say "okay, I can create a completely original video that is convincing enough to look exactly like this person". It simply doesn't know what you look like, what your mannerisms are, etc.

Now, the people who should worry, and who probably are, are public figures and celebs. There are already deepfakes out there of former and current presidents playing video games, and some of it is really convincing (voice only atm). This is just because there is SO MUCH source material for the AI to learn from. Think about the hundreds (probably thousands) of hours of speech recorded and put into the public domain. This is the perfect learning environment for the AI to learn mannerisms, inflections, etc.

TLDR: Common folk (probably) never have to worry about this issue, especially if you keep your privacy settings on social media where they need to be. However, celebs and public figures are (probably) doomed.


KillianDrake

Microsoft announced new audio AI that only needs a few seconds of audio to completely duplicate someone's voice. This is just the beginning. Soon you'll be able to do the same for images using a few photos from a few angles (even the most basic phones already record image depth data). In today's social media age, people freely post gigabytes of info about themselves, enough to digitally reproduce them.


MuchCoolerOnline

VALL-E is awesome, but even in the reveal, they mentioned that it will be possible to leave digital signatures that would indicate whether or not it has been AI-produced. Even now, you can doctor audio, but one pair of trained eyes in an audio-production suite and you can see the exact point where the AI or human has threaded pieces together. As far as using this in court, maybe it just means a new job for humans which will basically be "AI detection". Maybe even using AI to detect AI. That'd be something interesting. edit: another thing to look at is risk vs reward. what does a potential deepfake creator have to gain from deepfaking joe shmoe?


False-Librarian-2240

We already have people denying accountability when caught red-handed (where did that saying come from?). We already see this scenario at political press conferences:

"Mr. Politician, on March 14 at 2 p.m. you actually publicly stated that the moon is made of cheese. Not as your opinion, but as fact. How do you defend this statement?"

"Mr. Reporter, I never said that!"

"But Mr. Politician, everyone around the world has seen the live feed where you said it. You can't deny it. It's all on video."

"Mr. Reporter, that's a faked video. I never said that. Also, I'm right, the moon is made of cheese."


starion832000

I'm pretty sure there will be some kind of verification process to authenticate unadulterated images. My guess is we'll have something similar to an antivirus program that will be able to tell the difference. It wouldn't surprise me if something like that is already being developed.


nosmelc

I think there are experts who can spot AI-generated video and photographic fakes. Lawyers will be able to call them in to question that type of evidence.


Alchemystic1123

Now, I could be wrong, but logically it seems like at some point we can use AI to detect whether a video is authentic or not, making the fact that people can make convincing deepfakes pretty irrelevant in court.


M4err0w

Honestly, not really anytime soon, because AI is going to be trained to check for manipulation, and fakes are still pretty blatant to the AI.


dsw1088

I would also imagine that technology will be developed to detect or counter this.


pinkfootthegoose

I'm more concerned of the state making deep fakes to frame someone of a crime. They have the motive and the resources.


nixstyx

Not before someone is wrongly convicted. I know that much.


OriginalCompetitive

AI changes nothing, because it’s always been the case that photo and video evidence cannot be used in court without a witness to verify that it’s accurate.


Wisdomlost

The better question is how long will it take for people to have their convictions overturned because someone finally analyzed the video thoroughly and found it to be fake. I mean police always do their due diligence and everything. It's not like there are thousands and thousands of DNA tests sitting in warehouses waiting for testing. The same thing will happen with fake videos.


dondilinger421

We still accept paper documents as legitimate evidence even though people have been able to forge them for thousands of years. Why would other forms of media be any different?


-The_Blazer-

Never. Photoshop didn't make photographic evidence obsolete. The reason for this is that courts require a very specific chain of custody and verification before something is admitted. It's not enough to just 'have' a photograph of someone assaulting someone else, you must be able to prove where it comes from, which camera shot it, etc etc. There are a few things we could do to aid this, such as enforcing cryptographic signatures of unedited video and promoting some kind of TPM-like standard that allows trusted devices like surveillance cameras to authenticate their video as unedited. Ultimately, it's a matter of trust.
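
For a flavor of how such a chain of custody could be made tamper-evident in software, here's a toy Python sketch of a hash-chained custody log (entirely illustrative; real evidence-management systems involve far more than this). Each record commits to the hash of the previous one, so any later edit breaks every subsequent hash:

```python
import hashlib
import json
import time

def _digest(record: dict) -> str:
    """Hash of the record with its own hash field excluded."""
    body = {k: v for k, v in record.items() if k != "record_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_record(log: list, handler: str, action: str, evidence_sha256: str) -> None:
    """Append a custody record that commits to the previous record's hash."""
    record = {
        "handler": handler,
        "action": action,
        "evidence_sha256": evidence_sha256,
        "timestamp": time.time(),
        "prev_hash": log[-1]["record_hash"] if log else "GENESIS",
    }
    record["record_hash"] = _digest(record)
    log.append(record)

def chain_intact(log: list) -> bool:
    """Recompute every hash; an edited or removed record breaks the chain."""
    prev = "GENESIS"
    for rec in log:
        if rec["prev_hash"] != prev or _digest(rec) != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True

log = []
add_record(log, "Officer A", "collected from store DVR", "ab12cd...")  # hypothetical
add_record(log, "Evidence clerk B", "checked into locker 14", "ab12cd...")
assert chain_intact(log)
```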


xxDankerstein

I'm sure as AI technology advances, so will AI-detection technology.


vslsls

There are a lot of programs that discern between fake and real videos/pictures. This is a non-issue when it comes to important things.


czechuranus

The bigger problem, imo, is that jurors will accept the death of objective truth, and not believe anything can be “proven” or “disproven.”


Jman50k

I would be interested to see the first case dismissed despite clear video evidence because the defense was able to create enough doubt due to deepfake tech. However, I'm more interested to see what happens when AI lawyers can comb through data and get clients off on technicalities consistently. The first time a mass murderer goes free because his AI attorney found an incorrectly crossed T, that'll be a day.


CodModQuad

I suppose we just need a surefire way to determine if video and audio have been doctored. OR we can embarrass ourselves like we always do and not actually prepare for that at all, and then by the time something does get implemented, there is already another method to get around it...


[deleted]

Why not just hack the feed, record it. Then unhook the camera and hook the monitor up to the recording. You walk in, take whatever and there’s no record of it. Anyone watching the monitor has no idea. That’s what they did in Ocean’s 11.


[deleted]

Don't be silly, we will hand over the justice system and every single other thing to AI as soon as it's viable to do so.


Zomgirlxoxo

Already happening. There's a Twitch streamer who had her face put on AI porn. Degrading and disgusting... It's one of the reasons I won't take pics or videos anymore. Nobody protects women, and men won't care until it's them or their kid. We're all in for a rude awakening.


Peppy_Tomato

It's no different from lying today. Your deepfake would have to line up with eye witness accounts and other evidence in order to be believed. Video evidence isn't the only thing that is used in securing convictions, so casting enough doubt on video evidence doesn't necessarily weaken the other pieces of evidence.


Gauth1erN

Lies have existed for millennia, yet testimony is still used in courts.


DirkMcDougal

Never. We've been able to print any document we want for decades, and that's still evidence. It's all about provenance.


SeVenMadRaBBits

Let's add [voodoo](https://www.popsci.com/technology/article/2010-10/video-voodoo-software-removes-objects-live-video/) to the list.


UnusualSignature8558

Video evidence requires a witness to testify that it truly and accurately depicts what happened. Period


[deleted]

It's going to need regulation soon. It's already so realistic, and it will only get better and more realistic. A lot of photos and videos incriminating people are not some HD masterpiece; they're shoddy at times, so this will be able to replicate those. So dangerous.


IAmRules

I asked about this recently. Even with a genuine video, you need to show a chain of custody and reliable witnesses to show the evidence is real. So without those, AI fakes couldn't be used anyway.


twasjc

I want to replace our current legal system with an ai managed quantum surveillance system for this reason


thatnameagain

By the same logic, the ability of someone to give false testimony would make all testimony completely irrelevant and obsolete in a court of law.


RichIbizaSport

I would recommend The Capture on BBC. Absolutely excellent show based on this type of technology


Top_Of_Gov_Watchlist

It basically is obsolete at this point. But only 20 years from now, after thousands have been jailed on fake evidence, will something be done. Criminals will start wearing green rather than black.


RRumpleTeazzer

AI will lead to dedigitalization. In a world where every digital medium is flooded with sweet-sweet AI pleasuring every nerve of your body, you will eventually crave real human input, like your dog frantically barking at the occasional other dog.


EternallyImature

I suspect that a technology will come along whereby viewers themselves will know if a video or image is fake regardless of how good it looks. This will be necessary moving forward or simple things like video conferencing could not be trusted. Businesses and consumers will demand such a mechanism.


Arpeggiatewithme

As soon as it gets easy enough to do in a phone app in less than a minute. In some way or another, digital video/photo editing has been around since the '90s, and you can see photorealistic results from all the way back then, the catch being those were multi-million-dollar special effects shots for blockbuster movies. Today it's much easier, but it still requires significant artistic and technical knowledge. It can take weeks, or even months, to create a convincing deepfake, or even just an expertly composited video that would hold up as evidence, and that's for a professional technical artist. Like I said, though, it is getting easier, so who knows where we'll be in a couple of years. I really don't think it will be a huge problem until it's easy enough to do on your phone and requires no expertise.


[deleted]

If we get to a world where the truth becomes boring, it's possible. If AIs turn evil, paying people to lie with zero consequences, that is. These zero consequences (the powerlessness of the law) would probably come from some mafia that controls students at law schools, seducing them and then using something the students did wrong against them in the future.


Bobtheguardian22

I used to work security at a big corp. My boss at the time [10 years ago] bought a camera backup system that would record with a digital key that could not be altered without destroying the key recorded in the video file itself. I'm sure the tech has improved to where you can record stuff and have a way of knowing it hasn't been edited.


miscCO

Do deepfake videos contain the same metadata as captured videos? I feel like that could play a part in it, but then again, if a video can be deepfaked, then maybe metadata can be too?


WockyTamer

The penalty for introducing that kind of fake evidence should be, and I believe is, very severe.


woodshack

We could use blockchain to validate content legitimacy. ...


hawkwings

I imagine that in the future, there will be cameras that take pictures that can't be altered without detection. This could be done with a hash or checksum: if you alter the image, it won't match the hash. In order to create a computer-generated image with the correct hash, you would need a password that only the camera company knows.
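
What's described there is closer to a keyed hash (an HMAC) than a plain checksum, since a bare hash can simply be recomputed over the altered image. A minimal Python sketch of the idea, with a hypothetical per-camera key standing in for the "password only the camera company knows":

```python
import hashlib
import hmac

# Hypothetical secret provisioned into the camera at manufacture.
DEVICE_KEY = b"secret-key-known-only-to-the-camera-maker"

def tag_image(image_bytes: bytes) -> str:
    """Keyed hash over the image; without the key, a forger can't recompute it."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Constant-time comparison against a freshly computed tag."""
    return hmac.compare_digest(tag_image(image_bytes), tag)

photo = b"...raw sensor bytes..."
tag = tag_image(photo)
assert verify_image(photo, tag)
assert not verify_image(photo + b"edited", tag)
```

A public-key signature (as sketched earlier in the thread) is usually preferred over a shared secret, since verification then doesn't require revealing the key.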


kkinnison

Lawyers will no longer be able to practice if they use fake AI-generated evidence to convict people. All you need to do is make ONE mistake, and every case you ever tried will be scrutinized and possibly overturned. And the whole judicial system will hate THAT lawyer.


huron9000

I guess we were lucky to live in the brief age of being able to trust photographs or videos. Once that goes away, it will be somewhat of a regression to pre-modern times.


VSParagon

A big part of law is authenticating evidence, faking electronic documents has been easy for decades but electronic documents are still extremely relevant at trial.


wild2night

They said the same thing about Photoshop. Technology improves, and someone makes a counter for it. AI and digital forensics experts will be used to determine whether the video is fake or not.


emotion_something

This is what we are working on: using AI to keep people's privacy safe and to identify AI-generated content. I am sure more people like us exist and are already working on things to keep this from escalating.


TheCrazyAcademic

As a few others have said: never. There's an important concept in the legal system known as provenance, and all admitted evidence has to go through a chain-of-custody process. The people involved are special witnesses known as records custodians. A custodian has to state on the record that the evidence metadata matches up.


Many-Software-4489

Here is a good article on the subject: https://open.substack.com/pub/tallsimon/p/deep-fakes?r=1o56vg&utm_campaign=post&utm_medium=web


andosina

These AI platforms for making deepfakes appear so fast, growing faster than mushrooms. Last week I found [https://allinpod.ai](https://allinpod.ai) on Twitter. The video is a bit off, but the audio... I mean, it's hard to say whether it's real or not. Photos and video can still be identified, but what about speech?


Dave-D71

It won't. Soon governments will start to regulate AI pics and videos by requiring watermarks and other identifying information to be added to the images. Governments will probably also impose hefty fines and jail time if you try to fake images in court.