TER General Board

We all hate overly edited pics
36363jensen 4 Reviews 718 reads
posted

Saw this kind of interesting article about spotting deepfake images.

 
While deepfakes are not often a problem here (that might change in the future, I suspect), I think some of the suggestions related to AI-generated images apply. Those who find they get fooled more than they'd like might find some pointers for the next ad pics they are looking at.

While we're not quite there yet, there is a time coming, and soon, where AI systems will not make mistakes while generating fully synthetic images. Concurrent to that, those who are trying to fake something are getting smarter about avoiding 'tells' in their images. Those two curves cross at a certain point, and then any hope of telling whether an image is real is essentially gone.

 
  On a slightly longer time scale, likely within 5 years or so, we'll also have fully synthetic AI video. That's very much where the fun begins, not just here, but everywhere. The world's been dealing with fake, photoshopped images for a while now, but video hasn't really been a problem, and it'll go from a non-issue to a huge problem quickly enough that the world won't be ready.

 
  On an even longer time scale, likely within 12 years or so, we'll be able to fully synthesize video in real time, which means not even face time will save you.

 
Where that leaves you is needing some way for a human to 'sign' something that's digital in a way that can be verified by anyone, and cannot be tampered with. That doesn't exist yet, but, the first guy to come up with an easy, accessible, way to do it is going to make a mint.

It doesn't solve the problem and merely adds an additional step, but what about "trusted authorities", by which I mean organizations that gain and then maintain a high level of accuracy and credibility?
.
E.g., "This is WTER Boston, your local verified news source. Our reporter, John Doe, attended this news conference and attests to the authenticity of the video we are about to show you. ..." "This is KTER Los Angeles, your local verified news source. It has been reported elsewhere that certain events took place at a news conference today. Our reporter, Mary Roe, attended that news conference and attests to the inaccuracies in the video being broadcast elsewhere that we believe may be due to AI. Here is her breakdown: At 1:23 the real Mayor had not yet arrived; what you see is an AI "Mayor" speaking at what was actually an empty lectern. At 3:45, ..."
.
It does open the door to false AI authentication of other AI, but organizations could get cumulative accuracy ratings on their AI / non-AI (real) assessments and a trustworthiness score. It would have nothing to do with the content ("The Coach lied about that player being injured." Not the point.) but only with AI vs. Real ("That was really the Coach giving that interview, not an AI Coach.")

Posted By: justsauce16
Re: Authenticity

You don't need to quote the parent message; TER replies are threaded, so save yourself the trouble. Also, a carriage return followed by another carriage return and a non-breaking space (accessed by holding Alt and typing 2-5-5 on the numpad) will give you the extra break you're after, sans period. I believe we've been over this previously.

  
 In terms of authentication, that much is fairly simple on its face. You're using it right now to access TER. Granted, TER uses a 3rd party, Cloudflare CCA, for that job, but there are a few ways you can provide that sort of thing without a trusted 3rd party, all unfortunately involving blockchain sidecar stuff, but 'good enough' I suppose. That just proves that the computer that sent you this site to view was the computer you requested, and not a malicious 3rd party trying to pretend.

 
 The problem you're seeing is one only kind of solved by crowdsourcing. Reviews can be faked, forum posts can be faked, online identities can be faked, so really, even TER is vulnerable to that sort of manipulation. TER, of course, has an incentive to not let that happen, but a news agency has the opposite incentive, so they cannot be authoritative. Where we're headed is a world where, if you didn't see it with your own eyes *and* hear it with your own ears, you cannot trust it. That's not tenable for our increasingly online society, so we do need a solution, and if you're operating like we are, against the will of the crown, you need a solution that doesn't involve the crown.

  We can do better though. Ultimately cryptography is math, and math is more trustworthy than any government; you just need to make that math more accessible, because right now you'd be calculating your own PGP/RSA/ECC keys, and then someone needs to verify them.
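Since the math in question is just modular arithmetic, a toy RSA-style sign/verify fits in a few lines of Python. To be clear, this is a sketch with absurdly tiny textbook primes, purely to show the shape of the thing, not anything you'd actually use:

```python
import hashlib

# Toy RSA parameters -- tiny textbook primes, for illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient
e = 17                     # public exponent (coprime with phi)
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def sign(message: bytes) -> int:
    # Hash the message, reduce mod n (only because n is tiny here),
    # then raise to the private exponent -- that's the signature.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding only the public key (n, e) can check it.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"this photo came from me"
sig = sign(msg)
print(verify(msg, sig))   # True
```

Real schemes do the same dance with 2048-bit-plus keys and proper padding; as said above, the hard part isn't this math, it's making it accessible and getting the keys verified.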

I guess your advice to people today is, what exactly? Throw in the towel and just ignore the pics, or believe them and hope that's a good belief?

 
My post was about what people can do today rather than what they won't be able to do in the future. Seems a bit more helpful, but maybe I'm just weird like that.

 
Seems like PKI technology already exists to solve the problem you describe and simply needs to be applied to this setting. The camera takes a picture and puts a cryptogram in the image that is encrypted with a public key. Anyone wanting to verify the image just uploads it (or its URL) to some site that either has access to the private key or forwards the request on to the image owner's equipment. Then, for example, the cryptogram gets opened and the original check-sum value/MD/hash is compared with the one calculated from the image being verified. Sure, that's the 30,000-foot view, but the solutions you suggest we need are already in place in much of our day-to-day modern life. But is there that big a need for that type of solution -- are fake images that big a problem? Not yet, and I agree we're heading there, but finding the solution is not going to be hard. Implementation is a bit more difficult (it will take some form of herding the cats toward an industry standard), but it doesn't seem to be a technical problem.
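As a rough sketch of that compare-the-checksums step (leaving out the encryption and key handling entirely, and using made-up bytes in place of a real image):

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # The "check-sum value/MD/Hash" step: any change to the bytes
    # produces a completely different digest.
    return hashlib.sha256(image_bytes).hexdigest()

# "Camera" side: compute the digest at capture time. (In the scheme
# described above it would then be sealed into the image as a cryptogram;
# that part is omitted here.)
original = b"pretend these are the raw bytes off the sensor"
stored_digest = fingerprint(original)

# Verifier side: recompute the digest of the candidate image and compare.
def is_unmodified(candidate: bytes, digest: str) -> bool:
    return fingerprint(candidate) == digest

print(is_unmodified(original, stored_digest))                      # True
print(is_unmodified(original + b"one extra byte", stored_digest))  # False
```

This only proves byte-for-byte identity with whatever was originally hashed; as the replies below get into, that's both its strength and its weakness.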

BTW, seems like some people are already working on the general problem of alteration.

Alteration is a pretty easy thing to solve. Synthesis is much, much harder, maybe impossible.  

 
  The solution isn't to throw in the towel, it's to, at bare minimum, understand what's happening around you. Most of the dudes on TER aren't particularly tech savvy, so, I often try to provide a crumb of awareness to help them out. Practically what you need is a reasonable suspicion, and you can't be reasonable if you don't know what you're up against.  

 
 In terms of need, it's fairly clear to me that there is, in fact, a need, given that 3rd party verification systems already exist. They're universally shit, but, they appear to be popular enough to survive, so, stands to reason that if you can remove the need for a user to trust them, and instead trust math, you're able to provide a net-gain.

 
  Like I said though, the problem isn't tech; the tech is, at very least, plausible. The problem is the UX, and figuring out how to host it without getting a visit from the free candy van. P2P is mostly ready to go at this point; we have pseudo-P2P verification already on sites like HX, but that still relies on a trusted 3rd party, and users vouching directly for other users is tenuous from a privacy perspective. What you need is something more akin to ring signatures, where an authenticity token is exchanged from provider to customer in an untraceable manner, such that your 'trust network' is protected from itself. Then you have to convince the ladies to use it; get traction with them and you're basically set to take over. I have no plans to do this, but it's a fun thought experiment I mull around with sometimes.

 
  Also, the issue with cryptograms (aka steganography) is that they're not durable. If that image is resized, converted, optimized/recompressed, it will read as invalid even if it is valid. That and, again, all you're proving in that scenario is that it was signed by someone. If that someone's entire identity is fake, you don't gain anything by proving that their content came from the same person. Basically as good as a PGP key, which is to say, not really adequate on its own.

This will be a response to both your post and Jensen's post.

 
Generally, a digital signature is given to data that isn't expected to be routinely modified, and images are subject to modification; influencers can't live without filters. An even bigger issue that you touched on is compression: 99% of images one sees on the web are heavily compressed to limit file size. Digital signatures are binary; if even one bit is changed, the data is not authentic, whereas if one pixel in an image is changed, it's pretty much the same to the human eye. Not to mention EXIF metadata (such as geolocation, etc.) is already stripped by any uploading service.
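That one-bit point is easy to demonstrate. A quick Python sketch, with fake bytes standing in for a photo:

```python
import hashlib

image = bytearray(b"pretend these bytes are a compressed photo")
before = hashlib.sha256(image).hexdigest()

image[0] ^= 0x01  # flip a single bit -- as a pixel tweak this would be invisible
after = hashlib.sha256(image).hexdigest()

print(before == after)  # False: one flipped bit yields a completely different digest
```

Any signature scheme built on a hash like this inherits that all-or-nothing behavior, which is exactly why recompression or a filter breaks it.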

 
Next, the PKI paradigm answers the question "is my data coming from the source it claims to be coming from?" -- i.e., did this data actually come from this source? It's generally meant to prevent man-in-the-middle attacks.

It's a little bit different from "was this picture actually taken by a camera, any camera?". Even assuming each camera in the world is unique, and each camera puts its public key on the internet (so assuming they're all digital and have a connection to the internet, which is a big enough assumption), you still don't know which camera it is. So it may be easy for the owner of a photo to "prove" his shot is real, but it may not be so easy for an arbitrary picture's authenticity to be proven false (or true).

First, this is more of a side track, as the post was about what people can do today to increase their confidence that they are seeing unmodified or only lightly modified images. It doesn't solve the fake-picture problem, but I would think that for those who have felt at the mercy of the ad, the article might help them improve their efforts. At least some might head into a session with more reasonable expectations about what they will find, or have more confidence in just moving on to the next lady of interest. Either one of those would seem to be an improvement for at least some of the people who have complained here after seemingly taking the ad pics at face value. I've gone back and looked at some of the ads those posts pointed to and wonder why on earth the guy ever believed them. But altered images are a recurring theme on TER.

 
That 30,000-foot view I suggested was not a proposed solution but an illustration of how a solution might work, though I admit that as written it suggested just that. PKI is more than just authenticating the other party; it includes message integrity, to ensure the message was not altered in transit. Check-sums and hashes might work, but I am certain there are alternatives for images -- those are just really cheap, well-known ways to check file or message integrity. DMR was (is?) apparently doing something with wave forms and certainly had no issues with compression. Something based on light frequency that works as a fingerprint for an image may well be possible and might also be invariant under resizing or compression. I don't think anyone has had a need to do something like that (and there may still not be a big enough need unless someone in this community wants to do the hard work).
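For what it's worth, something in the spirit of that resize-tolerant fingerprint already exists in the form of perceptual hashes. Here's a bare-bones "average hash" sketch in pure Python, using a toy grayscale grid in place of a real image and block-averaging as the resize (real implementations use proper image libraries):

```python
def average_hash(pixels, hash_size=4):
    # Downscale to hash_size x hash_size by block-averaging, then emit one
    # bit per cell: 1 if the cell is brighter than the overall mean, else 0.
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for by in range(hash_size):
        for bx in range(hash_size):
            block = [pixels[by * bh + y][bx * bw + x]
                     for y in range(bh) for x in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return tuple(int(c > mean) for c in cells)

# Toy 8x8 grayscale "image": bright left half, dark right half.
img = [[200] * 4 + [30] * 4 for _ in range(8)]
# The "same" image upscaled to 16x16 by pixel doubling.
big = [[px for px in row for _ in range(2)] for row in img for _ in range(2)]

print(average_hash(img) == average_hash(big))  # True: fingerprint survives resizing
```

Unlike a cryptographic hash, nearby images get nearby hashes, so a verifier would compare by Hamming distance rather than exact equality. The trade-off is the one raised above: it's far easier to forge than a real digital signature, so it only addresses alteration, not synthesis.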

-- Modified on 3/22/2024 4:40:35 PM

LLMs and incremental improvements usually mean that these issues will be fixed soon, maybe even next month. Lol.

Sure -- it's an ad picture right?

 
But that doesn't really work for me. I can do things like not hold strong expectations about the validity of the picture, so I don't expect the person to look like the image. That lets me consider the woman without preconceptions, to see whether I find her attractive in her own way or not. Then I can ignore the disconnect between image and reality and perhaps have a better time than if I kept dwelling on some false image or the discrepancies. But that's a different aspect of the game than finding ways to be more confident in one's assessment of the ad.

For a lot of people - in fact I'd say it's the majority of mongers - the picture in the ad is the main reason they choose a particular provider. I can't think of any other industry where a visual of the product/service is so overwhelmingly important before making the purchase.

Sure, which is exactly why they should use critical reasoning and evaluation skills rather than just believing what they see.
