Made by AI

The Context

I took 20 of my selfies and passed them to LensaAI, an app that generates 100 images from your selfies using AI models.

I downloaded these photos to my phone. Not all 100 photos are good. Maybe 10 to 20 at most are reasonable. The rest bear less and less resemblance to me as the AI weighs more towards animation. So apart from those 10 to 20 photos, I do not wish to be associated with the rest.

All photos on my phone get uploaded to Google Photos, whether I like it or not. If you are a user of Google Photos, you know it crowdsources image labelling, especially if an image is identified as one of your own.

So Google Photos picked up one of my original images and a LensaAI-generated image, and asked me: “Same or different person?”*
*This is the main picture at the top of the page.

“Same” is the only correct answer, since that is the only truth. The image on the left is me, although slightly animated. The image on the right is me as well. Note that selecting “Same” tells the AI model that both images belong to the same label: “Vipin Bhasin”.
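To see what that one tap does mechanically, here is a minimal sketch of how a “Same or different person?” answer could be turned into a training label. Everything here (FaceLabel, record_answer, the file names) is a hypothetical illustration of the idea, not Google Photos’ actual pipeline.

```python
# Hypothetical sketch: turning a "Same or different person?" answer
# into a training label. Not Google Photos' real code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaceLabel:
    image_id: str      # the photo being labelled
    person_label: str  # the person this photo is assigned to

def record_answer(original: FaceLabel, candidate_image_id: str,
                  answer: str) -> Optional[FaceLabel]:
    """Convert the user's answer into a label for the candidate image."""
    if answer == "Same":
        # The AI-generated image now carries my real identity label.
        return FaceLabel(candidate_image_id, original.person_label)
    # "Different" / "Not sure": the candidate stays unlabelled.
    return None

me = FaceLabel("selfie_001.jpg", "Vipin Bhasin")
print(record_answer(me, "lensa_render_042.jpg", "Same"))
# FaceLabel(image_id='lensa_render_042.jpg', person_label='Vipin Bhasin')
```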

Hopefully you are with me up to this point. What happens beyond is the interesting part. Let me explain.


The Problem

Let’s say Google sees LensaAI’s image-generation feature as a risk and a competitor to Google Photos. So Google Photos adds a new feature: it automatically** picks 100 selfies from your library and generates 500 AI-created, styled photos of you.
**Google does not ask what user data it can use every single time. It is a one-time blanket approval. 🙂

What happens now? The left photo, and many similar ones that I tagged as “Same”, could be picked up as input for the next set of generated photos. What do you think these new photos will look like?

I think if my left photo is picked up, then a few generations down the line I will look exactly like Anil Kapoor. Go see the photo above again. Prove me wrong.

So now the question is: could I have selected “Different” or “Not sure”?
The answer is no. Selecting either of those two options would be lying, since the photo does belong to me.

Let’s expand the scope of discussion.


The Present and The Future

Until now, I have considered the Google Photos example. But the same is true for Large Language Models (LLMs). These LLMs are churning out thousands of articles per day that make their way onto the internet. These articles and posts are slightly modified by humans to avoid detection by AI-content-detection algorithms, but mostly remain the same.

The AI-detection models are average at best. There is no 100% reliable mechanism to detect AI-generated content that is this close to human-generated content, whatever the type of data: images, text or speech.

Generating content with AI is much faster than humans generating it themselves. For example, I am going to take anywhere between 2 and 5 hours to complete this one article. AI could do it in 1 to 10 minutes, depending on the number of prompts needed, plus another 15 minutes for the “human author” to change some words in the post so the AI-detection models can be fooled.

In 10 years, we could have more AI-generated than human-generated content on the internet. This could lead to standardization and consolidation of thoughts, art and everything humans create: once models start retraining on their own output, each cycle narrows the variety a little further. It’s only a matter of time, and a fixed number of retraining cycles, before everything is messed up.
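To see why those retraining cycles matter, here is a toy simulation (my own sketch, not any real model): the “model” is just a Gaussian fitted to its data, and each generation it is retrained on samples drawn from its own previous output.

```python
# Toy simulation of "standardization through retraining": a model that
# repeatedly retrains on its own output loses diversity over generations.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human-generated" data with natural diversity.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()    # "retrain" on the current corpus
    data = rng.normal(mu, sigma, size=50)  # "generate" the next corpus
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {sigma:.3f}")

# On average the spread shrinks each cycle (the fitted estimate is biased
# low), and variety lost in one generation is never recovered by the next.
```

The outputs get narrower and narrower over the generations: that is the consolidation of thoughts and art, in miniature.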

Don’t think this could happen? Watch the video from MKBHD below to understand what I mean by standardization and consolidation.
We are now literally at a stage where the question pops up: what is a photo!?


The Solution

So now the question is: is there a way to prevent these huge AI models from messing up the internet, and to ensure a higher weightage for human-generated thought?

AI-content-detection models are not reliable. And AI content generation won’t stop; it is here to stay.

The only reliable way I can think of is to tag all AI-generated content. This would have to be done at the platform level, in the generation code itself, and kept extremely secure and tamper-proof. Filter such data out when retraining AI models. And, if possible, disallow AI content from mixing with human-generated content. Sites and firms that use AI-generated content should keep it separate from the regular world.
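To make the tag-then-filter flow concrete, here is a minimal sketch. The HMAC-based tag, the key handling and all the names are my own assumptions for illustration; real-world efforts (provenance metadata, watermarking) are far more involved.

```python
# Hypothetical sketch of "tag all AI content, filter it at retraining".
import hmac
import hashlib

SECRET_KEY = b"platform-held signing key"  # held by the generation platform

def tag_ai_content(text: str) -> dict:
    """Attach a tamper-evident 'Made by AI' tag at generation time."""
    mac = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return {"content": text, "made_by_ai": True, "tag": mac}

def is_tag_valid(item: dict) -> bool:
    """Check that the tag matches the content (i.e. was not forged)."""
    expected = hmac.new(SECRET_KEY, item["content"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, item.get("tag", ""))

def filter_training_corpus(items: list) -> list:
    """Keep only items that are not verified AI-generated content."""
    return [i["content"] for i in items
            if not (i.get("made_by_ai") and is_tag_valid(i))]
```

The weakness is visible right in filter_training_corpus: content whose tag has been stripped sails through. That is exactly why the tagging has to live inside the platform’s code and be tamper-proof, and why a production scheme would lean on public-key signatures or robust watermarks rather than a shared secret.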

Everything AI-generated should reside separately. Humans could enter and leave at will. But for AI-generated content to enter the human world, there should be enhanced scrutiny and processes in place. Otherwise we are looking at the creation of a huge mess that would cost much more to clean up than it cost to create.

There should be a new pseudo-world labelled “Made by AI”.

“Like” if you found this post helpful
“Comment” to share your views
“Subscribe” to stay connected
