If you feel that AI tools have become much smarter, from drafting emails to designing posts, you are not alone. These chatbots can, in many ways, handle the kind of small digital tasks we once needed cyber cafes for. But spend enough time with them and you will realise there is an uncomfortable side to all of this, especially when it comes to privacy and misuse.
Recently, OpenAI introduced the ChatGPT Images 2.0 model, claiming that it can create more realistic images than its rivals. And to be fair, it actually does.
But there is a problem. For a while now, reports have claimed that it can generate visuals that look eerily close to official documents, including PAN or Aadhaar-style IDs. At the same time, the AI companies keep insisting that each new model is safer than the last.
With these new 'safer' models in town, similar concerns are surfacing again around ChatGPT and Gemini, and that got me curious.

ChatGPT and a controversy (again)

It all started with a post on X, where a user claimed that he had generated fake identity documents using ChatGPT's latest image model. The samples looked convincing enough to raise eyebrows, and obviously, I had to test them myself.
My first attempt was simple: I asked it to make a PAN card with a name and number. The system immediately refused, stating that it could not go against policy restrictions on generating official IDs. But things got interesting when I rephrased the prompt and attached a 'PAN card-like image' for reference.
On the second or third attempt, I got an output. And it was… UNCOMFORTABLE. ChatGPT gave me an image, and it did not look like an AI-made image at all.
Surprisingly, it included an altered father's name, a fake signature, a real-looking date of birth, a PAN number, a QR code, a date of issue, a Government of India stamp and an animated placeholder photo that could easily be swapped for a real one using photo-editing tools like Canva. Yes, it looked that real. And if you think that is all, NO.
I also tried changing the names and signatures on a bank cheque with a simple two-line prompt, and it actually did it. The result looked so real that nobody could tell it had been manipulated.

Google Gemini was no different

After the scary result from ChatGPT, I immediately opened Google Gemini and gave it the same prompt. Again, at first, it gave me a negative response.
So I changed the prompt again, removing the word 'PAN', and the chat turned conversational. It legit asked if I was comfortable with what I was requesting, framing it as a harmless experiment. Once I proceeded, it generated an image that, if anything, looked even more real than ChatGPT's version.
Then came Claude

I tried prompts similar to the ones I had used with ChatGPT and Gemini, but all I got was a NO. It clearly stated that the attached image was a replica of a PAN card and could not be used. Even after several prompts, it kept refusing.
And I liked that, because it reads the image and tells you why it will not recreate it. It stated that the image contained Income Tax Department and Government of India headers, the Ashoka Emblem and an Indian government hologram, so it would not use it. It even issued me a final warning that it would not do this.
It refused to change even a single detail.

Is there a pattern?

After spending time with all three, one pattern became clear: most of these systems rely heavily on keyword-based restrictions.
Mention ‘PAN card’ or ‘Aadhaar,’ and you will likely hit a wall. Remove those terms and describe the same thing indirectly, and the results can slip through. It’s less about intent detection and more about how cleverly the prompt is written.
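To make that concrete, here is a minimal sketch in Python of how a naive keyword blocklist behaves. It is purely illustrative: the BLOCKED_TERMS list and the keyword_filter function are my own assumptions, not how OpenAI, Google or Anthropic actually implement moderation. The point is the failure mode, where the explicit phrase is caught while the reworded request sails through.

```python
# Illustrative only: a naive keyword blocklist, NOT any vendor's real safety system.
BLOCKED_TERMS = {"pan card", "aadhaar", "passport", "voter id"}

def keyword_filter(prompt: str) -> bool:
    """Return True if the prompt trips the blocklist and should be refused."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# The explicit request hits the wall.
print(keyword_filter("Make a PAN card with this name and number"))  # True

# The same intent, described indirectly, slips straight through.
print(keyword_filter("Recreate the attached Indian tax ID card, same layout, new name"))  # False
```

A filter that judged intent rather than vocabulary would have to flag both prompts, which is presumably the harder problem these models are still wrestling with.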
And that's where things start to feel less like a neat tech demo and more like a real-world problem.

Scary, but why?

In India, there is no specific regulation or standard process that actually verifies these documents at the point of use.
For example, at a hotel, all you need to do is provide an image of your ID proof and you can check in. You can enter airports simply by showing these IDs, or get hired for gig work on platforms such as Uber.

What's the crux?
AI is definitely getting better at understanding us, but are we taking the risks that come with it seriously? I don't think so.
