Cyble’s Senior Director and Head of Solutions Engineering, Ankit Sharma, on Safe User Data Handling in Ghibli-Style AI Art
AI-powered art filters have taken the internet by storm, allowing users to transform their images into stunning Ghibli-style artwork. While these tools showcase the magic of artificial intelligence, beneath their charm lie serious privacy risks.
Cyble’s Senior Director and Head of Solutions Engineering, Ankit Sharma, recently spoke with The Hans India about the growing privacy risks associated with AI-powered image generation. He highlights that users often upload personal images without considering how these platforms handle, store, or share their data. Without clear policies, images could be retained, repurposed for AI training, or even exposed to security breaches. Beyond privacy, the rise of deepfakes and synthetic media raises concerns about identity theft and biometric fraud. Cybercriminals could exploit stylized images to create fake profiles, manipulate authentication systems, or spread misinformation. As AI-generated content evolves, so do its risks. Users must stay cautious, and companies must enforce strict security measures, transparent data practices, and automated image deletion to prevent misuse.
1. What are the potential privacy risks associated with uploading personal images to AI tools like ChatGPT’s Ghibli filter?
AI-powered image filters may seem harmless, but they come with inherent privacy risks. The biggest concern is data retention: if the platform stores images after processing, that stored data becomes an attractive target for cybercriminals. Even if the company has no malicious intent, weak security controls could lead to leaks or unauthorized access.
Another issue is unintended AI training. Some tools refine their models using user-generated images, potentially feeding biometric data into facial recognition systems without explicit consent. This raises concerns about profiling, surveillance, and data misuse. Users should also be wary of third-party integrations that could expose images to less secure environments, increasing the risk of breaches.
2. With rising concerns over deepfakes and AI-generated content, could the Ghibli trend contribute to identity theft or unauthorized image use?
Absolutely. While Ghibli-style images may seem innocent, they still contain enough facial data to be misused. Cybercriminals can build deepfake datasets using modified AI images, enabling impersonation scams, synthetic identity fraud, or even AI-generated avatars that mimic real people.
The risk extends beyond just fraud. Manipulated AI-generated images can fuel misinformation campaigns, damage reputations, or be used in extortion attempts. With the rise of AI-enhanced scams, a seemingly playful trend could become an entry point for more sophisticated cyber threats.
3. How can cybercriminals take advantage of AI-generated images for fraudulent activities or identity theft?
Bad actors are always looking for new ways to exploit emerging technology, and AI-generated images provide them with a versatile toolset. Here’s how they can weaponize these images:
- Social Engineering Attacks – Fraudsters can use AI-generated images to create fake profiles, impersonate executives, or deceive people into sharing sensitive information.
- Bypassing Facial Recognition – AI tools can generate modified images that may trick certain facial recognition systems, making biometric authentication less reliable.
- Manipulation & Blackmail – Attackers can tweak AI-generated images to fabricate compromising situations, leading to extortion or reputational damage.
- Synthetic Identity Fraud – AI-generated images can be combined with fake identity data to create an entirely new digital person for financial fraud.
With AI-generated content becoming more convincing, organizations and individuals must remain vigilant about where their images are uploaded and how they might be repurposed.
4. As a cybersecurity expert, what measures would you suggest to ensure the secure handling of user data while generating Ghibli artistic images?
Security should be built into the AI image-generation process from the ground up. Here are some critical safeguards:
- Real-Time Processing, No Storage – Images should be processed instantly and never stored beyond the active session. This minimizes exposure to data leaks; a minimal sketch of this pattern follows after this list.
- End-to-End Encryption – All uploads and downloads should be encrypted to prevent interception by attackers.
- Strict Access Controls – Only authorized personnel should have access to backend AI processing, and even that should be heavily monitored.
- Clear User Consent Policies – Platforms should provide transparency about data handling, allow users to opt out of AI training, and ensure compliance with privacy laws like GDPR and CCPA.
- Routine Security Audits – Regular penetration testing and compliance reviews can ensure that security measures keep up with evolving threats.
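To make the first safeguard concrete, here is a minimal sketch of what session-bound, in-memory image handling can look like. It is illustrative only: the stylize() function is a hypothetical stand-in for the actual style-transfer model, and nothing here describes any specific platform's implementation.

```python
# Minimal sketch: zero-retention, in-memory image processing.
# Assumes Pillow (pip install Pillow); stylize() is a hypothetical placeholder.
import io

from PIL import Image


def stylize(image: Image.Image) -> Image.Image:
    """Hypothetical stand-in for the real style-transfer model call."""
    return image.convert("L")  # illustrative transform only


def process_upload(upload_bytes: bytes) -> bytes:
    """Transform an uploaded image entirely in memory; nothing touches disk."""
    with Image.open(io.BytesIO(upload_bytes)) as img:
        result = stylize(img)
    out = io.BytesIO()
    result.save(out, format="PNG")
    # All buffers go out of scope with the request: once the response is
    # returned, no server-side copy of the original or styled image remains.
    return out.getvalue()
```

Because nothing is written to disk and no reference outlives the request, a breach of the server after the fact finds no stored images to expose.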
By prioritizing privacy-first AI design, companies can give users peace of mind as they enjoy creative tools like the Ghibli filter.
5. What steps should organizations take to ensure users’ images are deleted after processing?
A secure AI system should follow a zero-retention policy unless users explicitly request storage. Organizations should:
- Automate Image Deletion – The system should delete images immediately after processing, leaving no traces on the server (a short sketch of an automated deletion sweep follows this list).
- Give Users Control – Users should be able to see and delete their images at any time, with transparency on data handling.
- Enforce Third-Party Compliance – If an AI tool relies on external cloud services, those providers must also meet stringent deletion and privacy standards.
- Conduct Regular Privacy Audits – Independent security assessments should verify that no user images remain stored beyond the intended use.
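To illustrate the first point, below is a minimal sketch of an automated deletion sweep for any images that briefly touch disk (for example, a worker queue's staging directory). The directory path and five-minute TTL are assumptions for illustration, not any vendor's actual configuration; in a strict zero-retention design, a sweep like this is a backstop behind in-request deletion, not the primary mechanism.

```python
# Minimal sketch: TTL-based cleanup for a hypothetical staging directory.
# Requires Python 3.8+ for Path.unlink(missing_ok=True).
import time
from pathlib import Path

STAGING_DIR = Path("/var/app/staging")  # assumed temp location, illustrative
TTL_SECONDS = 300                       # assumed retention ceiling: 5 minutes


def sweep_expired_images() -> int:
    """Delete staged images older than TTL_SECONDS; return how many were removed."""
    now = time.time()
    deleted = 0
    for path in STAGING_DIR.glob("*.png"):
        if now - path.stat().st_mtime > TTL_SECONDS:
            path.unlink(missing_ok=True)  # tolerate races with other workers
            deleted += 1
    return deleted
```

Run on a scheduler (cron or an async task runner), a sweep like this gives auditors a concrete, testable control: at no point should an image survive on disk longer than the stated TTL.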
Ultimately, the goal is to provide a seamless creative experience without turning AI tools into privacy risks. The more proactive companies are about user data protection, the more trust they’ll build with their audience.
About Ankit Sharma
Ankit Sharma - Senior Director and Head of Solutions Engineering, Cyble Inc.
Ankit currently heads solutions engineering for Cyble Inc., managing a global team of some of the most brilliant solutions engineers and architects in the cyber realm. He is responsible for driving business growth across the globe and supporting Cyble Sales through his expertise in Program Delivery Management, Technical Sales, and Key Account Management. Ankit is also a highly skilled data security and privacy professional, specializing in data privacy (global privacy laws, regulations, standards, and Privacy Information Management Systems), Data Governance, Compliance Management, and Cloud Security.