🧠 1. App Stores Have Rules — But They’re Hard to Enforce Against Deep AI Misuse
Both Apple’s App Store and Google Play Store have policies that prohibit sexually explicit content, including apps that degrade or objectify people or generate nudity. However:
- App reviewers often evaluate apps based on how they are described and what they initially appear to promise.
- Many AI “undressing” or “nudify” apps don’t advertise overtly sexual features in their descriptions or screenshots, making them harder to catch during review.
- Some apps hide their true AI functions behind innocuous marketing or enable them after installation, so automated systems and human reviewers don’t always catch them on first review.
📱 2. Developers Exploit Loopholes or Misleading Labels
Developers often use terms like “AI photo editor” or “fun image tool” rather than “undress.”
- This can allow them to slip past initial review, because store enforcement is triggered mostly by visible metadata, not by hidden behaviour.
- Apps with dual purposes (e.g., face filters or editing tools) may combine legitimate functions with harmful AI gimmicks, and stores struggle to evaluate every feature.
🔍 3. AI Deepfake Technology Evolved Faster Than Moderation
The rapid rise of powerful AI tools that can manipulate images and videos has outpaced the ability of app stores to adapt:
- Even if policies exist on paper, detecting subtle or newly developed deepfake or nudity generation features often requires more advanced, specialized review tools.
- App review teams traditionally focused on malware or privacy issues — but AI abuse cases are a newer challenge that isn’t always easily flagged.
🧪 4. Some Apps Were Approved Long Ago
In many cases, the apps in question were uploaded months or even years before the controversy erupted, and stores have been slow to re‑review older apps when policies tighten or new abuses emerge.
- Even after reports revealed the problem, only some apps have been removed or suspended, while others remain available because enforcement lags behind discovery.
💰 5. Financial Incentives Can Complicate Policing
Both Apple and Google earn a revenue share (up to ~30%) from in‑app purchases and subscriptions.
- Apps that generate money — even problematic ones — can slip through because sheer volume and complexity make it hard to weed them out quickly.
- Large numbers of downloads (hundreds of millions collectively) show that these apps can be lucrative for developers and indirectly for the stores before enforcement catches up.
📌 6. After‑the‑Fact Enforcement Is Reactive, Not Proactive
Most removals or crackdowns happen after researchers and watchdogs publicize the problem and explicitly notify Apple or Google about specific offending apps.
- Without such reports, many harmful apps can stay on the platform simply because typical review processes didn’t detect their problematic features.
- Even after identification, some apps are removed only in part, or are reinstated once developers make superficial changes.
🧾 In Short
Apps that can digitally “undress” people ended up on big app stores because:
- App submissions are reviewed primarily on marketing and metadata, not always on actual behaviour.
- Developers disguise harmful features or hide them until after installation.
- AI capabilities evolved faster than moderation tools and policies.
- Stores sometimes fail to re‑review older apps or to enforce existing rules quickly.
- Revenue incentives and sheer volume make policing challenging.
These gaps mean harmful apps can stay available until someone flags them and the stores take action, leading to public concern and pressure on Apple and Google to tighten enforcement.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any agency, organization, employer, or company. All information provided is for general informational purposes only. While every effort has been made to ensure accuracy, we make no representations or warranties of any kind, express or implied, about the completeness, reliability, or suitability of the information contained herein. Readers are advised to verify facts and seek professional advice where necessary. Any reliance placed on such information is strictly at the reader’s own risk.