AI & Deepfakes

AI Deepfakes: What to Do When Someone Creates Fake Videos of You

January 4, 2026 · 11 min read

You receive a link from a concerned friend. In the video, you're saying things you never said, doing things you never did—but it looks exactly like you. Your face, your voice, your mannerisms. Except it's not you at all. It's a deepfake: AI-generated synthetic media that puts your face on someone else's body or creates entirely fabricated video and audio of you. Welcome to the new frontier of impersonation, where artificial intelligence can create convincing fake content of anyone using just a handful of photos or a few seconds of audio. This guide explains how deepfake technology is being used against ordinary people, how to detect it, how to fight back, and what protections exist in this rapidly evolving threat landscape.

Understanding Deepfake Technology (And Why It Matters to You)

Deepfakes use artificial intelligence, specifically machine learning models called GANs (Generative Adversarial Networks) and newer diffusion models, to create synthetic media. What once required Hollywood budgets and expert teams can now be done by anyone with:

- A few photos of your face (social media provides plenty)
- Basic technical knowledge (or access to user-friendly apps)
- A few hours of processing time

What deepfakes can create:

- Face swaps: putting your face onto someone else's body in video or photos
- Lip-sync manipulation: making you appear to say things you never said
- Full synthetic video: generating entirely new video of you from scratch
- Voice cloning: recreating your voice from audio samples to say anything

The technology is improving rapidly. What looked obviously fake two years ago can be nearly undetectable today. This isn't science fiction; it's happening to ordinary people right now.

How Deepfakes Are Being Used for Impersonation and Fraud

- NON-CONSENSUAL INTIMATE IMAGERY: The most common malicious use. Your face is placed onto explicit content without your consent. This may be distributed as harassment, used for blackmail, or sold as fake adult content.
- AI ROMANCE SCAMS: Scammers create video of 'you' to build trust in romance scams. Victims video chat with what they think is a real person, but it's deepfake technology showing your face with real-time manipulation.
- BUSINESS EMAIL COMPROMISE: Deepfake audio (voice cloning) is used to call employees and request wire transfers in an executive's voice. Cases have resulted in losses exceeding $35 million.
- DISINFORMATION AND REPUTATION ATTACKS: Synthetic video of you making offensive statements or engaging in inappropriate behavior. Even when debunked, the damage to reputation persists.
- IDENTITY VERIFICATION FRAUD: Deepfakes are used to defeat 'liveness checks' and facial recognition systems, potentially opening accounts or accessing services in your name.
- BLACKMAIL AND EXTORTION: Criminals create compromising deepfake content and threaten to release it unless payment is made. Even though the content is fake, the threat of release is real.

Tired of Fighting This Alone?

We remove impersonation accounts in 24-72 hours. Free consultation to assess your case.

Get Help Now

Detecting Deepfakes: What to Look For

While detection is becoming harder as technology improves, current deepfakes often show telltale signs.

Visual artifacts:

- Blurring around the face edges, especially near the hairline and ears
- Inconsistent lighting between face and body
- Unnatural eye movements or blinking patterns
- Teeth that look wrong: too perfect, blurry, or inconsistent
- Facial asymmetry that changes frame to frame
- Background inconsistencies or warping

Audio tells:

- Flat or robotic-sounding speech patterns
- Breathing sounds that don't match mouth movements
- Words that sound slightly 'off' or artificially smoothed

Metadata and context:

- Source verification: where did this content originate?
- Does the context make sense? Would this person actually say or do this?

Technical analysis: Deepfake detection tools (such as Microsoft's Video Authenticator or Intel's FakeCatcher) analyze videos for manipulation signals. While not perfect, they can help identify synthetic content.

When in doubt: if someone shows you suspicious content of yourself, don't panic. Document it, but know that provable fakes are often easier to discredit than they first appear.
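The source-verification habit above can be made a little more systematic. Below is a minimal sketch (Python standard library only; the URL is a hypothetical example) that breaks a suspicious link into the parts worth recording, so you can compare the serving host against platforms you actually trust before believing the content:

```python
from urllib.parse import urlparse

def describe_source(url: str) -> dict:
    """Split a suspicious link into parts worth recording:
    the scheme, the host that actually serves the content,
    and the path. Checking the host against known platforms
    is a quick first pass at source verification."""
    parts = urlparse(url)
    return {
        "scheme": parts.scheme,
        "host": parts.hostname or "",
        "path": parts.path,
        "is_https": parts.scheme == "https",
    }

# Hypothetical example: a link a concerned friend forwarded.
info = describe_source("https://video-share.example.com/clip/abc123")
print(info["host"])  # video-share.example.com
```

A mismatch between where content claims to be from and where it is actually hosted is not proof of a deepfake, but it is a cheap signal worth noting in your records.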

Fighting Back: Removing Deepfake Content

Deepfake removal requires aggressive, multi-platform action:

1. DOCUMENT IMMEDIATELY. Even though it's disturbing, save copies of the deepfake content. Record URLs, screenshot platforms where it appears, and note dates and sources. This evidence is essential for removal and legal action.
2. REPORT AS NON-CONSENSUAL INTIMATE IMAGERY. Most major platforms now have expedited review processes for NCII (non-consensual intimate imagery). Use these specific reporting channels; they're faster than general content reports.
3. FILE DMCA TAKEDOWN NOTICES. If the deepfake was built from photos or video you hold the copyright to (such as images you took yourself), a DMCA takedown applies. DMCA takedowns have legal force and require compliance.
4. CONTACT HOSTING PROVIDERS. If content is on websites rather than major platforms, identify the hosting company and submit abuse reports and takedowns directly to them.
5. USE SEARCH ENGINE REMOVAL TOOLS. Google expedites removal of non-consensual intimate imagery from search results; services like <a href="https://removefromgoogle.com" target="_blank" rel="noopener" class="text-primary hover:underline">RemoveFromGoogle.com</a> can help navigate this process. Even if the source content persists, removing it from search results limits distribution.
6. ENGAGE LEGAL RESOURCES. Attorneys specializing in cyber civil rights can send cease-and-desist letters, file subpoenas to identify anonymous creators, and pursue litigation.
7. CONTACT SPECIALIZED ORGANIZATIONS. The Cyber Civil Rights Initiative (cybercivilrights.org) provides resources, and some legal aid organizations offer free help for NCII victims.


Legal Protections Against Deepfakes

The legal landscape is rapidly evolving.

State laws: At least 48 states have laws against non-consensual intimate imagery, and many now specifically include synthetic/AI-generated content. States like California, Texas, Virginia, and New York have laws specifically targeting deepfakes.

Federal legislation: The DEEPFAKES Accountability Act and similar proposals are working through Congress. Existing laws around fraud, identity theft, and defamation may apply depending on how deepfakes are used.

Civil remedies: Regardless of specific deepfake laws, you may have claims for:

- Invasion of privacy
- Intentional infliction of emotional distress
- Defamation, if the content is presented as real
- Right of publicity violations
- Copyright infringement (your photos were likely used to train the model)

Criminal prosecution: If deepfakes are used for fraud, harassment, stalking, or extortion, criminal charges may be possible under existing laws.

International considerations: If the creator is overseas, enforcement becomes complicated, but platforms hosting the content are still obligated to respond to proper takedown requests.

Protecting Yourself in the Age of AI

Complete protection from deepfakes is impossible, but you can reduce risk and improve your response:

- Reduce your digital footprint. Every photo and video of you is training data for potential deepfakes. Consider what you share publicly and whether it's necessary.
- Diversify your online presence. Having verified accounts across multiple platforms with a consistent identity makes it easier to prove what's real.
- Create an 'authenticity anchor.' Consider creating verified content (like a video stating your real accounts, or using content authentication tools) that can serve as proof of identity.
- Monitor proactively. Set up Google Alerts for your name. Periodically search for your name combined with concerning terms. Consider monitoring services that scan for your likeness.
- Know your response plan. Before it happens, know who you'd call (attorney, professional removal service), what each platform's reporting process is, and how you'd communicate with your network.
- Stay informed. Deepfake technology and legal protections are evolving rapidly. What's cutting-edge today may be detectable or illegal tomorrow.

The AI genie isn't going back in the bottle. Deepfakes will become more sophisticated and more accessible. But so will detection tools, legal protections, and platform responses. You're not helpless; you're adapting to a new landscape where vigilance and rapid response are the best defenses.
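The proactive-monitoring habit above can be made repeatable. A minimal sketch (the name and watch terms are hypothetical placeholders) that generates the quoted search queries to run periodically, or to paste one by one into a saved alert such as a Google Alert:

```python
def monitoring_queries(name: str, terms: list[str]) -> list[str]:
    """Build quoted search queries pairing a name with watch terms.
    The bare quoted name comes first, then one query per term."""
    base = f'"{name}"'
    return [base] + [f"{base} {term}" for term in terms]

# Hypothetical example: terms worth watching alongside your name.
queries = monitoring_queries("Jane Doe", ["deepfake", "video", "leaked"])
for q in queries:
    print(q)
```

Quoting the name keeps results focused on exact matches; running the same fixed list on a schedule makes it obvious when something new appears.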

Ready to take back your identity?

Transparent pricing. No complicated forms. Professional results.