Deepfake influencers and the ethics of virtual personalities

The line between reality and simulation on social media has always been blurry, but the rise of deepfake influencers and virtual personalities has erased it almost entirely. We are witnessing a fundamental shift in the creator economy: the emergence of “people” who do not exist, yet command millions of followers, sign six-figure brand deals, and influence real-world purchasing decisions.

From the hyper-realistic aesthetics of lifestyle models like Aitana Lopez to the stylized digital rendering of Lil Miquela, Artificial Intelligence (AI) and Computer-Generated Imagery (CGI) are redefining what it means to be an “influencer.” This technological leap offers brands unprecedented control and cost savings, but it also opens a Pandora’s box of ethical concerns regarding transparency, body image, labor rights, and the nature of truth in advertising.

Scope of this guide: In this guide, “deepfake influencers” and “virtual personalities” refer to digital characters managed by humans or agencies that present themselves on social media as entities with distinct lives, opinions, and aesthetics. This covers both AI-generated faces swapped onto human bodies (deepfakes) and fully 3D-rendered characters (CGI). We will explore the mechanisms of their creation, the business incentives driving their adoption, and the profound ethical questions they raise for consumers and regulators alike.

Key takeaways

  • Definition Nuance: Virtual influencers range from fully CGI creations to AI-generated “deepfakes” that use machine learning to synthesize realistic human features.
  • Brand Appeal: Companies prefer virtual talent for their scandal-free nature, 24/7 availability, and absolute creative control over messaging.
  • The Trust Gap: The core ethical conflict lies in transparency—whether followers know they are interacting with a synthetic entity or believe it is a real person.
  • Psychological Impact: Perfection in AI models exacerbates unrealistic beauty standards, potentially harming user self-esteem more than heavily edited photos of real humans.
  • Regulatory Pressure: Governments and platforms are moving toward mandatory labelling of AI-generated content to prevent consumer deception.
  • Economic Shift: Human creators face new competition from digital entities that don’t need sleep, paychecks, or breaks, raising questions about the future of creative labor.

What are deepfake influencers and virtual personalities?

To navigate the ethics, we must first understand the technology. The term “deepfake influencer” is often used interchangeably with “virtual influencer,” but there are technical distinctions in how they are built and operated.

1. The Virtual Influencer (CGI)

These are characters created using 3D modeling software similar to what is used in video games or animated movies. They are often stylized and do not strictly attempt to pass as photorealistic humans in every shot.

  • Example: Lil Miquela began as a CGI character. While she looks realistic, her texture and lighting often give away her digital nature.
  • Creation: Artists build a 3D mesh, rig it for movement, and place it into real-world photos or digital backgrounds.

2. The Deepfake Influencer (AI-Generated)

This category relies on Generative Adversarial Networks (GANs) or diffusion models. Deepfake technology learns facial landmarks and textures from thousands of images to synthesize a face that can be swapped onto a human body actor or generated from scratch.

  • Example: Many modern “AI models” on Instagram use Stable Diffusion or Midjourney to generate hyper-realistic static images that are indistinguishable from photographs.
  • Mechanism: An agency might hire a body double for movement and posing, then use AI to impose the “face” of the virtual brand ambassador, ensuring consistency across thousands of posts.
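The compositing mechanism described above can be sketched as a simple alpha blend of a synthesized face patch onto a frame of the body double. This is a minimal illustration only; the function name and array shapes are assumptions, and real pipelines add landmark alignment, colour correction, and temporal smoothing on top of this step:

```python
import numpy as np

def composite_face(body_frame: np.ndarray, synthetic_face: np.ndarray,
                   mask: np.ndarray, top: int, left: int) -> np.ndarray:
    """Alpha-blend an AI-generated face patch onto a body-double frame.

    body_frame:     H x W x 3 photo of the body double
    synthetic_face: h x w x 3 generated face patch
    mask:           h x w alpha matte in [0, 1] (1 = fully synthetic)
    top, left:      where the patch lands in the frame
    """
    out = body_frame.astype(np.float64).copy()
    h, w = synthetic_face.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None]  # broadcast the matte over the colour channels
    out[top:top + h, left:left + w] = alpha * synthetic_face + (1 - alpha) * region
    return out.astype(np.uint8)
```

Running this over every frame with a consistent face model is what keeps the "brand ambassador" recognizable across thousands of posts.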

3. The Hybrid “VTuber” Model

While VTubers (Virtual YouTubers) usually use anime-style avatars, the underlying technology—motion capture driving a digital puppet—is evolving. High-fidelity avatars allow streamers to present a photorealistic face that mirrors their real-time expressions, effectively becoming a live deepfake.

Why brands are embracing the artificial

The rapid proliferation of these digital entities is not an accident; it is a calculated move by marketing agencies and brands. The incentives are overwhelmingly economic and operational.

Absolute creative control

Human influencers are unpredictable. They can get sick, age, change their political views, or become embroiled in scandals that damage the brands they represent. A virtual personality does none of these things unless their creators want them to. A brand can dictate the exact caption, the precise lighting, the specific clothing, and the posting schedule down to the second. There is no risk of a virtual influencer making an off-color joke in a resurfaced tweet from ten years ago because they did not exist ten years ago.

Cost efficiency and scalability

While the initial setup of a high-quality 3D model is expensive, the long-term running costs can be lower than hiring top-tier human talent.

  • Travel: A virtual influencer can be in Paris for a morning shoot and Tokyo for an evening gala without buying a plane ticket. Backgrounds are simply swapped or rendered.
  • Scale: Once the workflow is established, an agency can generate hundreds of images per week. An AI model does not experience burnout or require rest days.

High engagement rates

Ironically, fake people often generate very real engagement. The novelty factor drives curiosity. Data suggests that virtual influencers can see engagement rates up to three times higher than their human counterparts. Audiences are fascinated by the technology and the storytelling involved in “crafting” a life for these characters.


The ethical landscape: where it gets complicated

While the business case is clear, the moral implications are murky. The rise of deepfake influencers introduces distinct ethical hazards that society is only beginning to grapple with.

1. The transparency problem: “Is she real?”

The most immediate ethical violation occurs when a virtual profile attempts to pass as a human without disclosure. This is known as “black box” influencing.

  • Deception: If a user follows an account believing they are interacting with a real person, and that account recommends a skincare product claiming “it cleared my acne,” that is a fundamental lie. A digital skin mesh cannot have acne, nor can it be treated.
  • Trust Erosion: When the reveal eventually happens (and it usually does), it can feel like a betrayal to fans who invested emotional energy into the persona.
  • Best Practice: Ethical agencies include tags like #VirtualInfluencer, #AI, or #Robot in the bio and posts. However, many newer accounts engaging in “AI fishing” deliberately obscure this to farm engagement from unsuspecting users.
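A first-pass check for the disclosure practice described above could be a simple hashtag scan of the caption and bio. The tag list below is illustrative, not a regulatory standard; actual disclosure requirements vary by jurisdiction and platform:

```python
import re

# Hypothetical set of synthetic-identity tags; real requirements differ by platform.
DISCLOSURE_TAGS = {"#virtualinfluencer", "#ai", "#robot", "#aigenerated"}

def is_disclosed(caption: str, bio: str = "") -> bool:
    """Return True if the caption or bio carries at least one disclosure tag."""
    hashtags = {tag.lower() for tag in re.findall(r"#\w+", caption + " " + bio)}
    return not hashtags.isdisjoint(DISCLOSURE_TAGS)
```

A check like this catches honest accounts but not deliberate evasion, which is why regulators are also pushing for labelling at the platform and file-metadata level.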

2. Unattainable beauty standards 2.0

Social media has long been criticized for promoting unrealistic body images through filters and Photoshop. AI influencers take this to a dangerous extreme.

  • The “Perfect” Human: AI models are often generated based on aggregated data of what is deemed “most attractive” by algorithms. This results in personalities with mathematically “perfect” symmetry, impossible waist-to-hip ratios, and flawless skin texture.
  • Impact on Self-Esteem: For young users, comparing themselves to a curated photo of a real human is damaging enough. Comparing themselves to a digitally fabricated entity that literally has no flaws is a recipe for dysmorphia. These influencers normalize a standard of beauty that is not just rare—it is biologically impossible.

3. Diversity, appropriation, and “Digital Blackface”

One of the most contentious issues is the creation of diverse virtual influencers by non-diverse teams.

  • The Issue: When a white-led agency creates a Black, Asian, or Latinx virtual influencer, they profit from the aesthetic and cultural capital of that identity without facing the systemic challenges real marginalized creators face.
  • Digital Blackface: Shudu, arguably the world’s first digital supermodel, is a dark-skinned Black woman created by a white male photographer. Critics argue this commodifies Blackness, allowing brands to check a “diversity box” in their campaigns without actually hiring or paying Black models.
  • The Counter-Argument: Proponents argue that art should not be restricted by the creator’s identity and that virtual diversity contributes to representation. However, the ethics of who gets paid for that representation remains a sticking point.

4. The “Synthetic Relationship” trap

Humans are hardwired to form social bonds. We project feelings onto inanimate objects, pets, and now, AI agents. Deepfake influencers often use first-person language (“I feel so sad today,” “I love you guys”) to cultivate parasocial relationships.

  • Emotional Manipulation: When an AI script engages in vulnerability—complaining about a breakup or expressing anxiety—it is simulating human emotion to drive engagement. Monetizing this simulated vulnerability can be seen as emotionally manipulative, especially when the target audience includes lonely or vulnerable individuals.
  • The Loneliness Economy: As AI chatbots become integrated with these influencers, fans can hold conversations with them. While this can alleviate loneliness, it risks replacing genuine human connection with a transactional, programmed interaction that cannot offer true reciprocity.

Regulatory frameworks and the push for disclosure

Governments are recognizing the risks associated with synthetic media. As of early 2026, the regulatory landscape is shifting from “wild west” to “strictly labelled.”

The call for mandatory watermarking

Regulators in the EU and parts of Asia are moving toward requiring visible watermarks or metadata on AI-generated content.

  • Platform Responsibility: Major platforms like TikTok, Instagram, and YouTube have rolled out features allowing (and in some cases requiring) creators to toggle a label indicating content is AI-generated.
  • Enforcement Challenges: Policing this is difficult. While metadata can be stripped, forensic AI detection is improving. The goal is to make “undeclared deepfakes” a violation of terms of service, potentially leading to demonetization or bans.
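The labelling-and-stripping problem can be illustrated with plain PNG text metadata via Pillow. This is a toy stand-in only: a text chunk like this is exactly the kind of strippable label the passage warns about, which is why production provenance systems rely on cryptographically signed standards (e.g. C2PA) instead:

```python
from io import BytesIO

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(image: Image.Image) -> bytes:
    """Embed a simple 'ai_generated' flag in PNG text metadata (easily stripped)."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    buf = BytesIO()
    image.save(buf, format="PNG", pnginfo=meta)
    return buf.getvalue()

def is_labelled(png_bytes: bytes) -> bool:
    """Check whether the flag survived; re-encoding the image would erase it."""
    return Image.open(BytesIO(png_bytes)).text.get("ai_generated") == "true"
```

The fragility of this scheme is the point: any screenshot or re-encode destroys the label, motivating both forensic detection and signed-metadata standards.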

Advertising standards

The Federal Trade Commission (FTC) in the US and the Advertising Standards Authority (ASA) in the UK are scrutinizing how virtual influencers disclose endorsements.

  • The “Material Connection”: Current laws require influencers to disclose if they are paid to promote a product. For virtual influencers, the “material connection” is absolute—the character is a commercial entity.
  • Misleading Claims: If a virtual influencer promotes a mascara saying “Look how much length this gives my lashes,” regulators treat this as a misleading claim because the “lashes” are rendered pixels, not the result of the cosmetic product. Brands must now be careful to frame these endorsements as artistic representations rather than product tests.

How it works: the workflow of a virtual influencer agency

Understanding the production pipeline helps demystify the “magic” and grounds the ethical discussion in reality. Here is what the process typically looks like in practice.

  1. Character Concept: The agency defines the persona. Age, backstory, personality traits, fashion style, and political alignment are drafted like a character in a novel.
  2. Asset Generation:
    • CGI Route: 3D artists sculpt the face and body using tools like Blender, Maya, or Unreal Engine.
    • AI Route: A dataset of images is curated to train a model (LoRA or similar) on the specific facial features of the character to ensure consistency across different lighting and angles.
  3. Content Production:
    • Photos are taken of real locations or body doubles.
    • The digital face is composited onto the body double.
    • Lighting matches are calculated to ensure the shadow on the digital nose matches the shadow on the real neck.
  4. Copywriting: A creative team writes the captions, responding to comments in the character’s voice. This is often where the “human” element comes in—real writers injecting wit, slang, and emotion into the machine.
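The four-step pipeline above can be modelled as a small data structure plus a production function. Every name here is illustrative (there is no standard agency API); the sketch just shows how the persona definition, the trained model reference, and the human-written caption come together in each post:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPersona:
    """Step 1: the persona is drafted like a character in a novel."""
    name: str
    backstory: str
    trained_model_id: str = ""          # Step 2: e.g. a LoRA checkpoint reference
    posts: list = field(default_factory=list)

def produce_post(persona: VirtualPersona, scene: str, caption: str) -> str:
    """Steps 3 and 4: render the character into a scene, attach human copy."""
    # The render placeholder stands in for asset generation + compositing.
    render = f"[render of {persona.name} at {scene} via {persona.trained_model_id}]"
    post = f"{render} {caption}"
    persona.posts.append(post)
    return post
```

Note that the only genuinely "human" input at posting time is the caption: the writers, not the model, supply the wit and emotion.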

Common mistakes brands make

When companies rush to capitalize on the trend without ethical consideration, they often face backlash.

  • The Uncanny Valley: If the rendering is almost human but slightly off (dead eyes, weird mouth movement), it triggers a biological revulsion response in viewers. This kills trust instantly.
  • Tone-Deaf Activism: Using a virtual influencer to speak on complex social justice issues (e.g., Black Lives Matter, LGBTQ+ rights) often backfires. Audiences see it as performative corporatism, asking, “Why does this pile of code care about civil rights?”
  • Hiding the Creator: Brands that try to anonymously run a virtual profile to create “organic” buzz often get “doxxed” by internet sleuths. When the corporate ownership is revealed, the illusion of authenticity shatters.

Case studies: The spectrum of virtual reality

To illustrate the diversity in this field, let’s look at three archetypes of virtual personalities, each exemplified by a prominent figure in the space.

1. The Lifestyle Model (e.g., Aitana Lopez)

  • Vibe: Pink hair, fitness enthusiast, gamer.
  • Model: Purely AI-generated static imagery.
  • Revenue: Earns thousands monthly through brand partnerships and exclusive content platforms (like Fanvue).
  • Ethical Friction: Often hyper-sexualized. Critics argue she exists solely to monetize the male gaze through an algorithmically optimized female form.

2. The Narrative Storyteller (e.g., Lil Miquela)

  • Vibe: Gen Z fashionista, musician, drama-filled backstory (breakups, robot awareness).
  • Model: High-end CGI.
  • Revenue: High-fashion partnerships (Prada, Calvin Klein), music streaming.
  • Ethical Friction: Blurred lines of reality. In one controversial campaign, she was shown “kissing” a real supermodel (Bella Hadid), which sparked a queerbaiting controversy because the interaction was entirely manufactured by a brand.

3. The Brand Mascot 2.0 (e.g., Colonel Sanders Virtual)

  • Vibe: A silver-fox, hipster version of the KFC icon.
  • Model: CGI used for specific campaigns.
  • Revenue: Direct brand marketing.
  • Ethical Friction: Low. Everyone knows it is a joke/ad. This represents the “safest” use of virtual influencers because the satirical intent is clear.

Who this is for (and who it isn’t)

This approach is for:

  • Tech-forward fashion and beauty brands: Companies selling an aesthetic or a fantasy often align well with the artistic nature of virtual influencers.
  • Gaming and Metaverse companies: The audience is already native to digital avatars; the friction is lower.
  • Experimental marketing campaigns: Brands looking for a short-term viral moment or a PR hook.

This approach is NOT for:

  • Trust-based industries: Healthcare, finance, or legal services. You do not want a non-existent person giving medical advice or selling life insurance.
  • Brands relying on “raw” authenticity: If your brand ethos is “unfiltered,” “natural,” or “grit,” a perfectly rendered AI face is the antithesis of your message.

Future outlook: The democratization of digital clones

As we look toward the latter half of the 2020s, the barrier to entry for creating deepfake influencers is collapsing. What used to require a Hollywood VFX studio can now be done on a gaming laptop.

We are approaching an era of “Personalized Influencers.” Imagine an Instagram feed where the influencer’s face subtly shifts to look more like you, or more like what you find attractive, dynamically adjusting in real-time. This level of hyper-personalization represents the final frontier of the attention economy.

Furthermore, we will likely see the rise of “Digital Twins”—real influencers licensing their likeness to be used by AI. A famous actor could “appear” in ten different commercials in ten different languages simultaneously, all generated by AI while the real actor sleeps. This creates a new legal class of asset: the rights to one’s digital geometry.

Conclusion

Deepfake influencers are not a fad; they are the logical evolution of a social media ecosystem that rewards visual perfection and consistent content output. For brands, they offer a seductive mix of control and novelty. For audiences, they offer entertainment and aesthetic pleasure.

However, the ethics of virtual personalities cannot be ignored. As we populate our feeds with entities that look human but feel nothing, we risk eroding our ability to distinguish between the manufactured and the real. The path forward requires a commitment to radical transparency. Brands must label their bots, creators must disclose their tools, and audiences must remain vigilant critical thinkers.

In a world where anyone can be anyone, authenticity becomes the most scarce—and valuable—currency of all.

Next steps: If you are a consumer, audit your “following” list and check which accounts are real people versus curated agencies. If you are a marketer considering AI talent, draft a robust “AI Ethics Policy” for your brand before commissioning your first render to ensure you stay on the right side of consumer trust.


FAQs

1. Are deepfake influencers legal? Yes, creating and running a deepfake or virtual influencer is legal. However, they must comply with advertising laws. For example, in the US, the FTC requires disclosure of commercial relationships. If a virtual influencer claims to use a product, it must be clear that the “experience” is a paid endorsement, and ideally, that the entity is not human to avoid misleading consumers about product results.

2. How do virtual influencers make money? They monetize exactly like human influencers: brand sponsorships, affiliate marketing, merchandise sales, and creating content for subscription platforms. Because they don’t have living expenses (food, rent), their profit margins can be higher, though the “salary” goes to the team of artists and writers behind them.

3. Can an AI influencer be sued? The code itself cannot be sued, but the agency or individuals creating and managing the influencer certainly can be. If a virtual influencer defames a real person, infringes on copyright (e.g., using a likeness of a celebrity without permission), or engages in false advertising, the human owners are liable.

4. What is the difference between a deepfake influencer and a VTuber? A VTuber (Virtual YouTuber) typically uses an anime-style avatar rigged to a real human who provides the voice and motion in real-time. The human is the distinct “soul” of the character. A deepfake or AI influencer is often more curated; the voice might be synthesized, the images generated statically, and the “personality” might be written by a committee rather than performed by a single actor.

5. Do people actually trust virtual influencers? Surprisingly, yes. Studies suggest that while people find them slightly less “authentic” than micro-influencers, they often find them more entertaining. For fashion and aesthetic inspiration, trust in their “taste” is high, even if trust in their “humanity” is non-existent. However, trust evaporates quickly if the account tries to hide its virtual nature.

6. Is using AI models considered false advertising? It depends on the product. If an AI model is used to sell clothing, it is generally acceptable (similar to a mannequin). If an AI model is used to sell foundation, mascara, or anti-aging cream, it is highly risky and likely false advertising, as the image shows computer-generated skin, not the actual performance of the cosmetic product.

7. Can I create my own AI influencer? Technically, yes. Tools like Stable Diffusion, Midjourney, and FaceSwap apps have democratized the creation of consistent characters. However, building a following requires storytelling, marketing strategy, and consistent high-quality output, which remains a significant amount of work regardless of the tools used.

8. Will AI influencers replace real human creators? They will likely replace some types of creators, particularly catalog models and generic lifestyle influencers whose primary value is looking good in clothes. However, they are unlikely to replace personalities who trade on deep human connection, humor, unscripted moments, and relatability. Humans crave connection with other humans who share their struggles, something AI cannot genuinely replicate.

