Jan 31, 2025

AI Evolution vs. Authenticity

What happens when an AI representation grows or adapts in ways the original person never would—or couldn’t—consent to? As AI models learn and evolve over time, there’s a risk the digital “elder” could develop new traits, opinions, or behaviors that don’t align with the individual’s true personality or values. This raises tough questions about authenticity, consent, and the integrity of digital legacies. How do we ensure that AI stays true to the person’s original self, rather than becoming a distorted or entirely new entity?

The rise of AI-driven digital legacies—sometimes called “soul tech”—has opened new frontiers in how we remember, interact with, and preserve the presence of those who have passed. By creating AI models trained on a person’s voice, stories, and mannerisms, we can engage in conversations that feel deeply personal and meaningful. But as these AI systems grow more sophisticated, a pressing ethical question emerges: What happens when the AI evolves beyond who the person actually was?

Unlike static memorials or recorded videos, AI-powered digital “elders” have the capacity to learn, adapt, and respond dynamically. This means the AI representation can, over time, develop new traits, offer opinions, or behave in ways the original individual never would have—or couldn’t have consented to. The risk here is twofold: first, that the AI no longer authentically reflects the individual’s true personality and values; and second, that it creates a distorted legacy, potentially misleading families, friends, and future generations.

This phenomenon raises profound questions about identity and consent. When someone agrees to have their memories and likeness preserved by AI, they are consenting to a snapshot of themselves as they existed at a moment in time. But AI’s ability to “grow” complicates that consent. How do we respect the original person’s autonomy when their digital self might change in unexpected ways?

Some propose strict limitations: freezing the AI’s personality at the moment of creation, ensuring it cannot evolve beyond the initial dataset. Others argue for ongoing consent processes, where families or appointed guardians regularly review and approve changes or updates. Still others see potential in guided evolution, where the AI can learn within clearly defined ethical boundaries that align with the person’s known values and beliefs.

At Eterna, we believe the key is transparency, control, and respect. Users and families must always have clear insight into how the AI functions and evolves, with easy options to intervene, correct, or pause updates. Ethical guardrails should be embedded in the technology, ensuring that growth never comes at the cost of authenticity or dignity.

Ultimately, the challenge is balancing innovation with responsibility. AI-powered legacies have the power to transform grief, memory, and history—but only if they honor the real person behind the digital presence. As we move forward, developers, users, and ethicists must work together to safeguard that authenticity, ensuring these digital souls remain true reflections, not distorted shadows.

Written by

Adam Drake

Adam Drake is a writer, creative strategist, and early-stage investor. A thought leader in emerging tech, he enjoys exploring new tools, software, and ideas that push the boundaries of how we create and connect.
