Researchers from Google and Stanford University have demonstrated that just 120 minutes of conversation is sufficient for AI models to create functional replicas of human personalities that can mirror their counterparts’ responses with 85% accuracy.
The study, posted to the preprint server arXiv on November 15, involved creating “simulation agents” (essentially AI clones) of 1,052 individuals based on in-depth interviews.
These weren’t just superficial chats; participants shared life stories, personal values, and opinions on societal issues, providing the AI with rich contextual data that typical surveys or demographic information might miss.
What makes this research particularly noteworthy is that participants were encouraged to highlight what mattered most to them personally, allowing the AI to capture the nuances of individual priorities and perspectives.
But Haven’t We Always Known AI Can Mimic People?
The accuracy level achieved in this study represents a significant leap forward from previous attempts at creating digital simulations of human behavior.
While chatbots and virtual assistants have long attempted to sound human, they typically rely on generic patterns rather than capturing individual-specific traits.

This research demonstrates that modern AI can do far more than simply sound human—it can actually predict how specific individuals would respond across a range of scenarios.
The researchers tested these AI replicas by having both the human participants and their AI counterparts complete identical assessments, including the General Social Survey (a standard tool for measuring social attitudes), the Big Five Personality Inventory, and economic decision-making games like the Dictator Game and Trust Game.
When the results were compared, the AI agents matched their human counterparts’ responses with 85% accuracy. On the General Social Survey, that figure is normalized: the agents predicted participants’ answers 85% as accurately as the participants reproduced their own answers when they retook the survey two weeks later. That is an unprecedented level of personality replication, and it suggests we’ve crossed a threshold in AI’s ability to model human behavior.
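A minimal sketch of that scoring logic, assuming survey responses are stored as parallel lists of categorical answers (the function names and toy data are illustrative, not from the study):

```python
def raw_accuracy(answers_a, answers_b):
    """Fraction of survey items on which two answer lists agree."""
    assert len(answers_a) == len(answers_b)
    return sum(a == b for a, b in zip(answers_a, answers_b)) / len(answers_a)

def normalized_accuracy(human_t1, agent, human_t2):
    """Agent-vs-human accuracy divided by the participant's own two-week
    test-retest consistency, so 1.0 means the agent predicts the person
    as well as the person predicts themselves."""
    self_consistency = raw_accuracy(human_t1, human_t2)
    return raw_accuracy(human_t1, agent) / self_consistency if self_consistency else 0.0

# Toy data: categorical answers to five GSS-style items.
human_t1 = ["agree", "no", "often", "agree", "yes"]       # original survey
human_t2 = ["agree", "no", "sometimes", "agree", "yes"]   # two weeks later
agent    = ["agree", "no", "often", "disagree", "yes"]    # AI replica

print(normalized_accuracy(human_t1, agent, human_t2))     # 0.8 / 0.8 = 1.0
```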
The Simulation Sweet Spots (And Blind Spots)
The AI replicas didn’t perform equally well across all tasks.
They excelled at replicating responses to personality surveys and at predicting social attitudes, areas where people tend to have consistent patterns of thinking that can be identified through conversation.
However, the digital doubles struggled more with predicting behaviors in interactive economic games, where social dynamics and contextual nuance play larger roles.
This limitation reveals that while AI can effectively model many aspects of human personality, it still falls short when complex social interactions and real-time decision-making come into play.
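To see why these games are harder targets than fixed questionnaires, consider the structure of the Trust Game the study used: each player’s payoff depends on what the other player chooses to do. A minimal sketch, using the conventional stake and multiplier rather than the study’s exact parameters:

```python
def trust_game(amount_sent, fraction_returned, endowment=10, multiplier=3):
    """One round of the classic Trust Game: the sender passes part of an
    endowment to a receiver, the experimenter multiplies it, and the
    receiver chooses how much to send back."""
    assert 0 <= amount_sent <= endowment
    assert 0.0 <= fraction_returned <= 1.0
    pot = amount_sent * multiplier
    returned = pot * fraction_returned
    sender_payoff = endowment - amount_sent + returned
    receiver_payoff = pot - returned
    return sender_payoff, receiver_payoff

# The sender's best move hinges on predicting the receiver's unobserved
# choice, the kind of context-dependent judgment the replicas missed.
print(trust_game(amount_sent=5, fraction_returned=0.5))  # (12.5, 7.5)
```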
Lead study author Joon Sung Park, a doctoral student in computer science at Stanford, described the potential future implications in stark terms: “If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future.”
Practical Applications for AI Clones
The researchers envision these simulation agents serving legitimate scientific and social purposes rather than just being digital novelties.
According to their paper, this technology could enable:
- Testing public health policy effectiveness without the time and expense of large-scale human trials
- Understanding potential responses to product launches through simulated consumer reactions
- Modeling human reactions to major societal events that might be too costly, challenging, or ethically complex to study with real participants
“General-purpose simulation of human attitudes and behavior—where each simulated person can engage across a range of social, political, or informational contexts—could enable a laboratory for researchers to test a broad set of interventions and theories,” the researchers wrote.
These AI simulations could also help pilot new public interventions, develop theories around causal relationships, and increase our understanding of how institutions and social networks influence human behavior.
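To make “piloting an intervention” concrete, here is a rough sketch of what an A/B-style experiment over a simulated population might look like. Everything below is hypothetical: ask_agent is a placeholder that returns random answers rather than an interface from the paper, and the question text is invented.

```python
import random
from collections import Counter

def ask_agent(agent_id: int, prompt: str) -> str:
    """Hypothetical stand-in for querying one interview-grounded agent.
    A real system would prompt a language model conditioned on that
    person's interview transcript; here we just return random answers."""
    rng = random.Random(hash((agent_id, prompt)))
    return rng.choice(["support", "oppose", "unsure"])

def run_trial(agent_ids, question, intervention=None):
    """Poll every agent, optionally prepending an intervention message."""
    prompt = f"{intervention}\n{question}" if intervention else question
    return Counter(ask_agent(a, prompt) for a in agent_ids)

agents = range(1000)
question = "Do you support the proposed public-health measure?"
control = run_trial(agents, question)
treated = run_trial(agents, question, intervention="Here is a fact sheet...")
print("control:", control)
print("treated:", treated)
```

The appeal of this setup is that the same simulated population can be polled under both conditions, something a between-subjects human trial cannot do.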
Ethical Concerns and Potential Misuse
The researchers themselves acknowledge the technology’s potential for abuse.
We already live in a world where AI and “deepfake” technologies are being weaponized for deception, impersonation, manipulation, and abuse online.
The ability to create increasingly accurate personality simulations could amplify these threats.
Imagine an AI trained to mimic your communication style based on your public social media posts, then used to send convincing scam messages to your friends and family.
Or consider the privacy implications of companies using personality simulations to test manipulative marketing strategies without the consent of the people being modeled.
These concerns highlight the need for robust ethical frameworks and regulatory oversight as the technology develops.
The line between beneficial research tools and invasive digital replicas requires careful navigation.
The Controlled Testing Revolution
Despite these legitimate concerns, the researchers emphasize that simulation agents could revolutionize aspects of human behavior research by providing highly controlled test environments without the ethical, logistical, or interpersonal challenges of working with human subjects.
This approach could democratize certain types of research that currently require significant resources, allowing smaller institutions and developing nations to conduct sophisticated behavioral studies that would otherwise be out of reach.
It could also accelerate research timelines dramatically—imagine running thousands of simulated interactions in the time it would take to recruit and interview a single human participant.
Living Digital Echoes
As this technology continues to advance, we may face profound questions about identity, autonomy, and the nature of personal data.
If a simulation agent can effectively stand in for you in certain contexts, who owns that digital echo?
What happens when your simulated self makes decisions or expresses opinions that you wouldn’t?
Do we need legal frameworks for “personality rights” similar to those for image rights?
The 85% accuracy mark achieved in this study suggests we’re approaching a threshold where these questions will require not just theoretical discussion but practical answers.
In the meantime, the research provides a fascinating glimpse into how quickly AI can now capture the essence of human personality—and raises important questions about the boundaries we should establish around this rapidly evolving capability.
As we continue along this path of increasingly accurate personality simulation, we would do well to consider whether we want a future filled with digital replicas making decisions on our behalf, or whether some aspects of human identity should remain exclusively human.