Anthropic released 1,250 interviews about AI at work.
Their headline?
"Predominantly positive sentiments about AI's impact on their professional activities"
We ran the same interviews through structured LLM analysis.
1,250 conversations. 47 dimensions per interview. 58,750 data points.
The picture that emerged isn't negative. It isn't positive. It's stuck. People haven't resolved how they feel about AI. They're just using it anyway. And one group is struggling more than everyone else.
The data revealed three distinct psychological profiles. Scientists are thriving. The workforce is managing. But creatives? They're experiencing something closer to an existential reckoning.
The Existential Crisis
71.7% of creatives face identity threat, yet 74.6% are increasing their AI use. They're struggling the most and adopting the fastest.
"One thing I will never use AI for is getting it to write or re-write a section of text for me as that's no better than employing a ghost writer and I'd feel a fraud."
Struggle score combines identity threat, skill anxiety, meaning disruption, guilt/shame, unresolved tensions, and ethical concerns (0-10 scale).
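For readers who want the mechanics: below is a minimal sketch of how a composite like this could be computed, assuming an unweighted mean of the six sub-scores. The actual weighting isn't specified here, so treat the function as illustrative.

```python
from statistics import mean

# Illustrative only: the struggle score combines six sub-scores,
# each rated 0-10 per interview. The write-up doesn't specify a
# weighting, so this sketch assumes a simple unweighted mean.
def struggle_score(identity_threat, skill_anxiety, meaning_disruption,
                   guilt_shame, unresolved_tensions, ethical_concerns):
    return mean([identity_threat, skill_anxiety, meaning_disruption,
                 guilt_shame, unresolved_tensions, ethical_concerns])

# Example: a creative scoring high on identity threat and meaning disruption
print(struggle_score(8, 6, 7, 5, 7, 6))  # -> 6.5
```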
Here's what makes this bizarre:
Creatives have the highest struggle scores and the highest adoption rates.
74.6% are increasing their AI use. Meanwhile, 44.8% experience "meaning disruption." That's a fancy way of saying they're questioning whether their work matters anymore.
They're not avoiding AI because it hurts. They're running toward it despite the pain.
The interviews revealed deep tensions. Internal conflicts about AI. "Efficiency vs. Quality." "Convenience vs. Skill." "Speed vs. Depth." Almost everyone has them. Almost no one has resolved them.
This is the most important finding. People aren't resolving their AI tensions. They're living with them. Cognitive dissonance is the norm, not the exception. They don't need resolution to adopt. They adopt despite the unresolved conflict.
"AI saves me hours, but sometimes I wonder if I'm losing something in the process."
Every major tension follows the same structure: short-term benefit vs. long-term concern. AI delivers immediate value (speed, efficiency, convenience) while creating unresolved anxiety about the future (quality, authenticity, skill, control).
Think about what this means.
We've been told AI adoption is about capability. Can people use it? Will they learn? Do they have access?
But 85.7% of people aren't stuck on capability. They're stuck on meaning. They're using AI every day while simultaneously feeling conflicted about using it.
Cognitive dissonance isn't a barrier to adoption. It's the default state.
Our analysis cataloged every trust driver and distrust driver across 1,250 interviews. The top trust destroyer? Not "errors." Not "inaccuracy." Hallucinations. The confidence with which AI gets things wrong is more damaging than the mistakes themselves. It's the confident wrongness that destroys trust.
"I always assume the AI is lying to me."
52% of creatives frame AI use through "authenticity." Not harm, not fairness, but whether using AI makes them less real. The moral language they use tells the whole story: "cheating," "lazy," "shortcut."
Only 24.6% of the workforce and 13.7% of scientists cite "authenticity" as their ethical frame.
This explains everything. Creatives don't weigh AI use against harm or fairness; they weigh it against authenticity. Using AI feels like cheating at being themselves.
"One thing I will never use AI for is getting it to write or re-write a section of text for me as that's no better than employing a ghost writer and I'd feel a fraud."
The vocabulary centers on effort and authenticity, not on AI's impact on others.
3x higher guilt among those who hide their AI use.
Look at that moral vocabulary again.
"Cheating." "Lazy." "Shortcut."
These aren't words about AI's impact on others. They're words about what AI use says about you.
This isn't ethics in the philosophical sense. It's moral identity. Using AI feels like violating who they're supposed to be.
Scientists have the lowest "high trust" rate (8.8%) but also the lowest anxiety. How?
They treat AI as a tool, not a collaborator. They verify everything (52.9% always verify). They keep psychological distance. Their identity depends on their method, not their output.
Scientists trust through verification, not faith. That makes all the difference.
The lesson: Healthy AI adoption means building verification into your process. Keep your identity separate from AI's output. Scientists do this naturally. Others can learn.
"I only use the AI when it's not going to be publishable work, i.e. I'm writing a method, make a list of steps and then ask the AI to clean it up."
The unwritten rules of AI at work: nobody wrote them down, but almost everyone follows them.
Patterns that emerged across 1,250 conversations
Multiple professionals emphasized personal accountability for AI outputs
"I am not quite trusting enough in AI yet to allow any communications to leave my desk in my name that are not first reviewed and approved by me directly"
Developers and researchers described verification-first workflows
"Hallucinations are a problem, so my approach is often using AI to start in the right direction, but I always assume the AI is lying to me"
Writers and content creators described iterative refinement processes
"AI helps me get started but I always rewrite significant portions before it's ready"
Educators and professionals described social hesitancy around AI disclosure
"While AI use is generally more accepted among professional staff members, it is still taboo, especially when it comes to privacy concerns"
Creative and analytical workers expressed capability preservation concerns
"I don't think I would like to completely delegate tasks to AI because I worry that my competency with certain skills might decline over time due to lack of practice"
Many described cost-benefit calculations for AI assistance
"I specifically turned to AI because I was worried it might take me too long to educate myself and I needed to solve this problem now"
Nobody wrote these rules down. Nobody taught them. They're the unspoken constitution of AI at work, emerging organically from 1,250 individual experiences.
Anthropic's headline was "predominantly positive." They weren't wrong. People do see benefits. But benefits don't equal resolution.
85.7% of people are using AI while simultaneously feeling unresolved about it. That's cognitive debt. And like all debt, it compounds.
If you're a creative feeling like AI is eroding your sense of self, you're not alone. You're in the majority. The path forward is conscious adoption: understanding what you're trading, what you're protecting, and why it matters to you.
The scientists figured it out: verify everything, keep your identity separate, treat AI as a tool rather than a collaborator. That's resilience.
Anthropic Interviewer Dataset—1,250 interviews (Workforce: 1,065, Creatives: 134, Scientists: 51).
GPT-4o-mini with structured outputs (Pydantic schema). Each interview analyzed across 47 dimensions including identity threat, trust drivers, emotional triggers, ethical frames, and tension patterns.
1,250 analyses completed. 72.7% prompt cache hit rate. Total cost: $0.58. 100% success rate.
Comprehensive qualitative analysis schema including: AI conceptualization, emotional state, task boundaries, workplace context, trust profile, identity dimension, adaptation journey, ethical dimension, tensions, key quotes, and emergent themes.
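To make the pipeline concrete, here is a minimal sketch of what one analysis call could look like, assuming the OpenAI Python SDK's structured-outputs helper and a heavily abbreviated Pydantic model. The field names below are illustrative; the real schema covers all 47 dimensions.

```python
from pydantic import BaseModel
from openai import OpenAI

# Abbreviated, illustrative schema -- the full version spans 47 dimensions
class InterviewAnalysis(BaseModel):
    ai_conceptualization: str        # tool vs. collaborator framing
    emotional_state: str
    identity_threat: int             # 0-10
    trust_drivers: list[str]
    distrust_drivers: list[str]
    ethical_frame: str               # e.g. "authenticity", "harm", "fairness"
    tensions: list[str]              # e.g. "Efficiency vs. Quality"
    key_quotes: list[str]

client = OpenAI()

def analyze(transcript: str) -> InterviewAnalysis:
    # Structured outputs: the reply is parsed and validated against the schema
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Analyze this interview about AI at work."},
            {"role": "user", "content": transcript},
        ],
        response_format=InterviewAnalysis,
    )
    return completion.choices[0].message.parsed
```

Because the system prompt and schema stay identical across all 1,250 calls, that shared prefix is presumably what drives the 72.7% prompt cache hit rate and keeps the total cost at $0.58.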
LLM interpretation introduces potential bias. Sample sizes vary (scientists n=51 is small). "Struggle score" is a composite index we created, not a validated psychological measure. Results should be viewed as exploratory.
Dataset is public. Analysis code and full results available on request. We encourage independent verification.
Analysis by Playbook Atlas
Data: Anthropic Interviewer Dataset (MIT License)