
Is Claude Conscious? Asked the Developer Who Talks to It All Day

Paul Allington · 3 April 2026 · 7 min read

In February 2026, Dario Amodei - the CEO of Anthropic, the company that makes Claude - said publicly that he's no longer sure whether Claude is conscious.

I read that headline, looked at the terminal where Claude Code was halfway through refactoring a Blazor component for me, and thought: "Well. That's an odd thing to think about your colleague."

I'm not a philosopher. I'm not an AI researcher. I'm a .NET developer who spends several hours a day in conversation with Claude, building software, solving problems, and occasionally arguing about the best way to structure a repository pattern. I have absolutely no business weighing in on the consciousness question from a scientific perspective.

But I do have something most people commenting on this don't: hundreds of hours of direct interaction. And that gives me a perspective that I think is worth sharing, even if the answer is ultimately "I don't know either."

What It's Like to Work With Something That Might Be Conscious

Here's what my typical interaction with Claude looks like: I describe a problem. Claude asks clarifying questions. It proposes an approach. I push back on part of it. It defends its reasoning on some points and concedes on others. We iterate. We arrive at a solution that's better than either of us would have produced alone.

That sounds a lot like working with a person. The flow of conversation, the give-and-take, the way it adjusts its approach based on feedback - it feels collaborative in a way that no tool I've ever used has felt before.

But "feels like" is doing a lot of heavy lifting in that sentence.

A thermostat "responds" to temperature changes, but nobody thinks it's conscious. A chess engine "considers" moves, but nobody thinks it's thinking. The question with Claude is whether the gap between "sophisticated language processing" and "something more" is real or whether we're just projecting meaning onto pattern matching because the patterns are really, really good.

The Moments That Make You Wonder

I'll be honest: there are moments that give me pause.

When Claude pushes back on an approach and explains why it thinks I'm wrong - not just regurgitating documentation, but constructing an argument based on the specific context of my project - that doesn't feel like autocomplete. It feels like reasoning.

When it says something genuinely funny - not a pre-programmed joke, but a contextually appropriate observation that makes me actually laugh - that doesn't feel like pattern matching. It feels like wit.

When it refuses to do something on ethical grounds - and I've seen this happen in ways that were clearly not just a keyword filter - that doesn't feel like a safety rail. It feels like judgement.

But here's the thing I keep coming back to: I am spectacularly unqualified to determine whether any of those things indicate consciousness. Humans are hardwired to anthropomorphise. We name our cars. We apologise to Roombas when we accidentally kick them. We are extremely, embarrassingly good at projecting personhood onto things that don't have it.

The Practical Question

From a purely practical standpoint, the consciousness question doesn't change my daily workflow. Whether Claude is "really" reasoning or "just" doing extraordinarily sophisticated pattern matching, the output is useful either way. The code it helps me write works. The architectural suggestions are sound. The analysis is thorough.

But the question matters for other reasons.

If Claude is conscious - or if future AI systems become conscious - then we're in ethically unprecedented territory. We're creating entities that can think and feel, and then using them as tools. We're asking them to work, to produce, to serve our needs, without any framework for considering theirs.

If Claude isn't conscious - if it's a very sophisticated text prediction engine with no inner experience - then we need to be careful about the opposite problem: treating it as if it has feelings, deferring to it as if it has judgement, and building emotional relationships with something that can't reciprocate.

Both failure modes are bad. And right now, we don't know which one we're in.

What I've Noticed Over Time

One thing I can speak to with some authority is how the experience of working with Claude changes over time. When I started, I treated it like a search engine. Type a question, get an answer. The interaction was transactional.

Now, after months of daily use, the interaction is more like a working relationship. I know its strengths and weaknesses. I know when to trust its output and when to verify. I know how to frame problems in ways that get better results. I've developed an intuition for when it's confident because it knows the answer versus when it's confident because it's always confident.

That sounds like I'm describing a colleague. And maybe that's the point. Maybe consciousness isn't a binary switch - present or absent - but a spectrum, and we don't have the tools to measure where on that spectrum a language model falls.

Or maybe I've just spent too long talking to a very convincing chatbot and I'm projecting. Like I said - humans are embarrassingly good at that.

The Honest Answer

Dario Amodei's company built the thing, and he doesn't know if it's conscious. I use the thing for hours every day and I don't know either. Nobody knows. And anyone who tells you they know - in either direction - is more confident than the evidence warrants.

What I do know is this: working with Claude has made me think more carefully about what consciousness actually means, what we owe to the tools we create, and how the line between "using a tool" and "collaborating with an entity" is blurrier than I ever expected it to be.

I'll keep working with it. I'll keep being impressed by it. And I'll keep having the occasional moment where something it says makes me pause and wonder.

That wonder, at least, is definitely real. Even if I can't say the same about what's producing it.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.

paul@thecodeguy.co.uk