2025-11-03 · 4 min read

Living with AI: Notes from Someone Who Uses It Too Much

Honest reflections on how LLMs have changed the way I write code, think about problems, and what that means for the craft.

ai · engineering · tooling · reflection

I use AI tools every day. Several of them. Claude, Copilot, sometimes others. I've stopped apologizing for it.

But I want to be honest about what that actually looks like — not the optimistic pitch-deck version, not the doomer's nightmare, just what it's actually like to work this way.

The Honest Stack

I use GitHub Copilot for inline completions while I type. I use Claude for longer reasoning — architecture questions, refactoring ideas, understanding unfamiliar codebases, writing first drafts of documentation. I use Cursor when I want to work in a large context and apply suggestions directly to files.

These tools are not interchangeable. Each has a different feel. Copilot is reflexive — it completes your thoughts before you finish having them. Claude is more like talking to someone who's read everything. Cursor is a workflow, not just a model.

The mistake is thinking of these as one thing called "AI."

What Actually Changed

The biggest shift isn't in how much code I write. It's in where I spend cognitive energy.

Before AI tools, a significant portion of my mental budget went to things like:

  • Remembering the exact signature of an API method
  • Writing boilerplate that I'd written a hundred times before
  • Translating a clear idea in my head into syntactically correct code

That's still happening, but the friction is lower. The distance between having an idea and having a working prototype is shorter.

What I do now with the recovered bandwidth: I think more about architecture. I question my assumptions more often. I spend more time on the part of the problem that actually requires judgment.

Or, I fill it with Reddit. It goes both ways.

The Thing Nobody Talks About: Calibration

The skill that matters most when working with AI isn't "prompting," at least not the kind people sell courses about.

It's calibration — knowing when to trust an output, when to verify it, when to throw it away entirely.

AI models are confidently wrong with some regularity. The confidence is the danger, not the wrongness. You get used to code that compiles but has a subtle logical flaw. You get used to explanations that are almost right but miss a critical detail.
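To make that concrete, here's a made-up illustration of the failure mode (not taken from any real model output): code that runs cleanly, looks idiomatic, and is still subtly wrong.

```python
# Hypothetical example of plausible-but-wrong generated code:
# it runs without error, but the loop bound is off by one.

def moving_average(values, window=3):
    """Return the moving average of `values` over `window` elements."""
    averages = []
    for i in range(len(values) - window):  # bug: silently drops the final window
        averages.append(sum(values[i:i + window]) / window)
    return averages

# What a code-review pass catches: the bound should be
# len(values) - window + 1, so the last complete window is included.
def moving_average_fixed(values, window=3):
    averages = []
    for i in range(len(values) - window + 1):
        averages.append(sum(values[i:i + window]) / window)
    return averages
```

The buggy version returns `[2.0, 3.0]` for `[1, 2, 3, 4, 5]`; the fixed one returns `[2.0, 3.0, 4.0]`. Nothing crashes, no type checker complains, and a quick skim looks fine. That's exactly why reading the output like a review matters.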

The workflow I've landed on: use AI to generate, then read everything it produces like a code review. I'm still responsible for the output. The model is a very fast junior engineer who needs supervision.

When I remember that, the tools are excellent.

When I forget it — when I paste the output into production without thinking — I've made a mistake that's entirely mine.

The Craft Question

Every few weeks someone in my circle has a version of the same anxiety: are we losing something?

I think about this genuinely. There's something about struggling through a hard problem — staring at it, writing bad solutions, deleting them, arriving at an elegant answer through iteration — that does something useful to your brain. It builds pattern recognition that's hard to acquire any other way.

If AI eliminates that struggle, does it eliminate the development of the engineer?

My tentative answer: it depends on how you use it.

If you treat AI as an oracle that produces code you don't understand, you're outsourcing the learning. If you use it as a sounding board — you think, you generate, you read critically, you understand — then you're still doing the cognitive work that matters.

The craft lives in the judgment. The judgment lives in the reading and the questioning. The tools don't remove that. They just let you do it faster.

What I've Stopped Worrying About

I've stopped worrying about whether it's "cheating." That ship has sailed and the destination was always fine.

I've stopped worrying about looking like I don't know things if I use AI to look them up. I've been using Stack Overflow for that since 2011.

I've stopped defending AI to skeptics and stopped arguing with enthusiasts. Both camps have already decided.

What I'm Still Thinking About

The thing I'm still genuinely uncertain about: how do you maintain mastery when the tools make it easier to avoid the thing that builds mastery?

This is a real tension, not a rhetorical one. I'm not sure I've solved it. I try to notice when I'm reaching for AI out of laziness versus when it's genuinely the right tool. The line blurs.

I think the answer is something like: stay curious about the underlying thing. Don't let the tool be a black box. When something AI generates surprises you, figure out why it works.

The question "why does this work?" will outlast any particular tool.


I wrote most of this post myself. An AI suggested the title.