How I Actually Use Claude Code (Not How I Expected To)
I thought AI coding meant generating code. Turns out the value is in the conversation.
I've been using Claude Code for three months.
My code generation has decreased. My velocity has increased.
That's not a contradiction.
What I Expected
Prompt: "Write a function that validates email addresses" Output: 40 lines of regex and edge case handling Me: Copy, paste, ship
That's the dream, right? AI writes code, humans review and merge.
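For scale, here's a toy version of the output I pictured, minus the 40 lines of edge-case handling a real validator needs. Treat it as an illustration, not a production regex.

```typescript
// A toy version of the "generate and ship" output I had in mind.
// Deliberately simplified: a real validator also handles quoted
// local parts, internationalized domains, length limits, and more.
function isValidEmail(address: string): boolean {
  // One @, something on each side, a dot somewhere in the domain.
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return pattern.test(address);
}

console.log(isValidEmail("user@example.com")); // true
console.log(isValidEmail("not-an-email"));     // false
```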
What Actually Happens
I rarely ask Claude to write code anymore.
Instead:
Me: I'm seeing intermittent 502 errors on the /api/upload endpoint.
The logs show the request reaching the handler, but no response logged.
Load balancer timeout is 30s. What should I investigate?
Claude: Let's narrow this down. A few questions:
1. Do the 502s correlate with file size?
2. What's between your handler and the load balancer?
3. Are you streaming the response or buffering?
Three questions. Each one pointed somewhere I hadn't looked.
The file size correlation led me to discover our reverse proxy had a 10MB body limit. Not a code bug. A configuration issue. Claude never wrote a line of code, but saved me hours of debugging.
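To make the file-size hunch concrete, this is the kind of throwaway probe you could run against a staging copy of the endpoint. The URL and payload sizes below are placeholders, not our real service.

```typescript
// Hypothetical probe: POST payloads of increasing size to the upload
// endpoint and note where clean responses stop coming back.
const endpoint = "https://staging.example.com/api/upload"; // placeholder URL

async function probe(): Promise<void> {
  for (const mb of [1, 5, 9, 11, 15]) {
    const body = new Uint8Array(mb * 1024 * 1024); // zero-filled payload of mb megabytes
    try {
      const res = await fetch(endpoint, { method: "POST", body });
      console.log(`${mb} MB -> HTTP ${res.status}`);
    } catch (err) {
      console.log(`${mb} MB -> request failed: ${err}`);
    }
  }
}

probe().catch(console.error);
```

Once the failures cluster above a size threshold, a body-size cap somewhere between client and handler becomes the obvious suspect (on nginx, for instance, that cap is client_max_body_size; other proxies have an equivalent setting).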
The Pattern
My most productive Claude sessions follow this structure:
1. Describe the symptom, not the solution
Bad: "Write a retry mechanism for HTTP requests" Good: "Users report the app shows 'connection failed' even when their internet works. It happens maybe 1 in 20 requests."
The first prompt assumes you know the solution. The second lets Claude help diagnose.
2. Share context aggressively
Me: Here's our current error handling:
[paste 30 lines]
And here's what users see:
[paste error message]
The disconnect is: we catch the error, but users still see it.
More context = better diagnosis. Claude can hold the full picture.
3. Ask for trade-offs, not answers
Bad: "What's the best way to cache this?" Good: "I could cache in Redis, in-memory, or at the CDN level. What are the trade-offs for a read-heavy, write-occasional pattern?"
Claude excels at comparison. It's seen every approach and knows the edge cases.
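For a sense of what the middle option looks like, here's a minimal in-memory sketch, assuming a read-through pattern with a TTL. The class and method names are mine, not from any particular library.

```typescript
// Sketch of the in-memory option from that comparison: a read-through
// cache with a time-to-live. Illustrative only.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
    const value = await load();                              // miss: reload from source
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }

  invalidate(key: string): void {
    this.store.delete(key); // call this on the occasional write
  }
}
```

The trade-off this surfaces: an in-memory cache is per-process, so it only works if slightly stale reads across instances are acceptable; Redis or a CDN move that state out of the process at the cost of a network hop.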
What I Still Use Code Generation For
- Boilerplate - Test setup, configuration files, repetitive CRUD
- Unfamiliar territory - "Show me how to set up a WebSocket connection in this framework"
- Refactoring - "Convert this callback-based code to async/await" (see the sketch below)
These are translation tasks. The logic is known; I just need syntax.
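As an example of that last kind of translation, here's roughly what a callback-to-async/await conversion looks like. readConfig is a hypothetical helper for illustration, not code from my project.

```typescript
import { readFile } from "node:fs";
import { readFile as readFileAsync } from "node:fs/promises";

// Before: callback style
function readConfig(
  path: string,
  done: (err: Error | null, cfg?: object) => void
): void {
  readFile(path, "utf8", (err, data) => {
    if (err) return done(err);
    try {
      done(null, JSON.parse(data));
    } catch (parseErr) {
      done(parseErr as Error);
    }
  });
}

// After: the same logic, converted to async/await
async function readConfigAsync(path: string): Promise<object> {
  const data = await readFileAsync(path, "utf8");
  return JSON.parse(data);
}
```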
The Productivity Gain
| Task | Before Claude | With Claude |
|---|---|---|
| Debugging mystery issues | 2-4 hours | 30-60 min |
| Learning new library | 1-2 days | 2-4 hours |
| Architecture decisions | Gut + Google | Informed trade-offs |
| Writing code | Same | Same |
Code writing speed didn't change. Everything around it did.
The Mindset Shift
AI coding tools are marketed as "copilots"—they write, you review.
A better mental model: the AI as a senior engineer in a pairing session. They don't grab the keyboard first. They ask questions, suggest approaches, spot patterns you've missed.
The value isn't generated code. It's compressed experience.
Three months in. Less code generated. Better code shipped.
The AI writes less so I can think more.