Scaling the Mountain
When I started my new job, I was staring at a mountain.
New language - C# hadn’t been in my stack for over a decade. New platform - Azure, which I’d never worked in. New architecture - microservices on Kubernetes, which I’d used briefly but never owned. And I was responsible for all of it. Not eventually. Immediately.
During my job search, I’d gotten feedback from a few companies that I lacked experience with agentic coding workflows. I’d used GitHub Copilot, which is little more than smart autocomplete, but I hadn’t built anything real with Claude Code or anything like it. I kept meaning to, but I never found the idea that motivated me enough to push through the learning curve. Then the job arrived and brought the motivation with it.
The way I used it mattered
I could have asked Claude to just do things for me. Write this migration. Fix this error. Configure this cluster. And it would have. But I would have been back in the same spot the next day, stuck on the next thing, with nothing to show for it except a diff I didn’t fully understand.
Instead, I asked it to show me. How does the ORM work here? What should I be looking for when I see this error? What are the tradeoffs in how this is configured? When it made a recommendation I wasn’t sure about, I pushed back and asked it to explain its reasoning. Sometimes I was wrong. Sometimes it was. Either way, I learned something.
That distinction - show me how versus do it for me - made all the difference.
What happened over the next few months
Within a couple of months, I went from genuinely lost to genuinely capable. Not just functional, but capable. I understood why things were built the way they were, what the failure modes looked like, and how to think about changes without breaking what was already working.
The questions I was asking Claude started to change. Early on it was “how does this work?” and “what does this error mean?” Now it’s “here’s what I want to accomplish - walk me through a plan to get there.” I’m no longer trying to figure out the system; I’m working on elevating it. Zero-downtime deployments, better rollbacks, untangling the parts of the architecture that have drifted into chaos.
That shift didn’t happen because Claude gave me the answers. It happened because I asked for the understanding behind the answers.
The thing about learning curves
There’s a version of AI tooling that makes you faster without making you better. You ship more, you understand less, and eventually the gap between what you can produce and what you actually know becomes a liability. I’ve seen that pattern before, in other contexts. It doesn’t end well.
What I found, and what I think is the more interesting use of these tools, is that AI can compress a learning curve dramatically if you treat it like a teacher rather than a vending machine. The mountain doesn’t get smaller. But you get wings, and a good updraft.
You still have to do the work of understanding. But you don’t have to do it alone, limited to your own pace and whatever documentation happened to get written.
I’m still learning. That part hasn’t changed. But for the first time in a while, I’m doing it faster than the job is throwing things at me.