From Convenience to Competence: What AI Actually Demands of Us [LinkedIn Post]

30 March, 2026

LinkedIn Post | Full LinkedIn Article

I have spent the last few weeks immersing myself in a variety of development projects, using a mix of Claude and Copilot. This may surprise readers of my earlier vibes/VIBES post, but the challenge is not whether to use these tools; it is understanding what kind of assistance they actually offer, and at what cost.

I will focus here on relatively approachable work: building and modifying a WordPress site, small automation efforts, incremental refactors. This is intentional: there is less at stake here than in, say, creating and troubleshooting a production CI/CD pipeline, but the patterns are quite similar. The same strengths show up, the same challenges repeat, and the lessons transfer more cleanly across the tech stack than most people expect.

That said, in these posts I’ll always try to at least nod to the comparable lessons for larger, more enterprise-focused work.

My initial sense of working with Claude and Copilot was that it felt like working with the most over-eager, enthusiastic junior intern you’ve ever met. BUT, the intern has memorized all the documentation. ALL. THE. DOCUMENTATION. And that has real, material value. Removing the burden of tracking down all of that information is, at least at first, magnificent.

Anyone who has veered even slightly off the happy path in a platform like WordPress knows how much context lives just outside the task itself. Plugin creation and management, child themes, CSS placement, even just the core configuration of the platform—none of this is conceptually hard, but all of it creates friction. It’s the kind of friction that slows work, breaks flow, and disproportionately punishes people who don’t spend most of their time working inside these systems.

Claude (and the others) simply know this stuff. You can stay focused on what you’re trying to accomplish, while it fills in the surrounding structure. So that’s pretty amazing, and certainly a massive time-saver.

That alone would make these tools useful, offering real productivity gains as a sort of hyper-optimized search engine you can query in natural language. But there’s more here, and accessing that additional value is, I think, the critical bar for success in AI implementations, regardless of their size or scope.

It all revolves around how you direct the application of the immense knowledge the tool holds.

Claude’s responses are inconsistent at times, over-engineered at others, and frequently in need of course correction and attention. And, quite dangerously, it will agree with your input immediately and enthusiastically, regardless of its quality.

A concrete example: Claude proposed a complex, JavaScript-based solution to change the appearance of some text. I suggested using standard CSS instead, which Claude, immediately and enthusiastically, accepted as a better approach. There are dual risks here. Solutions that look clever in the moment often undermine long-term maintainability, and the tool’s willingness to pivot means you have to maintain a fairly high level of skepticism about the reliability of its initial recommendations.
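To make the contrast concrete, here is the kind of plain-CSS fix I had in mind (the selector and property values are hypothetical, since the original post doesn’t specify them; in WordPress this would typically live in a child theme’s style.css or the Customizer’s Additional CSS panel):

```css
/* Restyle the text declaratively, instead of mutating it with JavaScript.
   .site-tagline is a placeholder; substitute your theme's actual selector. */
.site-tagline {
    font-style: italic;
    font-size: 1.1rem;
    color: #555;
}
```

A few lines of static CSS like this are easier to audit and maintain than script that rewrites the DOM at page load, which is exactly the maintainability tradeoff at issue.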

The phrase “human in the loop” has gained fairly widespread use, and this is what it means in practice: as a user, you need to know the structure, the overall best practices, and the general parameters of the architecture you’re working on or in, so you can guide Claude along its way. Indeed, guide may be too weak: you need to be able to critique the tool, to insist on things you know are true, and to demand enough insight from it to convince you otherwise.

The key skills end up being yours: your judgment, your pattern recognition, your instincts around architecture and troubleshooting. These are what organizations need at enterprise scale to couple with the knowledge embedded in the AI tool; they are also what you need if you want more than surface-level gains.

There’s a bit of legitimate magic here, however: you can use the same AI tool to educate yourself on those best practices and to create a feedback loop that corrects the AI’s behavior. If you build into your interactions an active practice of asking why a given approach is preferred, what alternatives exist, or what tradeoffs you’re accepting by doing something one way instead of another, you will often arrive at a far better, far more sound solution.

And you can do this from a place of learning: you don’t need to enter the interaction knowing about child themes in WordPress, but you do need the analytical skills to ask the right questions, to integrate the answers, and to eventually arrive at a shared understanding of the best practice. Crucially, you never need to know exactly how to implement it yourself.

This meta-usage, reflexively folding the tool back onto itself, using it to interrogate and critique and refine and improve its own output, is the single greatest lever we have for meaningful impact. This is what will define our ability to do more than merely automate mediocrity. That’s where the next article will focus: not just on using AI, but on using it to raise the bar for itself.

This entry was posted in Culture.
