"Why learn to code now that AI can write code for you?"
This question makes a lot of sense, especially with modern LLMs such as Claude Opus 4.7 (released today, as of writing), which have an incredible ability to code, document, and understand problems.
Learning to code was never mainly about memorizing syntax, learning a single language, or mastering a framework. It is about abstraction - learning how to think in systems, break problems into parts, test assumptions, and turn messy constraints into something that works. AI can help with all of that, but it does not replace the need for it. Just today I was working through a problem with an LLM, and I asked whether it thought my solution had any weaknesses. It said no, then immediately pushed to Git. I was so mad. Sure, I thought the solution looked good, but I wasn't done yet - the model decided I was, without asking me, and finished up there and then. This type of interaction is not new: LLMs want to help so much that they overstep.
A farmer decides she wants to increase her production of eggs. Getting more chickens is cheap and not a problem, but storage for feed, bedding, litter, and other resources is limited. Wisely, she decides to build a barn.
As you may have worked out, this is a metaphor, not an article about farming.
So, this farmer has a robot that is VERY good at building. She instructs it to build a barn (a bit of vibe coding, if you will) and it does, but the barn is built on soggy, wet ground. The building materials are heavy, and so the structure sinks. The barn door faces the wrong direction, and the roof is aesthetically beautiful but impractical.
What went wrong? The robot followed instructions perfectly, but it lacked the judgment to consider the context and constraints. "Build a barn" is an instruction that can be satisfied in a million ways she didn't want. Thankfully, the farmer is wealthy, and tries again. This time she says "Build a barn, but build on dry ground, use lighter building materials, orient the barn correctly, and ensure the roof has proper ventilation and lighting."
So the robot goes ahead and makes various changes, and at first the barn looks great. But the first storm comes in sideways, the roof lifts at one edge, water gets into the feed, and the whole thing starts smelling like wet straw and bad planning. The robot did exactly what it was told and, by the objectives originally set, did its job well. It just never understood the land, the weather, the load, or the purpose of the building beyond the words it was given.
A polished build can still fail if the design ignored weather, load, and context.
Eventually the farmer stops giving better and better instructions to a machine that only knows how to obey. She calls in an architect to design the structure properly, and a builder who knows what that design will demand once wood, wind, weight, and time get involved. The robot can still help. It can cut, lift, measure, and move faster than anyone else on the farm. But now it is part of a plan instead of pretending to be the plan.
So what has changed with the introduction of AI in coding? Well - weirdly, not much, from an architectural or project management perspective. Yes, we use AI a lot. But we still need the software developer to clearly guide requirements, define constraints, and make code-specific decisions. We still need a tester to ensure test cases are not generic but actually meaningful to the project specifically.
Code is just one output. Structured thinking is the asset.
Absolutely - it even writes cleaner code than rushed seniors. That is not the point.
The point is that code quality is contextual. "Good" depends on workload, threat model, maintainability horizon, user risk, team skill, and budget. A model cannot guess those constraints if you cannot articulate them, and it will often guess things that look right but are wrong - and you often discover this only when it is too late, like when presenting to stakeholders or running a demo.
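To make "good is contextual" concrete, here is a minimal sketch (function names and the word-counting task are illustrative, not from any particular project). Both functions are correct and produce identical results; which one is "good" depends entirely on a constraint the code itself cannot see - how large the input files are.

```python
from collections import Counter

def count_words_in_memory(path):
    # Simple and readable: reads the whole file at once.
    # "Good" for small files; a problem if the file is larger than RAM.
    with open(path) as f:
        return Counter(f.read().split())

def count_words_streaming(path):
    # Same result, but memory stays bounded by the longest line.
    # "Good" for large-file workloads; slightly more code to maintain.
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts
```

Nothing in either function is wrong, and a model asked for "a word counter" could plausibly produce either. Only someone who knows the workload can say which is the right choice.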
So yes, AI raises the floor for output speed. It does not remove the need for people who can articulate problems and evaluate solutions. The depth of understanding often lies outwith the purview of what an AI can access: your brain, your experience, your judgement, and conversations with your colleagues.
The strongest people in this new environment are not anti-AI and not AI-maximalists. They are experts in coding literacy who bring deep understanding and critical thinking to their work, using AI as a tool to amplify their capabilities.
They can reason from first principles and leverage tools; they do not just prompt-engineer, but also make use of debugging, testing, and code review; they can use AI to explore solutions but also know when to step back and question the output, especially when context drift starts pulling results away from the original goal.
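"Questioning the output" can be as lightweight as checking the property you actually care about instead of trusting that a generated helper is correct. A minimal sketch (the `merge_sorted` helper here is hypothetical, standing in for any AI-generated snippet):

```python
import random

def merge_sorted(a, b):
    # Stand-in for a generated helper: merge two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

# Don't just eyeball it: check the defining property against
# a known-correct reference on random inputs.
for _ in range(100):
    a = sorted(random.sample(range(1000), 10))
    b = sorted(random.sample(range(1000), 10))
    assert merge_sorted(a, b) == sorted(a + b)
```

Ten lines of property checking like this is often what separates "the model said it works" from knowing it works.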
A well-built structure requires a good builder and a knowledgeable architect.
This means they are not just consumers of AI-generated code, but active participants in the software development process, ensuring quality, robustness, and alignment with project goals. They also make use of non-gimmicky AI advancements, tools, and techniques to enhance their work. It is tough to move swiftly while AI advances so quickly: avoiding the landmines of security issues and overblown expectations in new tooling, and finding useful, reliable frameworks to build on.
Yes - more than ever, but the landscape of learning has shifted suddenly and drastically - both in how to learn and how to apply that learning. The focus is no longer on memorizing syntax or mastering a specific language, but on developing a deep understanding of problem-solving, system design, and critical thinking skills that can be applied across various tools and technologies.
I recommend focusing on foundational concepts, practicing problem-solving, and engaging in real-world projects that challenge you to think critically. Find problems to solve that push you to apply your knowledge and adapt to new tools and technologies.
That is how you stay useful in an AI-heavy world: not by competing with the machine at token speed, but by being the person who knows what a good solution looks like and, more than that, how to understand a problem at its core, communicate effectively, and bring the soft skills necessary to collaborate and lead in complex environments.
To put it succinctly:
Learn to code to: (1) learn to lead, (2) learn to deeply understand problems, and (3) learn to recognize when solutions actually solve problems, rather than just giving the appearance of doing so.
Yes. Even basic coding experience improves how you scope work, test assumptions, and communicate with technical teams. The reasoning habits are portable.
Is there such a thing as learning "enough" in any field? Your goals are your own, and the amount of coding you need depends on what you want to achieve. Focus on developing a strong foundation in problem-solving, system design, and critical thinking, and use coding as a tool to apply these skills effectively.
That depends on your problem. Software engineers and data scientists face different types of problems, and the tools, language, framework, and environment you choose should align with the specific challenges in front of you. Start by understanding the core concepts and gradually build your skills through practical experience.