1. Introduction
I’ve been writing code professionally since the early 2000s. That means I’ve lived through the silent shift from thick MSDN books on my desk to code snippets streaming in real time from a language model.
Back then, building something meant understanding the problem, designing a solution, and writing every line of logic yourself. And when it broke, and it always did, you stayed with it. Sometimes for hours, sometimes days. You debugged not just the code, but your own assumptions. You combed through forums, read scattered mailing lists, reverse-engineered undocumented behavior. You didn’t ask a model. You asked the void and hoped someone, somewhere, had already hit the same wall.
Even having an internet connection wasn’t always a given. Earlier still, programming meant relying on physical documentation like books, printed manuals, the occasional help file. Imagine trying to decode a COM interface without Google.
I’m not telling this as a badge of honor. It’s just to say: I know what it meant to build software when every step, from concept to fix, had to come from you. And because of that, I also feel the tectonic shift we’re in now more clearly.
Today, we build differently. We work with AI. And not just in the abstract sense of “tools are changing.” No, the very act of writing code is being reshaped.
This post isn’t a rant. It’s not a nostalgic lament. It’s a reflection on what it feels like to move from one era of development into another and what we risk misunderstanding if we treat that shift as mere laziness or decline. There’s something subtler going on. Something worth thinking through.
So let’s do that.
2. What Is “Vibe Coding”?
Let’s get one thing out of the way: “Vibe Coding” is not a rigorously defined term. There’s no IEEE paper on it, no canonical checklist. It’s a phrase that floats around developer forums and social media threads, often with a smirk or a sigh.
At first glance, it sounds harmless, even playful. But usually, it’s used to describe a particular kind of coding behavior: someone throwing together code snippets based on vibes rather than understanding. Think Stack Overflow copy-pasting, ChatGPT-driven guesswork, or blind trial and error until the program somehow runs.
The implication is clear: this isn’t real programming. It’s cargo cult coding, driven more by intuition and convenience than by comprehension. The stereotype carries an air of disdain: for junior developers, for bootcamp grads, for anyone “cheating” their way through code they don’t fully grasp.
But here’s the catch: it’s not that simple.
“Vibe Coding” isn’t binary. It’s a spectrum. At one end, yes, there are those who cobble together barely functional programs without understanding what they’re doing. That’s a risk. But at the other end, there’s something more interesting: experienced developers using their intuition, past knowledge, and smart tooling to accelerate their workflow. They still care deeply about structure and clarity. They just don’t fetishize typing every semicolon by hand.
You might be thinking: isn’t that just modern development? It is, but it doesn’t always look like it.
And that’s the friction. When does smart delegation turn into shallow hacking? When does intuition become irresponsibility? The term “vibe coding” doesn’t settle the matter. It merely points to a tension. And that tension, I’d argue, deserves a closer look.
3. My Current Practice: Human–AI Pair Programming
Let me give you two snapshots from my own work. Both are recent. Both are routine for someone with my background. And both, if you looked at them from the outside, might look suspiciously like vibe coding.
First: I’m setting up a REST API using FastAPI. It’s hardly my first web service. Over the past 20+ years, I’ve built APIs in .NET with or without WCF, in PHP, Java, and more recently in Python. I know the protocols, the architectural patterns, the trade-offs between synchronous and asynchronous handlers, the implications of status codes and serialization formats.
So here’s the question: why should I write every route handler by hand, line by line, just to prove that I can? When AI can scaffold it for me in seconds (based on my own spec), why would I turn that down? What does it prove to retype the obvious?
The second example is a bit more demanding: I’m prototyping a deep learning model in PyTorch. This isn’t my first, either. I’ve trained and fine-tuned models before, understand the underlying math, and I’m well-versed in concepts like batch norm, dropout, and overfitting. And I know the typical building blocks: transformers, gating, attention, CNNs, and so on.
So again, what exactly do I gain from writing `def forward(self, x):` by hand if the rest is boilerplate? If I already know what the layers should do, which loss function fits, and how to evaluate performance, is it less legitimate to have AI generate the scaffolding? Where’s the flaw, exactly?
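Here is roughly what that boilerplate looks like: a small classifier head, sketched for illustration. The layer sizes, the class name, and the architecture are my own placeholders, not the model from the actual prototype.

```python
# A sketch of the PyTorch boilerplate I'd rather generate than retype.
# Dimensions and structure are illustrative placeholders only.
import torch
import torch.nn as nn


class SmallClassifier(nn.Module):
    def __init__(self, in_dim: int = 32, hidden: int = 64,
                 n_classes: int = 10, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden),   # the batch norm mentioned above
            nn.ReLU(),
            nn.Dropout(p_drop),       # the dropout mentioned above
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The "obvious" forward pass: pure plumbing, no design decision here.
        return self.net(x)


model = SmallClassifier()
loss_fn = nn.CrossEntropyLoss()  # the loss I already know fits the task
```

The knowledge lives in choosing the loss, the regularization, and the evaluation strategy, not in typing `nn.Sequential` one more time.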
Here’s how I actually work:
I define the goal, constraints, and structure. I let the AI do the mechanical generation: the first draft. Then I validate it, refactor what feels clumsy, and integrate it into the larger architecture. If something is unclear or questionable, I dig in. If something breaks, I debug it myself (yes, I debug and look at the numbers). This isn’t blind delegation. It’s recursive refinement with full understanding.
And that, I would argue, is no different in spirit from how seasoned developers have always worked: focusing their energy on the parts that matter most, the parts that require actual thinking. Whether your assistant is a junior colleague, a code generator, or a language model, the responsibility still lies with you.
It’s not about typing. It’s about owning the result.
4. Why It Still Feels “Off” Sometimes
Let’s be honest: even when it all works, when the code is clean, the design sound, and the AI assistance well-used, something about this new mode of development still feels… off. Like I’ve skipped a step. Like I’ve cheated.
I’ve caught myself feeling it. That strange sense of unease after generating a full scaffolding in seconds. The code runs, the tests pass, the structure holds and yet a part of me whispers, “You didn’t really build that.”
Where does that voice come from? That’s a question I asked myself for quite some time.
Partly, it’s internalized from years, or even decades, of a different ethos. One where the act of typing was tied to craftsmanship. Where “real devs” knew their tools inside out, avoided shortcuts, and prided themselves on minimal dependencies. Where mastery was visible, line by hand-written line.
You learned to build by breaking things. By fixing them. By writing the wrong code and understanding why it failed. That wasn’t just a rite of passage. It was how you proved, to yourself and others, that you belonged.
And now? A newcomer with three months of experience can generate a working API in less time than it took you to configure your IDE back then.
But the tension isn’t just personal. It’s also cultural.
Media narratives still frame AI-assisted coding as either a magic shortcut or a threat to the profession. On one end: “Look what GPT-4 can do!” On the other: “Will developers become obsolete?” In both cases, the human is made to feel secondary. Either outpaced or displaced.
And so we experience a kind of cognitive dissonance. We’re working smarter, but it feels like we’re skipping the work. We’re producing more, but it feels less earned.
But here’s the uncomfortable truth: that feeling isn’t a sign that something’s wrong. It’s a sign that the norms have shifted faster than the narratives.
Maybe the real challenge isn’t how we code but how we think about what coding is.
5. The Flawed Premise of the Stigma
The unease around AI-assisted coding rests on a quiet assumption: that real developers write their code by hand.
Let’s examine that.
If you believe that mastery is expressed through keystrokes, then yes, generating code with an AI must seem like a shortcut. A betrayal, even. But that premise has always been flawed. Even before AI, we used frameworks, libraries, IDEs with autocompletion, scaffolding tools, code linters. We didn’t write everything from scratch. We wrote what mattered.
And more importantly: we took responsibility for what we shipped.
Mastery in software development isn’t about typing. It’s about design, understanding, and judgment. It’s about knowing what to build, why it matters, and how to make it resilient, maintainable, and clear to others. The hands are secondary. The mind leads.
We don’t question a pilot’s skill just because the plane is on autopilot. Or at least, we shouldn’t. But the stigma exists there too.
I know this from experience. As a flight simulator enthusiast, I’ve flown both styles. Small aircraft, no automation, every movement felt through the stick and rudder. It is visceral, demanding, precise. The body is fully engaged. Every gust of wind is your problem.
But I’ve also flown the heavy iron. Large commercial jets with complex flight management systems, autopilots, automated checklists. Here, the pilot’s role is different. You don’t fly the plane by muscle memory. You command it. You manage systems, anticipate failures, program flight paths. You work with a machine that extends your capability far beyond what hands alone could manage.
And yet, the question lingers: if the plane is flying itself, are you really the pilot?
The answer is yes! Because the essence of piloting, like coding, isn’t just in the manual act. It’s in the decisions, the foresight, the ability to diagnose, to intervene when needed, to know when to trust the system and when to override it.
Flying a modern airliner without those systems would be reckless. It’s not bravery, it’s poor judgment. The same holds in development. Insisting on doing everything manually, when better tools are available and well understood, isn’t craftsmanship. It’s nostalgia.
Both forms of flying are demanding in their own way. So are both forms of coding. Mastery is not in rejecting tools, but in using them without losing yourself in them. Not abdicating, but orchestrating.
Let’s get back to coding:
We don’t think less of an architect because they didn’t lay the bricks themselves. In both cases, we understand: the expert’s value lies in decision-making, oversight, and accountability.
The same is true in code.
The danger is not that developers are using AI. The danger is when they stop thinking critically because AI seems to be doing the thinking for them. When output is accepted without inspection. When errors are patched blindly. When understanding is deferred until something breaks.
The problem isn’t automation. It is abdication.
So if there’s a stigma around “vibe coding,” we should be asking: is the issue really the tool, or is it the absence of thought behind its use? The premise that using AI is inherently inferior collapses when we reframe the question from “who typed it” to “who understands and owns it.”
Interlude: How Vibe Coding Is Perceived by the Public
Step outside the developer bubble for a moment, and you’ll find that vibe coding has become a kind of shorthand: not for a technique, but for a stereotype.
In the public eye, it goes something like this:
- Every developer using AI must be “vibe coding”
- If you’re vibe coding, you’re not really developing
- And if that’s true, then why not replace entire dev teams with cheaper labor or even with the AI itself?
This view has traction. Some companies have already started acting on it. Mass layoffs, hiring freezes, a quiet recalibration of what “technical” work is worth. The implicit claim is that coding has become so democratized, so automated, that it no longer requires deep expertise. Anyone with an AI assistant and enough prompt finesse can build software, right?
Well — no.
This is where the narrative breaks down.
Yes, AI has lowered the barrier to producing code. That’s undeniable. But producing code is not the same as building systems. It’s not the same as ensuring reliability, maintaining quality, making trade-offs between performance and readability, knowing how to scale, when to refactor, or how to handle failure under pressure. Those are the things that don’t show up in a demo, but break in production.
The term vibe coding is often tossed around as if it captures some essential truth. But most of the time, it’s a superficial diagnosis. A way to signal suspicion without investigating context. It flattens a spectrum of practices into a punchline.
In doing so, it misses the most important distinction: between those who delegate with understanding, and those who delegate without it. That difference is invisible in the output but it is crucial in the outcome.
6. The Developer as Curator, Not Just Creator
There was a time when being a developer meant being a creator in the most literal sense. Every line was hand-written, every structure manually assembled. That approach taught discipline. It also created a strong emotional link between the code and its author.
But today’s developer often operates in a different mode. One that looks less like a sculptor and more like a systems designer or, more fittingly, a curator.
A curator does not produce every artifact. They select, arrange, contextualize, and elevate. Their judgment defines quality. Their responsibility lies in coherence, not raw creation. The same is now true for developers. You define the architecture, the standards, and the boundaries. You decide what is automated, what is reviewed, and what must be rewritten. The act of coding becomes recursive: generate, test, revise, integrate.
It’s not a step down. It’s a shift in focus: from typing to thinking.
To make this clearer, let’s stop treating all AI-assisted developers as one group. There’s a real gradient in how developers work with AI, depending on their mindset and skill level. Here’s a more accurate “classification”:
| Developer Type | Key Traits | AI Usage | Risk Level |
|---|---|---|---|
| Scripter | Follows examples blindly | High | High |
| Assembler | Glues code together with partial insight | Moderate | Medium |
| Integrator | Understands architecture, configures tools | Smart use | Low |
| Synthesizer | Designs, validates, iterates with AI | Strategic use | Very Low |
This is not about hierarchy. It’s about responsibility.
Where do these developer types come from? They’re my own invention, loosely inspired by how I’ve seen people work; they are not standardized categories. Think of them as a metaphor. What matters is that the terms make a real distinction expressible.
The scripter may get code to run, but rarely understands its structure. The assembler is beginning to connect the dots. The integrator makes thoughtful decisions and understands the moving parts. And the synthesizer does what a senior engineer does best: moves fluidly between design and implementation, using AI not as a crutch but as a co-pilot.
If you’re working with AI, the question is not whether you typed the code. It’s whether you own it. That’s the curator’s stance: discerning, accountable, and deliberate.
7. Final Thoughts: Embrace Evolution, Reject Stagnation
I didn’t plan to become an “AI-assisted developer.” It just happened naturally, as the tools matured and my projects evolved. What began as cautious experimentation turned into daily practice. Today, I would not want to go back.
Using AI in development is not about laziness. It is about leverage. When I can build faster without sacrificing clarity, when I can delegate boilerplate and focus on structure, when I can iterate ideas quickly and get feedback in seconds: that is not cheating. That is growth.
And yet, there is still a strange pressure to justify it.
It is disappointing, even a little sad, that experienced developers now find themselves defending the use of a tool that clearly improves productivity and quality. Not just to outsiders, but to peers. There is a kind of gatekeeping at play. Some developers dismiss AI assistance as a gimmick or worse, as a sign of incompetence. As if real mastery must always look like effort. As if tools somehow diminish understanding instead of enhancing it.
But progress has never waited for permission.
Tools have always shaped the profession. The compiler. The debugger. The IDE. Git. Docker. These were not signs of decline. They were catalysts. They allowed us to move up the abstraction ladder and solve harder problems. AI is simply the next step.
The question is not whether the landscape has changed. It has. The question is how we respond. You can resist the shift, cling to older rituals, and hope the tide turns. Or you can adapt, refine your judgment, and embrace the expanded role that real development now demands.
Ownership still matters. Understanding still matters. But the way we express those things is evolving. And pretending otherwise does not make you a better developer. It only makes you slower to catch up.
I choose to move forward, not because I reject the past, but because I’ve lived it. And I know what it costs to stand still.
Appendix: The New Developer Dilemma
There is a quiet question that keeps surfacing beneath the excitement about AI in software development. It is not about tools or productivity. It is about formation.
How do we grow the next generation of developers, of system architects, of responsible engineers if the path no longer includes the struggles that shaped us?
In the pre-AI era, learning to code meant wrestling with problems. It meant broken compilers, missing semicolons, confusing error messages, and the long search for an elusive bug. It meant working for hours to understand a concept, then finally making it work. Not because someone gave you the answer, but because you fought your way to it.
That process was frustrating. But it built something deeper than syntax. It built intuition. It built resilience. It trained the mind to ask better questions, to look beneath the surface, to understand.
Today, AI can offer answers almost instantly. Code that works. Suggestions that are often correct. Explanations that bypass the dead ends. It feels like magic. Sometimes it is.
But there is a risk here. If you start with AI, how do you know what you are skipping? How do you build judgment when everything is served on a silver platter?
Some say: the old rituals are no longer necessary. That we should not force people to suffer through the past just to earn their place. And maybe they are right. Maybe it is unfair to romanticize hardship when better tools now exist.
But then the real question becomes: what replaces that formative struggle? What new kind of discipline, curiosity, and responsibility should we cultivate, if we no longer rely on long hours and hard bugs to shape developers?
I do not have the answer. Not yet. But the question needs to be taken seriously. Because good engineers are not just defined by what they can make. They are defined by how they think, how they learn, and how they respond when the tools fall short.