From Vibes to Vision: Engineering Is Evolving Fast
Why AI-native engineering teams need context, not just code generation.
Hiring interviews are evolving, and so must the way we build engineering teams. This is even more crucial when you're building a startup. Early technical hires have an outsized impact, and the ability to partner effectively with AI can be a force multiplier. As AI becomes a core part of the engineering toolkit, it's no longer enough to hire great coders. We're looking for AI-native builders who can collaborate with models as partners in development.
February 2nd, 2025:
This shift means reshaping both how we assess talent and how we build high-performance teams. In interviews, we're prioritizing four key attributes that signal a candidate's potential to thrive as a context engineer:
Systems Thinking – Can they break down a problem into structured components? Do they think in terms of architecture, flows, and dependencies?
Documentation Fluency – Can they communicate requirements clearly in structured formats like specs, markdown docs, and PRPs (Product Requirements Prompts)?
Tooling Intuition – Are they familiar with how different AI tools behave and how to structure input/output to avoid hallucinations?
Iterative Mindset – Do they show a pattern of validating, refining, and debugging AI-generated work rather than taking outputs at face value?
These are now front and center in our technical interviews. We’ve shifted away from toy coding challenges to tasks where candidates must plan and scaffold projects using an AI assistant, showing how they build context upfront.
June 25th, 2025:
What Killed Vibe Coding?
Remember when "vibe coding" felt like magic? You'd throw a vague prompt at your AI assistant, and somehow—maybe, probably—it worked. It was fast, fun, and frictionless. But like all tech fads, the limits showed up as soon as you tried to scale.
The dopamine hit of code generation wore off when hallucinations, bugs, and brittle outputs made it clear: intuition doesn't scale; structure does. That realization is ushering in a more deliberate, more powerful approach: context engineering.
The Problem
The core flaw in vibe coding is context—or the lack of it. Developers new to AI tooling often treat LLMs like omniscient wizards. In reality, these models are highly dependent on what you feed them. Without proper instructions, examples, rules, and documentation, even the best LLMs fail to deliver robust outputs.
A recent Codto study found that over 76% of developers lack confidence in shipping AI-generated code without human review. That stat alone should be a wake-up call. AI doesn’t replace engineering diligence; it magnifies the cost of skipping it.
From Prompting to Engineering
Prompt engineering taught us to tweak inputs for better results. Context engineering goes ten steps further. It's not just about phrasing a clever question—it's about feeding the model a full ecosystem of information:
Global rules and coding conventions
Example inputs and outputs
Existing documentation and APIs
Memory of past actions
Task breakdowns and planning docs
This is the difference between asking ChatGPT to "build a to-do app" and supplying it with a project blueprint, complete with tech stack, architecture, and test specs.
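To make that concrete, here is a minimal Python sketch of what assembling that ecosystem of context can look like before a model is ever called. The schema, field names, and example content are illustrative assumptions rather than any standard; the point is that rules, examples, docs, memory, and the task plan become explicit, inspectable inputs instead of an implicit vibe.

```python
# Illustrative sketch only: the ProjectContext schema and its field names are
# assumptions for this example, not a standard. It treats context as a
# first-class, inspectable input rather than an afterthought.
from dataclasses import dataclass, field

@dataclass
class ProjectContext:
    rules: list[str] = field(default_factory=list)     # global rules and conventions
    examples: list[str] = field(default_factory=list)  # example inputs and outputs
    docs: list[str] = field(default_factory=list)      # existing documentation and APIs
    memory: list[str] = field(default_factory=list)    # memory of past actions
    plan: list[str] = field(default_factory=list)      # task breakdown and planning notes

    def to_system_prompt(self) -> str:
        """Render every context source into one structured system prompt."""
        sections = [
            ("Global rules", self.rules),
            ("Examples", self.examples),
            ("Documentation", self.docs),
            ("Memory of past actions", self.memory),
            ("Task plan", self.plan),
        ]
        parts = []
        for title, items in sections:
            if items:
                parts.append(f"## {title}\n" + "\n".join(f"- {item}" for item in items))
        return "\n\n".join(parts)

ctx = ProjectContext(
    rules=["Use TypeScript with strict mode", "Every new module ships with unit tests"],
    docs=["Tasks are persisted through the existing TaskRepository class"],
    plan=["Scaffold the API route", "Add input validation", "Write tests"],
)

# The assembled messages can be passed to whichever chat-completion API you use.
messages = [
    {"role": "system", "content": ctx.to_system_prompt()},
    {"role": "user", "content": "Build the 'add task' endpoint for the to-do app."},
]
print(messages[0]["content"])
```

The same structure scales: swap the hard-coded lists for files read from the repo (rules from a conventions doc, the plan from a planning markdown), and the context stays reviewable in version control alongside the code it produces.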
Lessons From the Field
Early adopters in the agentic coding space—using tools like Claude Code and structured command templates—are showing what's possible. With well-defined context, AI can plan, scaffold, build, test, and iterate on projects with minimal supervision.
One standout example is the use of PRPs (Product Requirements Prompts). Think of these as PRDs, but purpose-built for AI. They're markdown files that specify the feature, edge cases, dependencies, and gotchas. When paired with global rule sets and examples, PRPs dramatically reduce hallucinations and increase output quality.
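Since PRP formats vary from team to team, the following is only a hypothetical skeleton, based on the sections named above (feature, edge cases, dependencies, gotchas) plus an added validation section; it is expressed as a Python template string, and all of the filled-in details are invented for illustration.

```python
# Hypothetical PRP skeleton: the section names follow the description above;
# the validation section and all example content are invented for illustration.
PRP_TEMPLATE = """\
# PRP: {feature_name}

## Feature
{feature}

## Edge cases
{edge_cases}

## Dependencies
{dependencies}

## Gotchas
{gotchas}

## Validation
{validation}
"""

prp = PRP_TEMPLATE.format(
    feature_name="Task due-date reminders",
    feature="Send an email reminder 24 hours before a task's due date.",
    edge_cases="- Task edited after the reminder is scheduled\n- User in a different timezone",
    dependencies="- Existing TaskRepository\n- Email credentials provided via environment variables",
    gotchas="- Scheduled jobs run in UTC; convert from the user's timezone",
    validation="- Unit tests for the scheduling logic\n- A dry-run mode that logs instead of sending",
)
print(prp)  # pair the rendered PRP with global rules and examples when prompting
```

The value is less in the exact headings than in forcing edge cases and gotchas to be written down before the model starts generating.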
Why This Matters for Engineers
Context engineering isn’t some niche skill. It’s becoming a core competency for modern developers. If you're a builder without a data science background, this is your bridge to wielding AI effectively.
It also reframes your role: from prompting a chatbot to architecting the environment in which AI can succeed. That shift is not only more powerful—it's more scalable, reliable, and future-proof.
Key Takeaways
Vibe coding was a helpful prototype phase, but it breaks under production pressure.
Context engineering treats instructions, examples, and structure as engineered resources.
Providing rich, structured context leads to dramatically better AI performance.
Tools like Claude Code and PRP templates are paving the way for agentic, end-to-end AI development.
As Andrej Karpathy put it, context engineering is the art of making a task plausibly solvable by an LLM. It’s not magic, but it might be the closest thing we have to it—if we give it the right inputs.