Peter Steinberger describes a transformative shift in software development efficiency, where the speed of delivery is primarily bound by the capabilities of the models themselves: "The amount of software I can create is now mostly limited by inference time and hard thinking."
This shift has normalized what was once considered exceptional performance: "Whereas in ~May I was amazed that some prompts produced code that worked out of the box, this is now my expectation. I can ship code now at a speed that seems unreal."
Insights
A key realization is that the complexity of most software is often overestimated: "Most software does not require hard thinking. Most apps shove data from one form to another..."
In this AI-driven landscape, the engineer's role shifts toward high-level architectural choices: "The important decisions these days are language/ecosystem and dependencies."
The development process becomes inherently exploratory, relying on interaction with the tangible product to guide its evolution: "My approach to building software is very iterative. ... Rarely do I have a complete picture of what I want in my head. ... I need to play with it, touch it, feel it, see it, that's how I evolve it."
This iterative journey is less a direct path than a gradual ascent toward a solution: "Building software is like walking up a mountain. ... it's imperfect, but eventually you get to where you need to be."
Practices
Starting with a command-line interface gives agents a reliable feedback loop: "whatever I wanna build, it starts as CLI. Agents can call it directly and verify output - closing the loop."
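A minimal sketch of this CLI-first pattern, using a hypothetical word-count tool (the name and interface are illustrative, not from the source). Because the result goes to stdout, an agent can run the command and compare the printed output against its expectation:

```python
import argparse


def count_words(text: str) -> int:
    """Count whitespace-separated words in the input text."""
    return len(text.split())


def main(argv=None) -> int:
    # An agent can invoke: python wordcount.py "some input text"
    # and verify the printed number to close the loop.
    parser = argparse.ArgumentParser(description="Count words in TEXT.")
    parser.add_argument("text", help="text to count words in")
    args = parser.parse_args(argv)
    print(count_words(args.text))
    return 0

# Wired as a script with: if __name__ == "__main__": raise SystemExit(main())
```

The same logic could later back a GUI or API, but the CLI form is what lets an agent exercise it directly without a human in the loop.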
Simplifying the git workflow helps reduce cognitive load, favoring a linear history: "I simply commit to main. ... I find the added cognitive load of having to think of different states in my projects unnecessary and prefer to evolve it linearly."
Leveraging existing codebases as context for new tasks significantly enhances efficiency: "I cross-reference projects all the time... I ask codex to look in ../project-folder and that's usually enough for it to infer from context where to look."
Maintaining up-to-date documentation is crucial for guiding the model: "I maintain docs for subsystems and features in a docs folder in each project, and use a script ... to force the model to read docs on certain topics."
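The source doesn't show the actual script, but one hypothetical sketch of the idea: given a topic keyword, list the files under a `docs/` folder that mention it, so the model can be instructed to read those files before touching the related code. The function name and the Markdown-only assumption are illustrative:

```python
from pathlib import Path


def docs_for_topic(topic: str, docs_dir: str = "docs") -> list[str]:
    """Return paths of Markdown docs whose filename or content mentions topic.

    Hypothetical helper illustrating the "force the model to read docs
    on certain topics" practice; not the author's actual script.
    """
    needle = topic.lower()
    matches = []
    root = Path(docs_dir)
    if not root.is_dir():
        return matches
    for path in sorted(root.rglob("*.md")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if needle in path.name.lower() or needle in text.lower():
            matches.append(str(path))
    return matches
```

The agent's prompt can then include something like "read every file returned by this script for topic X", turning the docs folder into enforced context rather than optional reference material.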
Structuring the codebase for AI agents rather than just humans is a strategic advantage: "I don't design codebases to be easy to navigate for me, I engineer them so agents can work in it efficiently."
Visual feedback remains a powerful tool for correction, especially in UI development: "If you show the model what's wrong, just a few words are enough to make it do what you want... I often type again, and many times I add images, especially when iterating on UI."