Why the Best Code May Be the Code You Never Write

September and the back-to-school season remind us of new beginnings and the excitement of learning. In that spirit, we are delighted to share updates with the Azlin Software community.

Our Co-CEOs Annalise and Zoe will be attending the ASRM conference next month as we continue exploring the fertility software space. We look forward to connecting with many of you there! Shortly after, the full Azlin Software team will gather in Austin.

In the spirit of starting a new school year, we are highlighting a post from Azlin Software's Head of Engineering, Evan Dragic, on AI coding tools and best practices. Evan holds dual degrees from Stanford University and brings over a decade of experience in software engineering. He shares a lesson that may at first seem counterintuitive: in the world of AI-assisted coding, the best code is often the code that never needed to exist at all!

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should!” is Dr. Ian Malcolm’s famous condemnation of Jurassic Park. Companies leveraging generative AI to write software aren’t cloning deadly Velociraptors, but they can be exposing themselves to complexities and future costs that increase the likelihood of their software products going extinct.

Even as creating code becomes easier, the best, most future-proof code remains no code at all. New code is information that must be managed to keep systems working correctly. The less code a system needs in order to work, the easier that system is to maintain and to extend. Almost all code we write will eventually be rewritten, deleted, or replaced, and revising existing code is usually more difficult than writing new code. There are myriad new tools to help with creating and revising software, but it's vital to take steps to mitigate complexity and promote maintainability when using them.

AI-Assisted Development: The Vibes Are Good?

Generative AI tools can produce and revise large swaths of code with extreme speed. Huge companies like Microsoft and Google have proclaimed that over a quarter of their code is created by AI. There's been great excitement for "vibe-coding" tools that promise to build websites and applications quickly from natural language prompts. For example, at Azlin Software, we use code completion and targeted refactoring assistance through Cursor during development of our internal systems. We've found Lovable a great tool for creating shareable mocks. Similar benefits are available from developer tools like Claude Code and other coding agent offerings like Replit or Bolt.

Vibe-coding tools let you express needs in natural language and have software produced to satisfy them, but this comes with tradeoffs. If the best code is no code at all, such a "no code" approach feels much simpler, and being able to program at the abstracted level of intent is immensely empowering! But abstractions have limits, and they are leaky. They mask complexity, but sometimes addressing underlying issues requires deeper understanding. When complexity "leaks" past that simplification, the simpler framing stops helping you.

As an example of this leakiness: if your AI-generated code is repeatedly crashing or erroring despite several attempts at requesting corrections, you likely have to read and reason about the code itself. The original tweet that defined vibe-coding proposed overcoming these moments by simply starting again when the AI was stuck, but even that advice was in the context of "throwaway weekend projects." In a complex system, a full rewrite isn't a viable solution; nor is blindly iterating over and over. Relying on instruction rather than code can break down even when AI is only being used to augment development.

Developer-assisting tools like Claude Code and Cursor provide a broad range of ways to use AI to help write code. You can use them to:

  • Autocomplete code you are typing

  • Ask questions about sections of existing code

  • Fix specific errors in pieces of code

  • Author new files or functions following existing patterns

  • Fully implement new functionality or features

The farther down the list you go, the closer to vibe-coding you get. Trading mental effort for speed can still create hidden complexity. This is especially true when the AI has been asked to do something because a developer is unfamiliar with a good way to approach the solution.

Up to Spec: The Importance of Artifacts

The emerging, more formalized process of spec-driven development steers AI coding via rigorous documentation in tools like Amazon's Kiro. You could think of this as a way to "program" vibe-coding: structure and process codify intent and ensure a bigger-picture understanding of what an AI should be helping create. It doesn't eliminate leakiness. Still, spec-driven development correctly emphasizes the importance of the artifacts that go into the creation of code.

The specifications that launch work, implementation designs that precede the creation of code, and even review comments on the code itself are all very useful. Collectively, these artifacts ensure a more correct and useful initial implementation. 

They also make it easier to re-understand the code they helped produce. We often use version control to find relevant changes when debugging issues, and captured code review conversations help us regain context on areas of complexity. When code is generated via instruction to an LLM, that documentation and feedback can remain buried in an individual chat context, hindering future investigation. When using generative tools to create software, pay attention to the degree to which those artifacts and that feedback are left undiscoverable for future maintainers.
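As a sketch of what such a discoverable artifact can look like, here is a hypothetical specification excerpt; the feature, names, and criteria are invented purely for illustration:

```markdown
# Spec: Bulk CSV export for clinic reports (hypothetical example)

## Intent
Clinic admins need to export filtered report lists for audits.

## Requirements
- Export respects the currently applied on-screen filters.
- Exports over 10,000 rows run asynchronously and are emailed.
- Column order matches the on-screen table.

## Out of scope
- PDF export; scheduled recurring exports.

## Acceptance criteria
- Given 50 filtered rows, the download completes in the browser.
- Given 20,000 rows, the user sees an "emailed when ready" notice.
```

Checked into the repository alongside the code it produced, a document like this stays findable for the next maintainer in a way a private chat transcript does not.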

Strategies for Future Success

Another concern is the tendency to accept AI-proposed changes as presented. Code that works and is already written is easy to say yes to! But if "it works" is the only evaluation, complexity and future time costs can become a big problem. To fix bugs and extend features, code will need to be re-understood, whether by a human or a machine, and there are limits to how self-documenting code can be. As newer LLM models have been released, they have tended to produce solutions with more lines of code and higher cyclomatic complexity (a metric for how difficult a piece of code is to understand and reason about). Large, complex changes are hard to review. They are also more difficult to read in the future, and more expensive to re-process through an AI model.
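To make cyclomatic complexity concrete, here is a small Python sketch (the shipping-fee scenario is invented for illustration): two equivalent functions, one branch-heavy and one table-driven. Roughly, complexity counts a function's independent decision points plus one, so the second version is easier to reason about, review, and extend.

```python
def fee_branchy(region: str, weight_kg: float) -> float:
    """Branch-heavy version: every region adds two more paths to reason about."""
    if region == "US":
        if weight_kg <= 1:
            return 5.0
        return 9.0
    elif region == "EU":
        if weight_kg <= 1:
            return 6.0
        return 11.0
    elif region == "APAC":
        if weight_kg <= 1:
            return 8.0
        return 14.0
    raise ValueError(f"unknown region: {region}")


# The same rules expressed as data: (light fee, heavy fee) per region.
FEES = {
    "US": (5.0, 9.0),
    "EU": (6.0, 11.0),
    "APAC": (8.0, 14.0),
}


def fee_table(region: str, weight_kg: float) -> float:
    """Table-driven version: two decision points, regardless of how many regions exist."""
    if region not in FEES:
        raise ValueError(f"unknown region: {region}")
    light, heavy = FEES[region]
    return light if weight_kg <= 1 else heavy
```

Both functions produce identical answers, but adding a fourth region to `fee_branchy` grows its branch count, while `fee_table` only grows its data.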

Keeping best practices for code review in mind can help guard against complexity. Prefer small change sets that do one thing: this makes the causes of bugs easier to identify and reduces the likelihood of inadvertent side effects. We have also found it useful to lean on AI coding assistants more heavily where there are clear models in the codebase; patterns for similar business logic or well-defined UI components help counter the tendency toward complexity. Companies like Coinbase have seen differences in AI adoption and impact across engineering domains that match our experience. Instruction files like a Claude.md or Cursor rules let you specify:

  • How to comment and document code

  • Code that AI agents are not allowed to alter

  • Conventions to follow

  • Proper use of internal modules and libraries

Such guardrails all help make the code an AI assistant produces easier to revisit in the future.
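As an illustration, a minimal instruction file covering those categories might look like the following; the project details, paths, and module names are hypothetical:

```markdown
# Claude.md (hypothetical example)

## Documentation
- Every exported function gets a comment explaining inputs and failure modes.

## Do not alter
- Files under `src/billing/` (regulated logic; changes require human review).

## Conventions
- Small, single-purpose change sets; no drive-by refactors.
- Follow the existing repository pattern for all data access.

## Internal modules
- Use the shared `ui/components` primitives instead of raw HTML elements.
```

A few lines like these, versioned alongside the code, nudge every AI-assisted change toward the patterns your team already maintains.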

With Great Power Doesn’t Necessarily Come Great Maintainability

Kernighan’s Law states “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?” Software created with LLM assistance or directly by LLMs will still need to be maintained, and thus re-understood (by people and supporting automation). Being mindful of that reality, and working to prepare for it, are necessary even in a world where crafting new code is easier than ever.


About Azlin Software

Azlin Software acquires and supercharges modern, niche software companies. We hold for the long term: supporting your team, product, and mission without ever planning to sell.

When partnering with business owners, we look for:

  • Shared values and a foundation of trust

  • Vertical software in a protected niche

  • Modern, cloud-native tech stack

  • $2-5 million in ARR with clear product-market fit

  • Profitable or close to breakeven

  • Highly recurring revenue and loyal customers

If you or someone in your network is thinking about selling or investing in their B2B software business, we’d welcome the chance to connect and learn more.

Received this newsletter from a friend? Click the button below to subscribe and keep up with our updates!