I’m Not Using AI

April 15, 2026

Today at lunch, a colleague asked “which AI tools are you using?” I was the only one at the table who said “none”. When you are outnumbered 7 to 1 on something, it is worth at least thinking through why that is, and this blog post is my attempt at doing so. The point is primarily for me to articulate why I’m stubbornly resisting something which smart people are excited about, but perhaps it is interesting to other people, and could perhaps even convince someone to not overly rely on AI.

I am starting this post off right in the middle, with a paragraph that comes later:

Everything I’ve said so far is only really relevant for big long term projects, but it doesn’t really apply to a throwaway script, right? No, not in my mind, and it comes back to this “reading is not understanding” point. In my career so far, I have benefited immensely from being a jack of all trades. Surfer exists because I am a hardware-adjacent person who knows how to write a GUI program. The architecture of Surfer that has enabled numerous cool features in the project is based on the Elm architecture which I only know because I tried a weird programming language for building websites 7 years ago. One of my PhD students is working with WebAssembly, and most of my knowledge of WebAssembly comes from setting up the Spade playground. Over the years, time and time again, I have learned things in side projects that have become invaluable in other projects. Many of these side projects would be perfect candidates for AI. But had I used AI, I would only have a few websites that I don’t understand fully, not the knowledge I gained from building them.

This is perhaps my best argument for myself not to use AI for writing code: I would not be where I am today had I relied heavily on AI over the past 10 years. But that’s not all, there is a meta-point here as well! When I set out to write this blog post, this was not a point I had considered at all. It came to me while writing, because writing forced me to turn things around in my mind, to really think things through in a way that an AI writing for me would never have done.

With that out of the way, let’s get back to how the post continued initially.

AI in Open Source

First, I put a no-AI contributions policy in place for Spade. You can read it if you want more motivation, but my main personal reason for it is what it means for code review.

The Rewards of Code Review

Spade is not a big project, but I have a few regular and occasional contributors, and even so, code review is now a significant amount of work for me. I’m not complaining, of course, this is a success, but it is only worth it because of the rewards that code review brings.

Of course, you get a shiny new feature, or a bug fix, or new documentation. Here it doesn’t matter much whether you review code from an AI or from a human; if the code is good enough to be accepted, it shouldn’t matter who wrote it, at least in principle.

However, code review has an important social aspect in open source. If a new contributor shows up, it is your chance to teach them about the project. You can point out things they may have missed that they can use to better implement their feature. You can teach them the preferred code style, how things are usually structured, etc. The reward for doing this is not just the feature under review; it is a potential future contributor who can come back later with bigger changes that are better aligned with the existing style.

Reviewing an AI contribution gives you the code, but the next time around, the AI will not have learned anything; it will not have a greater understanding of the code base.

Accountability

Actually no, I hate that word; it sounds like there is someone to blame. Let’s call it explainability.

When I want to make a change to my project, I often come across code that I can’t immediately understand, and I have to figure out why it is there. If I wrote the code myself, I hopefully remember, or can at least remind myself, why I wrote it like that. If a contributor wrote the code, I can ask them, and there is at least a chance that they will remember. If an AI wrote the code, there is fundamentally no way to get an answer; the reasoning, if there was one in the first place, is inaccessible after the fact.

Project Culture

Bryan Cantrill has given many great talks, but one of the most impactful to me is this one. In it, he argues that when choosing a programming language, you are not only choosing the features the language currently has; more importantly, you are choosing based on your values. I like Rust, but not just because it has memory safety, and enums, and cargo. I chose it because I value correctness: I prefer spending time fixing compiler errors over debugging issues at runtime. I chose it because I value tooling that is built not just to accomplish a task, but to be helpful to the user driving the tool.

The current set of Spade contributors leans towards AI skepticism. It therefore makes sense to have an AI-skeptical policy, because it signals to potential contributors where we stand. In fact, just last week a new contributor in the Discord said “no sweat, I can understand where you’re coming from. one of the things that made me more excited about contributing was the statement against LLM usage in the main README”. Not only is this a new contributor, which is always nice, but someone who is more likely to align with the rest of our values than a contributor who would be turned away by the policy.

I Am Not Using AI

Those are our arguments for the Spade project not to use AI; however, only the explainability point applies to me personally. If I’m being honest, the primary reason I am not using AI is probably that it simply does not sound fun to me. I like programming because it is a fun challenge to work through a problem and find a pleasing solution. I like understanding why things work, not just that they work.

With AI, you are not the one solving the problem, at least not directly. You choose the problem and check whether the solution is good, but to me, that is not where the fun lies. In addition, it looks to me like effective “vibe coding” today still requires programming, but that programming is writing markdown files in which you try your best to explain your project to your AI. This sounds like hell to me. You are trying to nudge a system which you do not fully understand into solving a problem which, at that point, you also do not fully understand.

Maintainability

I also question the long term maintainability of AI-written projects. From what I have seen, AI can produce lots of code really quickly. Making a +2000 line change to a relatively complex project is no problem. However, programming is not only about adding lines; long term maintenance of a project is about refactoring: identifying parts of the code that can be restructured, broken out into common functionality, etc. Refactoring is rarely fun, and to know when and where to refactor, it helps to feel the pain of adding features. When you get tired of repeating yourself, you should consider refactoring, not just to avoid repeating yourself, but to make future changes easier.

AI, from what I’ve seen looking over the shoulders of people using it, makes it easier to repeat yourself: now you can have 10 copies of a line of code before you go “I should really break this out” instead of just 3. But of course, when you have 10 copies, the problem is much harder to fix. Technical debt accumulates.

Patching a Symptom vs Solving a Problem

Last month, someone reported an issue in the Spade compiler: a code generation bug that caused Yosys to emit an error on the generated Verilog code. These are always hard to debug, both for users to work around and for us maintainers to fix. In this case though, the user who reported the issue also asked Claude to fix it and posted the result in the issue. Claude was spot on: it pinpointed the line where the issue was and suggested a correct fix. This is extremely impressive; based on debugging similar issues, I estimate it would have taken me at least an hour to narrow it down to the root cause, probably more.

Let me briefly explain the issue. During alias flattening, we have to run a replace_alias function on every value in the code to be generated. This is done in a few match statements, and the offending one was in the handling of registers. It looked like this:

  Register(reg) => {
      replace_alias(reg.value, aliases);
  }

The problem is that registers have more fields, namely a reset field which also requires a call to replace_alias on its values if present. This is the fix that Claude suggested:

  Register(reg) => {
      replace_alias(reg.value, aliases);
      if let Some((rst, value)) = reg.reset {
          replace_alias(rst, aliases);
          replace_alias(value, aliases);
      }
  }

And it is correct: it fixes the problem. However, with some understanding of the code, you will realize that more fields are missing. reg also has a clock field which needs alias replacement. So Claude would have patched this bug, but not a rarer bug that would have come back to bite us later.

There is also a deeper underlying issue here. If we add fields to the register construct, we have to remember to go into this obscure file and add a replace_alias call every time! I certainly won’t; clearly I already forgot about it for two fields. But with just a bit of refactoring we can prevent the issue in the future: list all the fields, and let the compiler complain when we miss some:

  Register(Register { value, reset, clock, _type: _ }) => {
      replace_alias(value, aliases);
      if let Some((rst, value)) = reset {
          replace_alias(rst, aliases);
          replace_alias(value, aliases);
      }
      replace_alias(clock, aliases);
  }

Now we have not only fixed the symptom, we have fixed the same bug in a slightly different condition, and we have made sure that we will avoid this bug in the future. Is this a fix an LLM would have come up with? I’m doubtful; Claude clearly didn’t, despite this pattern being everywhere in the Spade code base.
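To illustrate why the exhaustive pattern helps, here is a minimal, self-contained sketch with hypothetical stand-in types (the real Spade compiler types look different). Because the pattern names every field of the struct, adding a new field to `Register` makes the function stop compiling until the new field is either handled or explicitly ignored:

```rust
// Hypothetical stand-ins for the compiler's types, for illustration only.
struct Register {
    value: u32,
    reset: Option<(u32, u32)>,
    clock: u32,
}

// Collect every value that would need alias replacement.
fn collect_values(reg: Register, out: &mut Vec<u32>) {
    // Exhaustive destructuring: if a field is added to Register and not
    // listed here, the compiler rejects this pattern with a
    // "pattern does not mention field ..." error, pointing us right at
    // the spot that needs updating.
    let Register { value, reset, clock } = reg;
    out.push(value);
    if let Some((rst, rst_value)) = reset {
        out.push(rst);
        out.push(rst_value);
    }
    out.push(clock);
}

fn main() {
    let reg = Register { value: 1, reset: Some((2, 3)), clock: 4 };
    let mut seen = Vec::new();
    collect_values(reg, &mut seen);
    println!("{:?}", seen); // [1, 2, 3, 4]
}
```

The same trick works in a match arm, as in the snippet above: matching on the full struct pattern instead of a bare binding turns “I forgot a field” from a runtime bug into a compile error.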

Of course, another takeaway here is that perhaps LLMs should be used more for tracking down bugs, even if we don’t use their output in the code. I’ll get back to that.

Reading is Not Understanding

It is well known that in order to truly understand something, you cannot simply have it explained; you have to work through it, turn it around in your mind, find its limits, etc. Code review is having something explained: you read the code and understand it at a surface level, but you may miss small but important details, or large-scale problems that only become apparent when you take a step back from the individual lines.

When I write code myself, I not only get a shiny new feature, I also gain a new understanding of the problem. This is something I can take with me to my next feature, or perhaps use to start refactoring the rest of the code.

LLMs for Throwaway Code

Everything I’ve said so far is only really relevant for big long term projects, but it doesn’t really apply to a throwaway script, right? No, not in my mind, and it comes back to this “reading is not understanding” point. In my career so far, I have benefited immensely from being a jack of all trades. Surfer exists because I am a hardware-adjacent person who knows how to write a GUI program. The architecture of Surfer that has enabled numerous cool features in the project is based on the Elm architecture which I only know because I tried a weird programming language for building websites 7 years ago. One of my PhD students is working with WebAssembly, and most of my knowledge of WebAssembly comes from setting up the Spade playground. Over the years, time and time again, I have learned things in side projects that have become invaluable in other projects. Many of these side projects would be perfect candidates for AI. But had I used AI, I would only have a few websites that I don’t understand fully, not the knowledge I gained from building them.

Vendor Lock In

I don’t like proprietary software. I do not like companies; their values can shift in a way that humans’ rarely do. Time and time again I’ve been close to giving a company the benefit of the doubt and buying into a product I disagree with, only to be proven right a while later. I almost got an Amazon Alexa, and a few months later stories came out that they stored far more data than they claimed. Google is locking down Android more and more with every release.

AI is expensive; it is currently subsidized by hype and by companies trying to expand and take over the market. When the hype dies down, things will become more expensive and more locked down. When that happens, I do not want to be sitting on a code base that can only be worked on effectively with the help of AI.

Where AI Can Be Useful

Everything I’ve said so far has been about AI producing code, but the same applies to text. There is a very good reason for me to write this post with my own hand and brain: it is my excuse to think things through!

However, that bug Claude tracked down was clearly a case where it was faster than I would have been, and the debugging process itself would probably not have contributed much to my understanding of the project. I am much more open to using AI for finding bugs than for producing code; perhaps I will experiment more with it.

At the end of writing my thesis, I ran the document through Microsoft Copilot several times, asking it to identify obvious issues with the language. It worked! It found a bunch of typos, grammatical errors, awkward wordings, etc. that I would have missed even if I had had time to read through the 200 page thesis 10 times in the final weeks before the deadline. My prompt for this task was deliberate, however: “point out issues”, not “fix the issues”. I used it as a tool to scan the text for issues, not to reformulate anything.

I have been considering experimenting with using AI to notify me of outdated documentation for Spade. We regularly have confused users who read some documentation and are surprised when the language has changed. I’m not reading through the docs on every release, so things like that do slip through, even though we now have some systems in place to prevent massive discrepancies. An AI could help here: we could give it a changelog, and it could suggest places in the documentation that might need changing.

All of these things fall into one category of problems: problems where something has to go through large chunks of code or text to identify an issue, but where, once it has been identified, it is easy to verify whether the answer is correct. This is a category where I could see myself using AI, perhaps today, and certainly in a potential future where we have good open source LLMs that I can run on my own hardware.

Addendum: Societal Issues

Finally, I just want to briefly mention the more societal issues with AI that also factor into my decision.

AI was trained on unethically sourced data. In my mind, that also poisons the output, but it is something I’d probably be willing to overlook were it not for the other factors. That ship has sailed as well; there is no way for us to go back to a non-AI world, unfortunately.

The licensing of AI generated code is in question. I have heard whispers about AI output being un-copyrightable, which I believe in turn makes it incompatible with some open source licenses. I am not sure that is a gamble I want to take in my projects.

AI uses power. How much is up for debate, but certainly more than if it isn’t used. I personally do not believe much in individual action on climate issues, and I am wasteful in other ways. So while I could stand on my high horse and proudly say “I don’t use AI because I care about the environment”, I think I’d quickly get off that high horse if it were the only remaining issue with AI :)