The Future of AI Coding
Prediction: By 2026, AI assistants will be an invaluable part of the workflow of most programmers. I don’t think we will fully embrace vibe coding, but I do believe AI will automate most mechanical editing tasks.
If you think about what we do every day as programmers, a lot of it is repetitive and predictable, which makes it suitable for automation. For example, say you need to make a GET request to a REST API and deserialise the result; instead of going through the motions yourself, you can feed a one-line prompt to the LLM and let it make the edit for you.
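To make that concrete, here is a minimal sketch (in Python, with a made-up endpoint and field names) of the kind of boilerplate such a prompt would produce:

```python
# Hypothetical example: fetch a user from a REST API and deserialise the
# JSON response into a typed object. The URL and fields are made up.
import json
import urllib.request
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str

def fetch_user(user_id: int) -> User:
    url = f"https://api.example.com/users/{user_id}"  # placeholder endpoint
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    # Pick out only the fields we care about.
    return User(id=payload["id"], name=payload["name"], email=payload["email"])
```

None of this is difficult, it is just mechanical, which is exactly why it is a good target for automation.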
Will AI replace programmers?
Code will still be around, and by extension, so will programmers. Human language is simply too ambiguous to serve as a specification for a system.
If you think about it, people reach for formal notation all the time, even when it’s not strictly required. For example, two mathematicians working on a proof will probably do most of the work in mathematical notation on a whiteboard, even though they could theoretically communicate in natural language alone. (I think Dijkstra’s predictions about natural-language programming still hold¹)
Local-first
Given that programmers will adopt AI-assisted coding, what will it look like in practice? I think (and hope) that we will move away from the proprietary and centralised solutions we have now (Copilot, Claude Code, etc) towards open and local-first models.
While these models will need to be smaller and less powerful in order to run on a desktop, they have some important advantages:
- No need to send sensitive information to a third party
- Local models can be fine-tuned on the existing code base and coding style of the team
- Developers already need to do other computationally expensive tasks on their work laptops (such as opening a PDF in Adobe Acrobat), so the GPU, CPU and RAM are already there.
Another possibility is “on-prem” models hosted on company servers; this would still allow bespoke fine-tuning and use with sensitive information, while making it possible to run larger models than a laptop or desktop workstation can handle.
State of the art
The models you can run locally today are much weaker than the best hosted ones. If you still want to try them, here are some tools that work with local models (a short sketch of querying a local model directly follows the list):
- Aider is a CLI application that lets you prompt code changes to your repository
- Continue is an alternative to Copilot / Cursor. There is a guide to setting it up with Ollama on the Ollama blog
- Cline
- Roo
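All of these tools ultimately just send prompts to a model server running on your machine. As a rough illustration, here is a minimal sketch of calling a local model through Ollama’s HTTP API (this assumes Ollama is running on its default port and that you have already pulled a code model; the model name below is just an example):

```python
# Sketch: prompt a locally hosted model via Ollama's /api/generate endpoint.
# Assumes `ollama serve` is running on the default port (11434) and the model
# has been pulled beforehand, e.g. `ollama pull qwen2.5-coder`.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "qwen2.5-coder") -> str:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["response"]

print(ask_local_model("Write a Python function that parses an ISO 8601 date."))
```

Nothing leaves your machine, which is the whole point of the local-first approach.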