
One of the big trends in software in recent years has been the push towards AI-assisted coding. Large language models (LLMs) are now powerful enough to solve relatively complex software development tasks autonomously, with some degree of accuracy. Experienced engineers can use them to become more efficient, and seemingly, users without any programming skill can create their own applications through “vibe coding”. LLMs are an impressive innovation, but many people also have concerns about them. We would like to state our point of view on this emerging space.

First, LLMs are by nature stochastic and can’t be relied on to produce precise and reliable results at scale. Not all software needs to be correct, but when correctness is essential, as it is for fundamental scientific tools (such as Slacken and Discount), LLM coding is not enough by itself. Humans must check and edit the LLM’s output, particularly as systems scale up and evolve over time. For any serious system, the fact that people understand the technology is an asset, not a liability, as people are capable of trust and responsibility.

We think that LLMs can potentially accelerate some technical work, but unless the project is very casual, this can only happen as part of a thoughtful software development process. They should be used alongside unit tests, documentation, careful use of types, version control, code review, and other established practices that developers have relied on for decades. What used to be helpful for communicating with team members is now also helpful for directing the AI. If AI is to be used in a project, the best way to manage it is through these long-established fundamentals.

Many issues remain unresolved even if quality can be managed. At the moment, the major LLMs used for AI-assisted coding are unfortunately opaque and controlled by a handful of companies. There are serious questions about their efficiency, their environmental impact, and the ethics of how their training data was obtained. We would like to see a future with community-owned, efficient models whose training data is transparently sourced, rather than the centralised ownership we see today. Additionally, AI must be used responsibly and intentionally: open source projects are currently being overwhelmed by AI-generated PR submissions, which takes a heavy toll on the energy of project maintainers.

These are just a few of the issues surrounding the topic. The Rust project has compiled a useful overview of AI-related concerns, which we found helpful for navigating current issues in this space. There are also many other concerns surrounding the application of GenAI to areas such as art and writing, which we will not get into here.

We believe that the best software tools are made by humans, for humans, and that this will remain the essence of our field, even as AI enters the picture.
