Preamble and disclaimer: I work at Meta, a company that has famously been working on AI for a long while. I do not, though, work on anything related to AI myself, I don’t understand most of the implementation detail, and I don’t have any of the prerequisite knowledge to be able to get into the dirt of it. What I write here is my own, personal opinion and does not represent the opinion of my employer or my colleagues. And while I formed my opinion within the bubble, everything I’m talking about here is public knowledge. I’m also going to reference work done by both my employer and other companies, including Microsoft, of which I own a minuscule amount of stock, which technically means I’m receiving a financial benefit from the work these companies are doing.
2023 will likely be remembered in the annals of Computer Science and Information Technology as the year we all pivoted to AI. This has, quite understandably, ruffled feathers, both because a lot of the AI discourse has been monopolized by terrible, quite annoying actors, and because of the pretty much continuous overselling of what LLMs and other AI-branded technologies can do. And yes, I call them AI-branded because I’m honestly not a True Believer and don’t think that anything we have seen up to now is what I would call Artificial Intelligence.
On the other hand, there is an undercurrent of utility to at least some of the tools and technologies developed during this push, starting from the Large Language Models (LLMs.) For those who have at least a rudimentary understanding of how these fit together, it’s obvious that there isn’t anything magical about these models, and that they cannot provide you with absolute truths: they can at best build some “understanding” (in quotes, because it’s not really understanding; it’s closer to a computer vision mapping) of a text based on the connections it can pattern match in its own corpus. It’s a bit more complex than my musing about crosswords, but the idea is similar.
Please note that I’m explicitly excluding from my discussion here the idea of “AI art”, first because my friend Jürgen already wrote about it, and second because personally I’m much more interested in having art with artists behind it. Maybe generative solutions can be used as a basis for making art, in a similar way to how I talk about them here as a support for writing software, but that is not my field, so I will choose not to opine on it.
With this in mind, an LLM could be used on a set of texts as a search function on steroids (or possibly LSD, depending on how you look at it.) I have considered feeding one of the LLMs the content of my blog and asking it to cluster posts based on broad topics, as a starting point for me to organize some of my past work, though this is unlikely to happen in the short term. Do note here that I’m not going to “let AI do the work for me” — it’s very clearly an assistive technology, not a replacement technology.
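To make the idea a bit more concrete, here is a minimal sketch of that kind of topic clustering. It deliberately does not use an LLM at all: plain bag-of-words cosine similarity stands in for model embeddings, and the post titles and texts are made up for illustration. In a real pipeline you would swap the `vectorize` function for embeddings from whatever model you have access to, keeping the clustering step the same.

```python
import math
from collections import Counter

def vectorize(text):
    # Lowercased bag-of-words counts, as a crude stand-in for an
    # LLM-derived embedding of the post.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(posts, threshold=0.3):
    # Greedy single-pass clustering: attach each post to the first
    # cluster whose seed post is similar enough, else start a new one.
    clusters = []
    for title, text in posts:
        vec = vectorize(text)
        for seed_vec, members in clusters:
            if cosine(seed_vec, vec) >= threshold:
                members.append(title)
                break
        else:
            clusters.append((vec, [title]))
    return [members for _, members in clusters]

# Hypothetical posts standing in for a blog archive.
posts = [
    ("ruby-tips", "ruby gems and bundler tips for packaging ruby code"),
    ("glucometers", "reverse engineering glucometers and diabetes devices"),
    ("more-ruby", "packaging ruby gems with bundler"),
]
print(cluster(posts))
```

The threshold is a tuning knob rather than anything principled; with real embeddings you would likely pick it empirically, or use a proper clustering algorithm instead of this greedy pass.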
There is something else: LLMs that are specifically trained on source code to assist programmers in writing new software. Full disclosure: my employer, Meta, has released Code Llama, which is pretty much that, an LLM trained on source code and designed to support software engineers. They are not alone: GitHub released Copilot years ago by now, raising a significant number of eyebrows for the way they developed and trained their models on source code that might or might not have been permissively licensed. To be clear, I have absolutely no idea what Code Llama has been trained on, so I make no statement on whether this applies to it or not.
Now, I care about licensing, so obviously I care about LLMs’ training data being appropriately sourced, and I’m not going to ignore all of the controversies around LLMs in general, and code-trained LLMs in particular. But due to the risk of a perceived conflict of interest (have you seen the size of the preamble I had to put on this post?) I’m not going to spend time opining on this. You can read here why.
I have made use of purposefully trained LLMs to contribute changes to already established projects within my organization, without having context on the styleguide or the organization of the code, and they provided significant help by allowing me not to spend time figuring out a number of common patterns for error handling and boilerplate. Now, it is true that something like this in the hands of more inexperienced developers could be a terrible waste of time, particularly if used “against” open source projects whose maintainers have limited time, but in this case, knowing what I needed to do, the LLM helped me not to worry too much about the how.
I believe that abstractions that make it possible for more people to learn to build software, particularly if they have a use for building very specific tools for their own niche of expertise, are good tools. One thing that I see people always afraid of is that opening the door for more people means opening the door for people with less talent and… well, that will almost certainly be the case. Democratization of processes and solutions is always a trade-off, in the same way that selling routers might be.
Personally, I have already shared that I believe the main problem is that education is too often focused only on the general theory, with little discussion of what the effects of software are in the real world. LLMs don’t necessarily improve this situation, but I also don’t think they are making it exceedingly worse — if anything, they may allow people who have the field-specific knowledge to write software themselves, rather than having to commission it, or not even realizing that it would be possible!
But yes, to be clear, I’m not here thinking that LLMs, code-specific or not, are miraculously going to make the world better (or worse). I don’t believe they are going to make programmers (or lawyers) obsolete and redundant. LLMs (code and not) will introduce a whole different set of problems, new ones that we’re not used to dealing with. And while I cannot know for a fact that these won’t be worse than the problems we have right now, I’m also not sure they will be more tractable.
From what I can see, I could probably enjoy having an LLM help me contribute code for large projects, particularly those that come with a lot of guidelines I wouldn’t be able to “make mine” in my limited spare time, such as Home Assistant and Zephyr — particularly if the training could be done on the project, for the project. I can also foresee the possibility of LLMs used as code review assistants to point out failures that are too complex to write tests for normally — who’s going to be the first one to integrate a “falsehoods checker” that can be used against an arbitrary codebase to ensure that it does not fall trap of one of the many falsehoods programmers believe about real-life stuff?
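As a toy illustration of what such a “falsehoods checker” could flag, here is a sketch that pattern-matches a couple of well-known falsehoods about time and names. Everything here is invented for the example — the rules, messages, and snippet are hypothetical — and a real LLM-backed reviewer would reason about the code in context rather than matching regular expressions.

```python
import re

# A couple of made-up rules for common falsehoods programmers believe.
FALSEHOOD_RULES = [
    (re.compile(r"\b365\b"),
     "assumes every year has 365 days (leap years!)"),
    (re.compile(r"\b86400\b"),
     "assumes every day has 86400 seconds (leap seconds, DST)"),
    (re.compile(r"split\(['\"] ['\"]\)\s*\[0\]"),
     "assumes a first name can be split off by a space"),
]

def check(source):
    # Scan each line of the source against every rule, collecting
    # (line number, message) pairs for anything that matches.
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in FALSEHOOD_RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

# A hypothetical snippet under review.
snippet = "age_days = years * 365\nfirst = name.split(' ')[0]\n"
for lineno, message in check(snippet):
    print(f"line {lineno}: {message}")
```

The point of the sketch is only the shape of the workflow — a reviewer-shaped tool that annotates lines with “this is probably a falsehood” — not the detection mechanism, which is exactly where an LLM could do better than regexes ever will.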
In general, most of the critiques of LLMs as tools – quite different from the critiques of LLMs as messiahs or over-reliance on both the term AI and the ideas behind it – appear to me like they’re just critiques of CASE (Computer-Aided Software Engineering) tools in general, something that I obviously cannot get behind.
Some of those appeal to the gut feeling, and I’m roughly quoting from multiple posts I’ve seen, that “programmers don’t want to repeat themselves” and would avoid boilerplate, which is where the whole DRY paradigm comes from. Unfortunately, unless you have tried to apply this principle, you are likely not aware of how messy it is to handle (and debug) multiple levels of code generators! And let me not get started on the 4GL crowd. A (healthy) level of repetition can save a significant amount of debugging and headscratching when things don’t work out.
Other complaints come from people who suggest that the only correct way to do software engineering is to do it without any assistance from better tools, by knowing everything, and that’s just silly in my opinion. And to make it more obvious why I find it silly, I’m going to compare our profession to some of the hobby maker videos you can find on YouTube and similar sites, expecting that at least a number of my readers have a similar tendency to binge watch those kinds of videos (rather than, say, those of viral shitposters), no matter whether they are about electronics, furniture, or household hardware restoration.
Given any one topic, you can find videos out there at any level of expertise, from newcomers who just started recording themselves doing something to pass the time, to hobbyists who have learned mainly from other streamers, to professionals who have undergone training and worked in the field for a very long time. What makes the separation between the first two groups very clear, though, is what tools they use.
Newcomers can start with a few cheap, no-brand tools and still complete tasks, even if not quite as easily, or not quite as exactly, as a professional would have. But the hobbyists, and most of the professionals, use more tools, often more expensive, in many cases bulky, and requiring proper training not to hurt yourself using them (high-power woodworking tools, high-temperature soldering stations and reflow ovens, etc…) — but by using these tools, their results are often significantly better both in terms of precision and in terms of time to completion.
It would be a mistake to suggest that you need those expensive and bulky tools to get good results — after all, you can definitely find a ton of maker videos where the creator is explicitly avoiding tools like that (have you ever seen the beautiful furniture from Epic Upcycling?)
Another mistake would be to suggest that what makes the difference between the first two groups is solely the tools! A bunch of bulky, expensive, possibly dangerous, certainly complex tools doesn’t make for a marked quality improvement when used by someone who doesn’t have the first clue about how they all fit together and what should be used for what!
But good tools in the hands of someone with even hobbyist-level expertise can make a significant difference, raising their game quite significantly! For a laugh, consider the early and more recent videos from Flipping Drawers — he’s not been shy to point out when he got himself new and better tools, and what difference they made for him.
I could also go on and refer again to one of my favourite books, Every Tool’s A Hammer by Adam Savage, as a lot of the commentary he provides is about tools for makers, and is a good parallel to CASE and software development, both in terms of engineering and art.
And this is where, in my view, CASE tools, and thus code-oriented LLMs, fit in software engineering. They can be a colossal waste of time in the hands of someone without enough expertise – though thankfully not as dangerous for life and limb as a table saw – but they can have a huge impact for people who know what they need to do.
I’m looking forward, in a few years’ time, to reflecting back on how much our profession has improved thanks to the existence of these tools, while also knowing that there will be debris all over the place from “AI” business ideas that totally failed to get off the ground. For now, I’m going to keep an open mind — and write up any experience that I am allowed to, with the understanding that bubbles have a rather larger selection of tools in this particular space.