This Time Self-Hosted

So I Started Playing With Claude Code

I know it’s getting old having to preamble this, but what you’re about to read is my own personal opinion, does not represent my employer, my employer has had no preview of what I’m writing, etc.

Before someone tries putting words in my mouth (or as it turns out, on my blog), no article you’ve read or you’ll ever read on my blog is AI generated – I’m using the term “AI” from here on out even though I disagree with the naming, but I reserve my pedantry for “VPN”, you must choose your battles – and there will be no intentional AI-generated illustrations, you’ll either get nothing, or something I paid an illustrator for. I specified “intentional” here because honestly, if I’m going to commission more illustrations in the future, I have no idea who is going to be commissioned, and much as I can spot the obvious AI generation, who knows what, for example, fiverr could end up throwing at me.

And, if anyone tries to suggest that I’m using AI to write because of the presence of em- and en-dashes throughout my writing, I’m happy for them to stop reading what I write — logic and comprehension wouldn’t be their strong suits, given that it’s a silly meme and I’ve been using these for longer than some of them have been alive, and my using them on this blog predates LLMs being invented.

If you were to look through my pictures, some metadata might flag “AI” because, I believe, Lightroom metadata-tags images where you use any of their so-called AI tools — I can tell you already that pretty much the only tool I use from those is the content-aware masking, to save myself the time of very carefully selecting the objects whose parameters I want to change. I have used their content-aware fill before, but that was before they started putting the word AI on anything that is even vaguely assistive.

But, I’m not going to swear off using AI tooling for software development, so it is possible that, in the future, some of my published open source projects will have been (partially) edited through an AI tool such as GitHub Copilot or Claude Code. And I know this will likely lose me some readers, and even gain me some outright enemies — it’s not the first time that taking a moderate approach has done so, and it won’t be the last.

I have been known to change my mind, and not to be a strong opponent of leveraging even proprietary tools to achieve a goal — when GitHub was launched, I wasn’t a fan. I changed my mind when I found that read-only Git repositories are hard to use, and when Microsoft acquired it, I spelled out my stance on the topic.

So, even though I had not looked into them outside my bubble for the past couple of years, I decided to spend some time last December to see whether those tools are at a point of being useful, to me in particular. And for the most part, the answer appears to be “yes, with many caveats.”

Before diving into the more technical details, I want to at least acknowledge that many people have an ethical issue with the usage of AI even for the purpose of coding — and I personally take a slightly different view, which shouldn’t surprise long-term readers of this blog. Leaving aside the resource usage of AI in general (a topic on which I’m just not able to have a clear conversation, as it’s too far outside of my expertise, and which I’d rather delegate to more experienced people), the other large ethical issue with generative AI is obviously the training material.

I empathise with artists – both visual and wordsmiths – on the, in my view, despicable treatment of non-public-domain content for the training of systems that is by now widespread. In pretty much the same way I do empathise with the same groups when it comes to content piracy.

I’m less inclined to clutch my pearls when it comes to training on source code that has been released in the open, no matter the license applied to it. The main complaint I see from a large number of Free Software developers is that the training of tools like Claude Code includes code released under the GNU GPL, asserting that this makes all of the generated code a derivative of a GPL’d codebase — this is preposterous for many reasons, though I’ll admit to being wrong if they ever successfully argue that in court.

GNU’s own Freedom One is «The freedom to study how the program works, and change it to make it do what you wish.» Making the source code available online for others to learn is fundamental to Free Software even more than being able to redistribute modified copies of the software. And very possibly this is what made me so interested in Free Software to begin with — the ability to know how things are implemented even though I have no intention or ability to improve them or modify them.

Looking at code written by others to improve one’s skills is something that humans have been doing ever since source could be shared in the first place. While we frown upon directly copying source code without attribution, or against the terms of its source code license, we don’t usually expect that, if you learn how to reverse a doubly-linked list from one project, you can’t implement it yourself in another without attributing it to wherever you read it.

There is an obvious bar at which point you’re copying, rather than just being inspired, but even when reverse engineering something, there’s an idea of separation between reading the code that does something, documenting it, and reimplementing it. While you can question whether the person reading the code and reimplementing it anew might be copying it tout court, you wouldn’t extend that suspicion to someone who reads the description of how it should work and implements it without knowledge of the original. If that code looks an awful lot like the original, more often than not that’s because there are no two ways to implement it, and there’s not much you can do about that.

Which is not to say that everything is already perfectly fine — you need to make sure that the output generated by a tool is not regurgitating its input as-is, but I don’t see the same ethical issues with using generative AI for coding as I see for art — though that is obviously from my point of view, seeing coding as a means to an end, and not strictly a creative expression. I know not everyone agrees.

I am not, at this point, going to be “vibe coding” anything for others to use — not that most of my Free Software output in the last few years has been useful to anyone else but me. I have, though, started using Claude Code to assist in sorting out a few things on existing projects.

For example, I have been using a rewritten Home Assistant component for Sony Songpal. I wrote it a couple of years back, in London, when we bought a Sony HT-A7000 soundbar and I wanted to have more control than the very basic settings the original component had, but I hit a bit of a roadblock trying to get this merged upstream — though I did manage to get some of the fixes into the supporting library. I knew that a lot had changed in the APIs that I was using for that component, and it would have stopped working with one of the early 2026 releases.

As I have previously noted, last year I was busy, and there was no way for me to go ahead and fix up everything that’s wrong with my custom component. So I set up a new virtual machine, checked out the latest Home Assistant repository, copied over my existing custom component from the running instance, and asked Claude Code to backport my changes into Home Assistant, applying the various API improvements that HA had made.

Was it perfect? No. Did it work the first try? Also, no. But it did give me something to start with, saving me a good couple of hours of reading through the ChangeLog and figuring out all of the small things that had changed and improved in the meantime. I’m sure that if I had explicitly listed the API improvements from the HA Developers blog, it would have done an even better job. But it did make the difference between me continuing to procrastinate on getting this working and having it almost completely done. It might even allow me to finally get this merged, once I have a moment to figure out whether submitting an AI-assisted pull request is okay with Home Assistant.

The other Home Assistant adjacent task I threw to the GPUs was related to the EcoFlow battery I wrote about previously. As I said in the post, there’s not a lot of APIs available for this type of battery, because they’re primarily targeted as portable power stations, or (in particular the version I have) as a UPS for a single computer (which would then have a local connection, but… let’s not get into how buggy that is.) But there is a non-official HACS integration for EcoFlow — which doesn’t (or didn’t, at the time of writing) support the Delta 3 Plus that I own.

My expectation was that something was different in the way the API was implemented for this model, so I gave Claude a checkout of the existing HACS integration and the API keys from the EcoFlow developer portal, and asked it to figure out how to adapt those APIs for the Delta 3 Plus. Pretty much free rein on how to get there.

After half an hour or so, its attempts led to the answer that the Delta 3 Plus is not supported by the public API: could it get the app username and password instead, for the reverse-engineered API that the integration also uses? Sure, why not. A bunch of attempts later, it established that, from the integration’s point of view, the Delta 3 Plus is actually just a Delta 3. So no change was needed to the integration, except maybe some documentation. Again, I have not sent those changes as a pull request until I can confirm they would be welcome.

If you were to wonder why Linux developers won’t “ban” the use of AI — it’s also things like these. I have technically used AI in here to help my development, even though I have used exactly zero lines of code generated by the AI. The aid, in Computer-Aided Software Engineering, is not always the codegen itself.

Where I did get Claude to generate a lot of code for me was setting up my infrastructure — as you know, I’m a regretful self-hoster of this blog. After years hoping not to have to be a sysadmin of my own personal infra, I did end up with three production vservers, and one NAS at home with at least one permanent Virtual Machine. I don’t like this, quite honestly — none of these are “labs” where I would be coming up with creative solutions; these are unfortunately servers where I have important, personal data and that I rely on to keep working well.

For way too long, these servers were managed in a very bespoke way: Docker Compose at best, a little bit of monitoring via Prometheus, and a lot of hope that I didn’t screw things up too badly. But it was getting to me that I was putting a lot of hope in them. It had been a long time since I did this kind of work for a living and could afford to spend time setting up Puppet, so I had to find a better solution.

The better solution, for the time being, turned out to be Ansible. But I knew nothing of Ansible, and it would likely have taken me weeks just to figure out what I could even do with it. Instead I threw this one onto the pile, too. As it turns out, LLMs are faster than me at synthesising information out of documentation and source code, and at generating possibly thousands of lines of incomprehensible YAML.
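To give a flavour of the kind of YAML involved, here is a minimal sketch of the sort of playbook that manages a Docker Compose stack on a server; the host group, paths, and project name are hypothetical and not my actual setup:

```yaml
# Hypothetical sketch: keep a Docker Compose stack deployed on a vserver.
# Assumes the community.docker collection is installed on the controller.
- name: Deploy blog stack
  hosts: webservers
  become: true
  tasks:
    - name: Ensure the compose project directory exists
      ansible.builtin.file:
        path: /srv/blog
        state: directory
        mode: "0755"

    - name: Copy the compose file over
      ansible.builtin.copy:
        src: files/docker-compose.yml
        dest: /srv/blog/docker-compose.yml
      notify: Restart stack

    - name: Ensure the stack is up
      community.docker.docker_compose_v2:
        project_src: /srv/blog
        state: present

  handlers:
    - name: Restart stack
      community.docker.docker_compose_v2:
        project_src: /srv/blog
        state: restarted
```

The value of expressing even this much declaratively is that re-running the playbook is idempotent: nothing changes unless the compose file does, at which point the handler restarts the stack.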

There might very well be bad ideas in the way those playbooks were generated, but they work, which is more than I can say for anything I had before. They are also never going to see the light of day for anyone else, so I don’t have to think very hard about whether this generated code is creative enough for copyright assertions (and still, I would say the answer is “it isn’t” — there aren’t that many ways to set up the same set of commands!)

All of this does not turn me into an absolute believer, let me be clear. By personal experience, there’s a lot more to software engineering than writing the code. And there are a lot of risks in taking the output of LLMs without understanding it, or the higher-level engineering requirements. But I do think that we’re at a turning point where AI-assisted, or AI-guided, development will become the norm about as much as parametric CAD has completely replaced hand-drawn blueprints. It might not be as creative, or as impressive, but this is the world we live in after all.

I also want to make one more thing clear, which a lot of people seem to be missing. CASE was never about removing the human from the loop: AI-generated code might get you 80% (or even 90%, if trivial) of the way to where you want to be, but that does not mean it will always work right. But if it shortens the readiness loop by days or weeks, well…
