
After The Streams — Conclusion From My Pawsome Players Experiment

A few weeks ago I announced my intention to take part in the Cats Protection fundraiser Pawsome Players. I followed through with seven daily streams on Twitch (which you can find archived on YouTube). I thought I would at least write a few words about the experience, and draw some conclusions about what worked, what didn’t, and what to expect in the future.

But before I dive into dissecting the streams, I want to thank those who joined me and donated. We reached £290 worth of donations for Cats Protection, which is no small feat. Thank you, all!

Motivations

There are two separate motivations to look at when talking about this: my motivation for running a fundraiser for Cats Protection, and my motivation for doing streams at all. Those need to be kept separate right away.

As for the choice of charity – my wife and I both love cats and kittens; we’re childfree cat people. The week happened to culminate in my wife’s birthday, so in a way it was part of my present for her. In addition to that, I’m honestly scared for the kittens that were adopted at the beginning of the lockdown and might now be abandoned as the lockdown eases.

While adopting a kitten is an awesome thing for humans to do, it is also a commitment. I am afraid for those who might not be able to take this commitment fully to heart, and might find themselves abandoning their furry family member once travel resumes and they are no longer stuck at home for months on end.

I also think that Cats Protection, like most (though not all) software non-profit organizations, is a perfectly reasonable charity to receive disposable funds. Not to diminish the importance and effort of fundraisers and donations to bigger, important causes, but it does raise my eyebrow when I see that the NHS needs charitable contributions to be funded: that’s a task I expect the government, which takes my tax money, to be looking after!

Then there’s the motivation for me doing livestreams at all: it’s not like I’m a particularly entertaining host, or that I have ever considered a career in entertainment. But 2020 was weird, particularly when changing employer, and it became significantly more important to be able to communicate, through a microphone, a camera and a screen, the type of information I would usually have communicated in a meeting room with a large whiteboard and a few colour markers. So I have started looking at ways to convey information that doesn’t otherwise fit written form, because it’s either extemporaneous or requires visual feedback.

When I decided to try the first livestream I actually used a real whiteboard, and then I tried the same with Microsoft’s Whiteboard. I also considered going for a more complex video production by recording a presentation, but I was actually hoping for a more interactive session with Q&A and comments. Unfortunately, it looks like only a few people ever appeared in the chatrooms, and most of the time they were people I am already in contact with outside of the streams.

What I explicitly don’t care for, in these streams, is becoming a “professional” streamer. This might have been different many years ago: after all, this very blog was for a long time my main claim to fame, and I did a lot of work behind the scenes to make sure it would give a positive impression to people, which involved quite a bit of investment not just in time but in money, too.

There’s a number of things that I know already I would be doing differently if I was trying to make FLOSS development streaming a bigger part of my image — starting with either setting up or hiring a multiplicator service that would stream the same content onto more than just Twitch. Some of those would definitely be easier to pull off nowadays with a full-time job (cash in hand helps), but they would be eating into my family life to a degree I’m no longer finding acceptable.

I will probably do more livestreams in the upcoming months. I think there’s a lot of space for me to grow when it comes to providing information in a live stream. But why would I want to? Well, the reason is similar to the reason why this blog still exists: I have a lot of things to say – not just by way of reminding myself how to do things I want to do, but also a trove of experience collected largely by making mistakes and slamming head-first into walls repeatedly – and into rakes, many many rakes – which I enjoy sharing with the wider world.

Finally (and I know I said there were two motivations), there’s a subtlety: when working on something while streaming, I’m focusing on the task at hand. Since people are figuratively looking over my shoulder, I don’t get to check on chats (or Twitter, Facebook, NewsBlur), I don’t get to watch a YouTube video in the background and get distracted by something, and I don’t get to just browse shopping websites. Which means that I can get some open source hacking completed, at least timeboxed to the stream.

Tangible Results

Looking back at what I proposed I’d be doing, and what I really ended up doing, I can’t say I’m particularly happy about the results. It took me significantly longer to do some things that I expected would take me no time whatsoever, and I didn’t end up doing any of the things I meant to be doing with my electronics project. But on the other hand, I did manage some results.

Besides the already noted £290 collected for Cats Protection (again, thank you all, and in particular Luke!), I fully completed the reverse engineering of the GlucoMen areo glucometer that I reviewed last week. I think about an hour of the stream was dedicated to me just poking around trying to figure out which checksum algorithm it used (answer: CRC-8/Maxim, as used in 1-Wire). Together with the other streams and some offline work, I would say that it took about six hours to completely reverse engineer that meter into a usable glucometerutils driver, which is not a terrible result after all.
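For the curious, CRC-8/Maxim (the Dallas/Maxim 1-Wire checksum) is tiny once you know that’s what you’re looking at; most of the stream time went into figuring out *which* algorithm it was, not implementing it. Here’s a minimal sketch in Python – the function name is mine, and this is not the actual glucometerutils driver code:

```python
def crc8_maxim(data: bytes) -> int:
    """CRC-8/Maxim (Dallas 1-Wire): reflected polynomial 0x8C, init 0x00, no final XOR."""
    crc = 0x00
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 0x01 else crc >> 1
    return crc


# Standard check value for this CRC variant: the ASCII string "123456789"
# should checksum to 0xA1.
assert crc8_maxim(b"123456789") == 0xA1
```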

What about unpaper? I faffed around a bit to get the last few bits of Meson working, and then I took on a fight with Travis CI which resulted in me just replacing the whole thing with GitHub Actions (and incidentally correcting the Meson docs). I think this is also a good result up to a point, but I need to spend more time before I make a new release that uses non-deprecated ffmpeg APIs, or hope that one of my former project-mates takes pity on me and helps.

Tests are there, but they are less than optimal. And I only scratched the surface of what could be integrated into Meson. I think that if I sat down in a chat with the folks who know the internals, I might be able to draw out some ideas that could help not just me but others… but it turns out that involves me spending time in chat rooms, and it’s not something that can be confined to a specific time slot each week. I guess that is one case where mailing lists are still a good approach, although they’re no longer that common after all. GitHub issues, pull requests and projects might be a better approach, but the signal-to-noise ratio is too low in many cases, particularly when half the comments are either pile-ons or “Hey, can you get to work on this?”. I don’t have a good answer for this.

The Home Assistant stream ended up being a total mess. Okay, in one half of it I managed to sync (and subsequently get merged) the pull requests to support bound CGG1 sensors in ESPHome. But when I tried to set up the custom component to be released, I realized two things: first, I have no idea how to make a Home Assistant custom component repository – there are a few guidelines if you plan to get your component into HACS (but I wasn’t planning to), and the rest of the docs suggest you may want to submit it for inclusion (which I cannot do, because it’s a web scraper!) – and second, that the REUSE tool is broken on Windows, despite my best efforts last year to spread its usage.

The funny thing is that it appears to be broken because it started depending on python-debian, which, quite reasonably, didn’t expect to have to support non-Unix systems, and thus imported the pwd module unconditionally. The problem is already fixed in their upstream repository, but there hasn’t been a release of the package in four months, so the problem is still there.
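The fix boils down to the usual pattern of guarding a Unix-only import. The snippet below is just an illustration of that pattern, with a made-up helper name, not the actual python-debian patch:

```python
from typing import Optional

try:
    import pwd  # Unix-only module; raises ImportError on Windows
except ImportError:
    pwd = None  # degrade gracefully on non-Unix platforms


def username_for_uid(uid: int) -> Optional[str]:
    """Resolve a numeric uid to a username where the pwd module is available."""
    if pwd is None:
        # On Windows there is no passwd database to consult.
        return None
    return pwd.getpwuid(uid).pw_name
```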

So I guess the only thing that worked well enough throughout this is that I can reverse engineer devices in public. And I’m not particularly good at explaining that, but I guess it’s something I can keep doing. Unfortunately it’s getting harder to find devices that are not either already well covered, or otherwise resistant to the type of passive reverse engineering I’m an expert in. If you happen to have some that you think might be a worthy puzzle, I’m all ears.

Production and Support

I have not paid too much attention to production, except for one thing: I got myself a decent microphone, because I heard my voice in one of the previous streams and I cringed. Having worked too many years in real-time audio and video streaming, I’m particular about things like that.

Prices of decent microphones, often referred to as “podcasting” microphones when you look around, skyrocketed during the first lockdown and don’t appear to have come down much yet. You can find what I call “AliExpress special” USB microphones on Amazon at affordable prices that look like fancy studio mics, but they pretty much only look the part and aren’t comparable in terms of specs: they might be just as tinny as your average webcam mic.

If you look at “good” known brands, you usually find them in two configurations: “ready to use” USB microphones, and XLR microphones – the latter being the choice of more “professional” environments, but not (usually) directly connected to a computer… though there’s definitely a wide market of USB capture cards, and they are not that much more expensive when adding it all together. The best thing about the “discrete” setup (an XLR microphone plus a USB capture card/soundcard) is that you can replace the parts separately, or even combine more of them at a lower cost.

In my case, I already owned a Subzero SZ-MIX06USB mixer with a USB connection. I bought it last year to be able to bring the sound from the ~~two~~ three computers in my office (Gamestation, dayjob workstation, and NUC) into the same set of speakers, and it comes with two XLR inputs. So, yes, it turned out that XLR was the better choice for me. The other nice thing about using a mixer here is that I can control some of the levels on the analog side: I have a personal dislike of too-low frequencies, so I have done a bit of tweaking of the capture to suit my own taste. I told you I’m weird when it comes to streaming.

Also, let me be clear: unless you’re doing it (semi-)professionally, I would say that investing more than £60 would be a terrible idea. I got the microphone not only to use for the livestreams, but also to take a few of my meetings (those that don’t go through the Portal), and I already had the mixer/capture card. And even then I was a bit annoyed by the general price situation.

It also would have helped immensely if I didn’t have an extremely squeaky chair. To be honest, now that I know it’s there, I find it unnerving. Unfortunately, just adding WD-40 from below didn’t help: most of the videos and suggestions I found on how to handle the squeaks of this model (it’s an Ikea Markus chair, a very common one) require unscrewing most of the body to get to the “gearbox” under the seat. I guess that’s going to be one of the tasks I need to handle soon, and it’s probably worth it given that this chair has already been through two moves!

So, hardware aside, how’s the situation with the software? Unfortunately, feng is no longer useful for this. As I was going through options last year, I ended up going for Streamlabs OBS as the “it mostly works out of the box” option. Honestly, I should probably replace it with OBS Studio, since I’m not using any of Streamlabs’ features, and I might as well stick to the original source.

As I said above, I’m not planning to build a professional streaming image; if I did, I probably would have also invested in licensing some background music or an “opening theme”. And I probably would have set up the stream backgrounds differently: right now I’m just rotating background pictures I shot myself.

Conclusions

It was a neat experiment — but I don’t think I’ll do this again, at least not in this form.

Among other things, I think that doing a one-hour stream is sub-optimal: it takes long enough to set up and remind people about the chat and donations that by the time I finished providing context, I was already a quarter of an hour in. I think two to three hours is a better length; I would probably go for three hours with breaks (which would have been easier during the Pawsome Players events, since I could have used the provided videos to take breaks).

Overall, I think that for this to work it needs a bigger, wider audience. If I were in the same professional space I was in ten years ago, but in today’s situation, I would probably have all kinds of Patreon subscriptions, with the blog being syndicated on Planet Gentoo, and me actually involved in a project… then I think it would have made perfect sense. But given it’s “2021 me” moving in the “2021 world”… I doubt there are enough people out there who care about what goes through my mind.
