Kubernetes at home, next generation, part 2/2: Software
As noted before, I have used kind (Kubernetes in Docker) at home for a while, essentially as a Docker Compose replacement (and to tinker with some Kubernetes-only tools). For a while I have wanted something I could upgrade, and HA in general, and kind is not that. So I bought some hardware (see the earlier post), and then set up some software (this post). What did I want? An HA tinfoil-hat cluster, in other words: ...
Some notes about Tailscale
This interlude is some thoughts about Tailscale, of which I am a mostly happy user. I will perhaps write more about Kubernetes next. My background: pre-WireGuard solutions. I have been involved on and off with VPNs for 25 years and change; I even co-authored some IPsec standards on the topic, and worked on multiple implementations of it as well. IPsec has to some extent been superseded by the various TLS VPN solutions, due to ease of deployment (presumably) and an interest in securing mostly just TCP traffic, for which typical TLS VPNs are just fine. ...
Kubernetes at home, next generation, part 1/2: Hardware
I have been running Kubernetes at home from October 2024 onward. That exercise was single-node, though, using a (relatively small) part of the Frankenrouter's resources. This post is about the next Kubernetes iteration, or rather its hardware choice. Why did I not want to stick with the kind setup forever? The Frankenrouter hardware (Intel N305) officially supports only 32GB of RAM. In addition to the OpenWrt LXC container and some native Debian processes, it is packing about 49 containers at the time of writing (give or take a few; this Grafana figure is only an approximation based on unique images on the podman side and pods on the Kubernetes side): ...
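The unique-image approximation mentioned above can be sketched as a one-liner. The image list here is a made-up sample; on the real host it would come from the container runtime, e.g. `podman ps --format '{{.Image}}'`:

```shell
# Hypothetical sample of running-container images; in practice this list
# would come from e.g.: podman ps --format '{{.Image}}'
images='docker.io/grafana/grafana:latest
docker.io/grafana/grafana:latest
docker.io/prom/prometheus:latest'

# Approximate "how many containers" by counting unique images.
printf '%s\n' "$images" | sort -u | wc -l    # prints 2
```

Counting unique images (rather than raw container processes) undercounts replicas, which is why the Grafana number is only an approximation.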
IPv6 or lack of it (by default), 2025
Our startup (Time Atlas Labs) has had more (physical) addresses than it really should have: including the pre-company era, we are now in our 3rd office within a year. The networking has, in general, been universally pretty bad, until today. As this is a rant and I don't particularly want to blame any specific ISP, the ISPs are left anonymous. Office 1: landlord-provided internet. It was slow and unreliable, and 'reset the router' was the approach to dealing with it. ...
Beer consumption analysis using LLMs
I have been working on a life-tracking app since last year. To analyze the data I have logged with it, I queried it for 'beer in 2025' and analyzed the results. I will not publish the dataset itself here, but it contains three types of relevant data (in parentheses, how they are encoded in the Markdown output that I pass to the LLMs): place visits involving beer (e.g. * 2 hours spent in <insert pub here>), journal entries mentioning beer (e.g. I had beer and pizza for lunch), and explicitly counted beer logging (e.g. - 3 count beer).

Baseline - shell:

```shell
egrep 'count beer$' 20250528-beer.md | cut -d ' ' -f 2 | awk '{sum += $1} END {print sum}'
17
```

So the expectation is that the number should be at least 17 beers, but ideally more, as there are some journal entries that mention beer. ...
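To see why the count should end up above the baseline, here is a minimal sketch over a hypothetical sample file in the same Markdown encoding (the real 20250528-beer.md dataset is not published): the baseline pipeline catches only the explicitly counted entries, while journal mentions need a separate pass.

```shell
# Hypothetical sample in the encoding described above; not the real data.
cat > /tmp/beer-sample.md <<'EOF'
* 2 hours spent in Some Pub
I had beer and pizza for lunch
- 3 count beer
- 2 count beer
EOF

# Explicitly counted beers (the baseline pipeline, unchanged).
grep 'count beer$' /tmp/beer-sample.md | cut -d ' ' -f 2 |
  awk '{sum += $1} END {print sum}'    # prints 5

# Lines that mention beer but carry no explicit count; an LLM would have
# to estimate how many beers these represent.
grep 'beer' /tmp/beer-sample.md | grep -vc 'count beer$'    # prints 1
```

The second grep is why "at least 17" is a floor rather than the answer: uncounted journal mentions add an unknown amount on top.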
April vibe coding summary
This will be the last post on vibe coding for now, I promise (at least about Google Gemini 2.5 Pro Exp). I did some vibe coding every weekend in April, just to get a change of pace from work (and for science), starting with a 'what if I could not code' experiment (not a great success) and finishing with two probably useful tools that I wanted. Last week Google made Gemini 2.5 Flash available commercially, and reduced the free input token rate limit per day quite a lot. The new limits are (as of now) a million input tokens and 25 requests per day (no idea about output tokens). The single-request maximum size is probably still 250k tokens (I hit it a couple of times earlier; I am not sure if it was reduced, as the most recent project was smaller and I did not get beyond 100k-token requests). ...
Vibe coding try 2: feat. Gemini 2.5 pro exp
I was not particularly satisfied with my experience of fully hands-off vibe coding, but I also wanted to see what I could do if I spent a bit more time thinking and instructing the LLM before hitting the 'send' button. So, another Sunday spent 'usefully'. Gemini 2.5 Pro Exp is free(!) (for now). The shocking part is that Gemini 2.5 Pro is currently available in the free tier of Google AI Studio (and to chat with at Gemini). The quota is quite generous: you can do essentially up to 25M tokens per day (a limit of 25 requests per day at 1M context size; I did not get quite that far, as my requests were <= 100k context size). ...
Aider 0.8.1 and me
I have been using Aider on and off for a couple of months now. I have found its defaults to be pretty bad (at least for me), so I decided to write up how I use it and the configuration I use with it. Note: 'model' in this text refers to large language models (LLMs), more specifically those that are reasonably good at reasoning/coding tasks. Currently I mainly use Claude 3.7 Sonnet, but the model I use seems to change every month (o3-mini with high reasoning was the one I used last month), and the recent DeepCoder release makes it possible that I will soon try using a local model as my main model again. ...