This will be the last post on vibe coding for now, I promise.. (at least about Google Gemini 2.5 Pro Exp). I did some vibe coding every weekend in April, just to get a change of pace from work (and for science), starting with a ‘what if I could not code’ experiment (not a great success) and finishing with two probably useful tools that I wanted. Last week Google made Gemini 2.5 Pro Exp commercially available, and reduced the free input token rate limit per day quite a lot. The new limits are (as of now) a million input tokens and 25 requests per day (no idea about output tokens). The single-request maximum size is probably still(?) 250k tokens (I hit it a couple of times earlier; not sure if it was reduced, as the most recent project was smaller and I didn’t get beyond 100k-token requests). ...
Vibe coding try 2: feat. Gemini 2.5 pro exp
I was not particularly satisfied with my experience of doing fully hands-off vibe coding, but I also wanted to see what I could do if I spent a bit more time thinking and instructing the LLM before hitting the ‘send’ button. So, another Sunday spent ‘usefully’. Gemini 2.5 Pro Exp is free(!) (for now): The shocking part is that Gemini 2.5 Pro is currently available in the free tier of Google AI Studio (and to chat with at Gemini). The quota is quite generous - you can do essentially up to 25M tokens per day (25-request limit per day, 1M context size - I did not get quite that far, as my requests were <= 100k context size). ...
Aider 0.8.1 and me
I have been using Aider on and off for a couple of months now. I have found its defaults to be pretty bad (at least for me), so I decided to write up how I use it and the configuration I use with it. Note: ‘model’ in this text refers to large language models (LLMs), and more specifically those that are reasonably good at reasoning/coding tasks. Currently I am using mainly Claude 3.7 Sonnet, but the model I use seems to change every month (o3-mini high-reason was the one I used last month), and the recent Deepcoder release makes it possible that I will soon try a local model as my main model again. ...
Vibe coding try 1 .. spoiler: not great success
Vibe coding has been frequently touted on the internet, and, not wanting to feel left out, I spent half a day working on ‘something’ I picked from the depths of my todo list: a Python utility to convert from format X to format Y (the particular formats are not relevant, so omitted here - nested data structures with tags, and keyword-values). The vision: I decided I wanted to pretend I don’t know how to code. So, for the most part, I chose not to write any code myself, but instead guided (a set of) LLMs to produce what I wanted, mostly just specifying which files I wanted touched and what to do. ...
Why structured logging is the thing
When I wrote the first iteration of the Lixie tool about a year ago (early 2024), my idea was to identify which logs were boring (most of them), interesting (very few of them) and unknown (not yet classified). At the time I chose not to use ‘AI’ (LLMs) for it, and I am still not that convinced they are the best way to approach that particular problem. Ultimately it boils down to this: human judgment of what is useful is much more realistic (at least in my context) than what the LLMs ‘know’ (absent fine-tuning and/or extensive example sets, which I by definition do not have for my personal logs). After choosing not to use LLMs for it, it was just a matching exercise: structured log messages against an ordered set of rules. ...
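For illustration, here is a minimal sketch of that kind of first-match-wins rule matching over structured log messages. The `Rule` shape and the field names are hypothetical, not Lixie's actual implementation:

```python
# Illustrative sketch: classify structured log messages against an
# ordered set of rules; first matching rule wins, otherwise "unknown".
# (Rule shape and field names are made up, not Lixie's actual code.)
from dataclasses import dataclass

@dataclass
class Rule:
    match: dict[str, str]  # required field -> value pairs
    verdict: str           # "boring" or "interesting"

RULES = [
    Rule({"app": "cron", "level": "info"}, "boring"),
    Rule({"level": "error"}, "interesting"),
]

def classify(message: dict[str, str]) -> str:
    for rule in RULES:
        if all(message.get(k) == v for k, v in rule.match.items()):
            return rule.verdict
    return "unknown"  # not yet classified

print(classify({"app": "cron", "level": "info"}))  # boring
print(classify({"app": "web", "level": "error"}))  # interesting
print(classify({"app": "web", "level": "debug"}))  # unknown
```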
How I write notes.. and why?
Over time, my ways of writing notes have evolved. I think writing things down helps me both retain an extended working memory of things I have done over time, and process them (sometimes much, much later). I write this blog mainly just to organize my own thoughts too, as opposed to actually writing for an audience (by design, I keep no statistics on readers, and I do not particularly want to interact with the hypothetical reader or two that might stumble here, sorry - I believe most of the visitors are AI scraper bots anyway). ...
From Hue (back) to Home Assistant
Background: I think I wrote about this in some earlier blog post too, but I have used various home automation solutions for a while now. I started out with very, very early Home Assistant builds - not quite sure when, but I contributed a little to it in 2014 at least (based on git log). Later in 2014 I started developing my own solution with a somewhat different, decentralized model (GitHub - fingon/kodin-henki: ‘Spirit of home’ - my home automation project written in Python 2/3), which I used for about 5 years and then switched to the much less featureful but also less maintenance-requiring Philips Hue system. ...
3D printing once more
The last 3D print I did was in August 2023 (spice rack boxes). This time around, I needed something to hang Philips Hue motion sensors from, ideally without making holes in the walls. The exercise took most of the weekend - or at least, the 3D printer was active for a lot of it; I did not spend that much time designing the thing, obviously. My 3D printing process: I used to use the Fusion 360 design tool, but a while ago I realized that describing geometry is more of a thing for me than drawing it out using a mouse. So I switched to OpenSCAD. Its language is .. an acquired taste, though, so I am using solidpython2 to generate the .scad files out of a Python script which describes the geometry I want. ...
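To show what that workflow looks like, here is a minimal sketch using solidpython2 - the shapes and dimensions are made up for illustration, not the actual sensor mount:

```python
# Describe geometry in Python, then emit an OpenSCAD (.scad) file.
# Shapes and dimensions are illustrative only, not the actual Hue sensor mount.
from solid2 import cube, cylinder

plate = cube([40, 20, 3])                         # flat mounting plate
peg = cylinder(r=4, h=10).translate([20, 10, 3])  # peg to hang the sensor on
part = plate + peg                                # union of the two shapes

part.save_as_scad("sensor-mount.scad")            # open/render in OpenSCAD
```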
NVidia L40S - reasonably priced LLM runner in the cloud?
As we are currently doing things in AWS, I wanted to evaluate AWS EC2 g6e.xlarge (32 GB RAM, 4 EPYC cores, with a 48 GB NVidia L40S GPU), as it seems to be the only AWS offering that is even moderately competitive at around 1.8$/hour. The other instance types wind up either with lots of (unneeded) compute compared to GPU, or with a ‘large’ number of GPUs, and in general the pricing seems quite depressing compared to their smaller competitors (e.g. https://datacrunch.io/ provides 2 L40S at 1.8$/hour, and 1 A100 is similarly priced). ...
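To make the price gap concrete, here is a trivial back-of-the-envelope calculation using only the figures quoted above:

```python
# Per-GPU hourly cost, using the prices quoted above.
offerings = {
    "AWS g6e.xlarge (1x L40S)": (1.8, 1),  # ($/hour, GPU count)
    "DataCrunch (2x L40S)": (1.8, 2),
}

for name, (usd_per_hour, gpus) in offerings.items():
    print(f"{name}: ${usd_per_hour / gpus:.2f}/GPU-hour")
# AWS comes out at roughly twice the per-GPU price of the smaller competitor.
```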
Finally working modern mesh wireless network at home
TL;DR: Unifi mesh is bad, Orbi is pricey, TP-Link is surprisingly good. Recap (2024 home wifi history): I had a Netgear Orbi (75x series) for 4 years (2020-2024). Last summer, I experimented with Unifi (see earlier posts); to put it bluntly, it sucked for mesh use, and I went back to the Orbis. The Orbi still did not support wifi 6E, which modern Macbook Pros need for more than 1200 Mbps wifi PHY rate (= more than 600 Mbps data rate). So, I was on the hunt for more hardware.. New challenger is found: In early Black Friday deals in mid-November, I spotted a TP-Link Deco BE65 set at a quite reasonable discount. On paper, it seemed quite promising. Why is that? ...