Laingsoft

Fundamentally throwing good software at problems.


The power of SSH and Claude

Free network administration!

Recently I’ve been on a bit of a kick building out my home network. I finally took the plunge and switched most of my home network from Linux Mint to pure Debian. My home server is running Debian 13, my 3D printer server is running Debian, even the laptop I’m typing this on is running Debian. I chose Debian because I wanted a lightweight-ish distribution that was batteries-included. Neither my print server nor my home server needed a desktop environment, so I just installed them without one and enabled sshd so that I could administer them remotely.
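For reference, getting one of these headless boxes ready for remote administration is only a couple of commands. This is a minimal sketch; on a fresh Debian netinstall, ticking the “SSH server” task during the installer covers most of it already:

    # install and enable the OpenSSH server on a headless Debian box
    sudo apt update
    sudo apt install openssh-server
    sudo systemctl enable --now ssh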

Interestingly, I did the same with my laptop, except on that one I chose to install DWM. I’m still learning my way around it, but I’ve been using it quite a lot for the past few months and I can say that I’m really starting to like it. This laptop was largely useless before. I bought it to bring to Defcon 25 way back when, and I just threw Mint on it. It ran okay-ish, but it really was a dog.

[Image: fastfetch output from magpie]

As you can see, not a whole lot of power to work with. However, because the install is so minimal, the performance is actually totally usable for most things. It can handle Firefox and most websites. It’s kind of unreal to me that this laptop came with Windows 10 installed; I don’t think I ever even tried to run it with Windows. When I bought the computer I got a 25% discount because it was an open box, and when I asked why it was returned, they said the notes said it was too slow to use. The screen is pretty shitty: poor viewing angles, not particularly bright. The touchpad is… okay? The keyboard is actually surprisingly alright. The speakers are trash, and the webcam looks like it’s from a Nokia from 2005, not that it really matters for what I use it for.

Regardless, Debian is unbelievably stable, and without a desktop environment this thing just sips power. The battery lasts multiple days on a full charge, hence the 24-day uptime in that fastfetch. Although it’s pretty starved for processing power, what it is really, really good at is being an SSH client.

I’ve been building out my network over the last few months, setting up local DNS and using ssh-copy-id from machine to machine so that I can hop between them super easily. My network of machines looks like this:

                          ┌──────────────────────┐
                          │   192.168.0.0/24     │
                          │      Network         │
                          └─────────┬────────────┘
                                    │
            ┌───────────┬───────────┼───────────┬───────────┐
            │           │           │           │           │
      ┌─────┴────┐  ┌───┴─────┐  ┌──┴──────┐ ┌──┴──────┐ ┌──┴────────┐
      │   wang   │  │ bigtony │  │cnidaria │ │ magpie  │ │  amoeba   │
      │  .209    │  │  .163   │  │  .201   │ │  .104   │ │   .161    │
      ├──────────┤  ├─────────┤  ├─────────┤ ├─────────┤ ├───────────┤
      │GPU Server│  │ Garage  │  │3D Print │ │  Mail   │ │ Primary   │
      │          │  │ Desktop │  │+ Pi-hole│ │ Server  │ │Workstation│
      │8x Tesla  │  │         │  │         │ │         │ │           │
      │   M10    │  │Xeon     │  │ Klipper │ │ exim4   │ │Xeon E3    │
      │ 46GB RAM │  │W3550    │  │  DNS    │ │Celeron  │ │Intel Arc  │
      │          │  │16GB RAM │  │         │ │         │ │ 16GB RAM  │
      │ Ollama   │  │         │  │         │ │         │ │           │
      │ Jellyfin │  │         │  │         │ │         │ │           │
      └──────────┘  └─────────┘  └─────────┘ └─────────┘ └───────────┘

wang (named after Wang computers) is an IBM x3650 M5 that I’ve put two Tesla M10s in. Surprisingly, these ancient cards perform really well with Ollama. bigtony is my garage computer; all he really does is host some PDFs, mainly manuals, and play Spotify. cnidaria is my print server, running Klipper + Mainsail to handle my Ender 5. magpie is this shitty old laptop, and amoeba is my workstation with decent but still ancient specs.
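The hopping setup itself is nothing fancy: one key per client, an ssh-copy-id to each box I want to reach, and a ~/.ssh/config so the short names in the diagram work without typing IPs. A rough sketch of what that looks like from magpie (the username and entries here are illustrative, not my exact config):

    # generate a key on the client once, then push it to each host
    ssh-keygen -t ed25519
    ssh-copy-id user@amoeba
    ssh-copy-id user@cnidaria

    # ~/.ssh/config: short aliases so "ssh wang" just works
    Host wang
        HostName 192.168.0.209
        User user
    Host cnidaria
        HostName 192.168.0.201
        User user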

What I’ve realized is that I can use each of these computers for a different purpose rather than trying to handle everything on one device. For example, cnidaria doesn’t really have a hard job. It’s a small NUC-style device that I picked up on the cheap, but it uses almost no power and happily sits running forever.

[Image: fastfetch output from cnidaria]

Because magpie is such a strong SSH client, I can access my dev machine (amoeba) and use claude, emacs, nano, and whatever else directly. Using the -X flag on SSH, I can even pipe UI apps through. I’ve tentatively tried CLion and IDEA this way, and they do work, albeit a little slowly. You could also connect VS Code via SSH to your main dev machine. The options truly are limitless, and SSH collapses your network into one tidy ball that you can make work together.
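For the GUI side, the commands are short. A hedged sketch; the launcher names and the VS Code invocation are examples, not gospel, so check what’s actually installed on your remote box:

    # forward X11 from amoeba back to this laptop's display
    ssh -X amoeba
    clion &    # or idea, or any other GUI app installed on the remote machine

    # or skip X entirely: VS Code's Remote-SSH extension keeps the UI local
    # and opens the workspace on the remote machine
    code --remote ssh-remote+amoeba ~/projects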

Now, what becomes interesting is when you bring some LLMs into the action. Managing all these computers can be a real pain, right? Well, with Claude, that’s not the case: when you want to set something up, or just manage the network, you can point Claude at it and tell it to do it. To demonstrate my point, here is Claude installing updates on cnidaria:

[Image: Claude installing updates on cnidaria]
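There’s no magic in what it ends up running; under the hood it’s presumably the same boring apt you’d type yourself, something like:

    # the by-hand version, straight over SSH (assumes passwordless sudo;
    # otherwise add -t to get a password prompt)
    ssh cnidaria 'sudo apt update && sudo apt full-upgrade -y'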

But that’s just the start of it. Remember those ancient NVIDIA M10s I talked about? Well, they only support compute capability 5.0. PyTorch dropped support for them in its packaged builds way back in 2020 or so. You can still go through and follow all the steps to build it from source… or you can just ask Claude to do it and let it figure it out. I did, and it actually worked.
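I won’t pretend I have the exact recipe memorized (Claude drove), but the shape of a from-source PyTorch build targeting those cards is roughly this. Treat the arch value and the install step as assumptions to verify, not a definitive procedure:

    # rough sketch of building PyTorch from source for old Maxwell-era cards
    git clone --recursive https://github.com/pytorch/pytorch
    cd pytorch
    export USE_CUDA=1
    export TORCH_CUDA_ARCH_LIST="5.0"    # compute capability of the M10 (check yours)
    pip install -r requirements.txt
    python setup.py develop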

Another example: those NVIDIA cards don’t interface nicely with the IMM2, and when they get hot, the server doesn’t actually spin up the fans. I’m sure there is a way around that, but I did what any self-respecting viber does and asked Claude to solve it. I gave it a starting place, an IPMI fan control script, and let it loose. It wrote a systemd service that monitors nvidia-smi, grabs the highest GPU core temperature, gets the CPU temperature from ipmitool, and then adjusts the fans based on the temperature, and it works great!

Here is the code: x3650 fan control
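The linked script is the real thing; for a sense of the shape of it, the core loop is basically the sketch below. It’s stripped down, and the actual IPMI raw bytes for the IMM2 are chassis-specific, so they’re left as a placeholder here:

    #!/bin/bash
    # sketch of the fan-control loop: poll GPU and CPU temps, take the hottest,
    # pick a duty cycle, and push it to the fans via IPMI
    while true; do
        gpu_temp=$(nvidia-smi --query-gpu=temperature.gpu \
                   --format=csv,noheader,nounits | sort -n | tail -1)
        cpu_temp=$(ipmitool sdr type temperature | awk -F'|' \
                   '/CPU/ {gsub(/[^0-9]/, "", $5); print $5}' | sort -n | tail -1)
        temp=$(( gpu_temp > cpu_temp ? gpu_temp : cpu_temp ))

        if   [ "$temp" -ge 80 ]; then duty=100
        elif [ "$temp" -ge 70 ]; then duty=70
        elif [ "$temp" -ge 60 ]; then duty=45
        else                          duty=30
        fi

        # placeholder: the exact "ipmitool raw ..." command to set the duty
        # cycle is IMM2-specific; the real bytes live in the linked repo
        sleep 15
    done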

You might be saying, “Oh, none of those things are difficult,” or “Why not just use x, y, z shell command?” Well, you’re right; however, these problems are the perfect class of problem for an LLM. They are closed-ended, they are well-defined, they are easily testable, and they are common, so they surely exist in the training data. Now, am I saying that you shouldn’t learn to administer your own machines? Absolutely not. However, if you know WHAT you want to do, and have a vague idea of how to accomplish it, you can give those bits to Claude and it is smart enough to figure the rest out. In the before-times, you would have had to spend hours and hours googling around to find a half-solution and then scrape it together yourself. Now, you can just push a button, tell it what you want to do, and basically have someone hold your hand through the whole process.

On a recent episode of FOSS Weekly, Bill Shotts said something to the effect of: I don’t really like using AI, because you don’t learn anything. I disagree; I think this can be an extremely powerful tool for learning how to accomplish a task. Yes, you need to know how to drive the bus a little bit, but watching the commands Claude uses to accomplish a task has taught me quite a few things I wouldn’t have learned unless I saw someone do it. Things like copying headers out of md files with head and a pipe: seems obvious in retrospect, but connecting that dot in my brain wouldn’t have happened unless someone showed me.
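If I’m reconstructing it right, the trick was something along these lines (my approximation, not the exact command Claude ran):

    # pull the title header line out of each markdown post and pipe the lot
    # into an index file
    for f in *.md; do head -n 1 "$f"; done | tee index.md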

You know, since ChatGPT dropped, I’ve been a real downer on LLMs. I really didn’t like how they were being sold as “human replacements”, and I still don’t. I don’t think they’ll ever reach the point of fully replacing workers outright, but what has really surprised me is how the tools have matured over the last few years. For a developer, it feels like such a large jump that I don’t know how I could go back to raw-dogging work. It’s not that I CAN’T, it’s that I wouldn’t want to. Just like how you CAN write enterprise code in nano or vim (I’m actually writing this post in nano right now -_-), it’s just that I wouldn’t want to. The modern tools are much easier and make me more productive. For me, at least, it’s removed a lot of the frustrating and tedious parts of software engineering and allowed me to focus on the parts that I do like, which also happen to be the parts that LLMs struggle with, because they require intuition that language models can’t encapsulate.

Anyhow, that’s all I’ve got for today. I don’t want to keep rambling. If anyone out there is actually reading this, shoot me a message over on Bluesky!