Wednesday, January 31, 2018

Kubernetes at Home (Overview)

I like tinkering. It shows in my home setup. For a bit of background, I've had some form of media and file server on my home network since the early 2000s. In that time I've accumulated quite a number of services that I rely on, not including my own development stack. The advent of containers made tinkering and management easy: I started out with a bash script to start everything, then moved it all into a `docker-compose.yml` file.
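For the unfamiliar, that file looks something like this miniature sketch (images and ports here are illustrative, not my actual entries):

```yaml
# Illustrative fragment only; the real file had around fifteen entries like these.
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    restart: unless-stopped
  pihole:
    image: pihole/pihole
    ports:
      - "53:53/udp"
      - "8080:80"
    restart: unless-stopped
```

Each new service means another stanza, and the file only grows.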

Over holiday break last year I realized my `docker-compose.yml` file had fifteen entries and it took effort to scroll through and add or update entries. I can do better.

Hello beautiful.

Here's a quick glance at the services I'm running (minus custom apps):

  • Docker Registry
  • Pi-hole
  • Gogs
  • Drone

The deciding factor was Pi-hole. If I'm running my own DNS, I don't want patching a single box to take down my network's DNS. On top of that, I'd be able to have custom DNS entries for my home network.

And because I have a sense of humor, it only made sense for the Gogs URL to be a wonderful homage to my friends who play Dark Souls: git.gud

Take your medicine.

Where things stand now, I have a keepalived-vip cloud provider consuming a small (/29) portion of my network. Services route through nginx-ingress, and pods mount NFS shares through persistent volumes.
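The NFS side is plain Kubernetes persistent volumes. A sketch of one (the server address, export path, and size are placeholders, not my actual config):

```yaml
# Hypothetical NFS-backed PersistentVolume; pods claim it via a PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10   # placeholder NFS server address
    path: /export/media    # placeholder export path
```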

On top of all of this, the entire stack, including deployment scripts, lives in git. That means adding a DNS entry and a new service is as simple as adding them to my infrastructure repo and pushing to master, which kicks off a drone job to bring Kubernetes to the desired state. Once I have more time I'll be posting a scrubbed version of the git repo on GitHub.
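The drone side of that amounts to a `kubectl apply` step. Something like this sketch (the image and manifest path are assumptions, not my actual `.drone.yml`):

```yaml
# Sketch of a drone pipeline that converges the cluster on push to master.
pipeline:
  deploy:
    image: lachlanevenson/k8s-kubectl   # assumed kubectl image
    commands:
      - kubectl apply -f manifests/     # placeholder manifest path
    when:
      branch: master
```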

And here's a glance at the current state of things if you run `kubectl get svc` on the `default` namespace:

External IPs are managed by Keepalived

Overkill? No. Look at my future goals:

  • Namespace things appropriately (media, development)
  • Mixed architecture cluster (ARM + x86)
  • Federated households
  • Containerize Kodi (not just headless)
  • DaemonSet on Pi nodes labeled "htpc"
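Those last two goals would roughly take the shape of a DaemonSet pinned to labeled nodes. A sketch, with the image and label key as placeholders:

```yaml
# Hypothetical DaemonSet that lands only on nodes labeled as HTPCs.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kodi
spec:
  selector:
    matchLabels:
      app: kodi
  template:
    metadata:
      labels:
        app: kodi
    spec:
      nodeSelector:
        role: htpc             # assumed label key/value on the Pi nodes
      containers:
        - name: kodi
          image: example/kodi  # placeholder image
```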

While I wouldn't call this the start of a series, I do want to cover why I run each of these services in the near future.

Feel free to reach out if you have questions!

Saturday, January 13, 2018

Ping Android

For the last couple months I've had some network instability at home. No, that's not a euphemism. Occasionally (and most annoyingly while playing Overwatch) I'll notice dropped packets and a ping that skyrockets from a solid 20ms to 300ms at a minimum. Name the theories and I've probably speculated the same, everything from an old router to water in the line to my house. It became a ritual to pop open a terminal window and run `ping` just to telegraph when I'd stop playing. The idea struck me to whip up an app to run on my phone to monitor my ping rather than keep a terminal window open.

A weekend project was born.

I recognize the humor in creating a mobile app to monitor the problem rather than addressing the problem, but option A takes you on a journey of learning and fun. Option B winds up costing money and once I dive down that rabbit hole I'll be spending a week developing grafana dashboards to monitor everything in the house.

I digress.

After installing Android Studio and setting up my environment, I set to the simple task of pinging Google and parsing the results. However, some quick research surfaced my first problem: Android / Java has no native concept of an ICMP packet. My mind was blown. Well, that's fine, I can still shell out and parse `ping`'s results that way. Unfortunately, my testing showed that `ping` defaults to the cellular interface, and you *can't* force it to use wlan0 without being root.
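For reference, the shell-out approach itself is simple enough. Here's a desktop-Java sketch of spawning `ping` and pulling the latency out of each reply line (the regex and flags are my assumptions about typical ping output, and on Android the binary lives at /system/bin/ping):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PingParser {
    // Matches the "time=23.4 ms" field in a typical ping reply line.
    private static final Pattern TIME = Pattern.compile("time=([0-9.]+)\\s*ms");

    // Extract the latency in ms from one line of ping output, or -1 if absent.
    public static double parseLatency(String line) {
        Matcher m = TIME.matcher(line);
        return m.find() ? Double.parseDouble(m.group(1)) : -1;
    }

    public static void main(String[] args) throws Exception {
        // On Android this would be /system/bin/ping; plain "ping" on a desktop.
        Process p = new ProcessBuilder("ping", "-c", "1", "127.0.0.1")
                .redirectErrorStream(true).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                double ms = parseLatency(line);
                if (ms >= 0) System.out.println(ms + " ms");
            }
        }
    }
}
```

Parsing is the easy half; the interface binding is what kills it without root.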


So I've now gone scouring for a decent Java library to handle pinging. It's 1AM on a Saturday, I have a cat on my wrists, send help.
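For the record, the closest thing the JDK itself offers is `InetAddress.isReachable`, which only sends real ICMP when it has raw-socket privileges and otherwise falls back to a TCP probe of port 7. A rough sketch:

```java
import java.net.InetAddress;

public class Reachable {
    // Returns a rough round-trip time in ms, or -1 if the host didn't answer.
    // Without raw-socket privileges this is a TCP probe, not a real ICMP ping,
    // so treat the timing as an estimate.
    public static long timedReach(String host, int timeoutMs) throws Exception {
        InetAddress addr = InetAddress.getByName(host);
        long start = System.nanoTime();
        boolean ok = addr.isReachable(timeoutMs);
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        return ok ? elapsed : -1;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(timedReach("127.0.0.1", 1000) + " ms to localhost");
    }
}
```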

Thursday, January 4, 2018

A Critical Look at MongoDB

I'm torn on Mongo.

As a developer, using Mongo makes life easier: it's literally spin up and go with a container. But after that point, after the development process, things get tricky. As a sysadmin, it causes nothing but pain and suffering. This isn't new; Mongo has had a history of stability issues. But I'm getting ahead of myself.

Part of what I do is help run and maintain on-demand microservices for researchers. The caveat for them is that what we're running is a research system: not only can it go down at any time, we do not back up data. Now, that's not to say we don't do our best, and we absolutely try to go the extra mile to help someone out. In this case, however, we were left with an unhappy customer.

A node that these on-demand services reside on had a hardware failure (a faulty CPU), and everything on said node died hard. Recovery happened as VMs were evacuated; however, the damage to the Mongo instance had been done. Three of WiredTiger's metadata files (WiredTiger.wt, WiredTiger.turtle, SizeStorer.wt) were corrupted. Mongo would try to start, WiredTiger would detect an invalid file and die. The end.

Research showed that you can actually cut a ticket with Mongo and they'll attempt to restore your files. In our case, however, the files couldn't be recovered (supposedly due to how our instance crashed). I was left to inform said researcher that:

A) They should have run replica sets
B) They should have taken backups
C) There may be a way to retrieve the data, but it will take forever

I've been told none of this falls on me (and I agree), but I'm still left with a bad taste in my mouth. Look at other data stores (SQL and otherwise) and you don't see something that corrupts an entire instance on a hard reboot. You don't see a community that thinks it's okay for the correct solution to be "run in a replica set, take frequent backups." They can and should do better. Now that my sysadmin rant is done, I'm gonna go work on this side project I have that's using Mongo.

I'm a hypocrite.