Wednesday, January 31, 2018

Kubernetes at Home (Overview)

I like tinkering. It shows in my home setup. For a bit of background, I've had some form of media and file server on my home network since the early 2000s. In that time I've accumulated quite a number of services that I rely on, not counting my own development stack. The advent of containers made tinkering and management easy. I started out with a bash script to start everything. Then I moved everything into a `docker-compose.yml` file.

Over holiday break last year I realized my `docker-compose.yml` file had fifteen entries, and it took real effort to scroll through it just to add or update anything. I could do better.

Hello beautiful.


Here's a quick glance at the services I'm running (minus custom apps):

  • Cacher
  • Grafana
  • Prometheus
  • Docker Registry
  • InfluxDB
  • Sabnzbd
  • Drone
  • Kibana
  • Transmission
  • Elasticsearch
  • MySQL
  • Emby
  • OpenHAB
  • Gogs
  • PiHole

The deciding factor was PiHole. If I'm running my own DNS, I don't want patching a single box to take down DNS for my whole network. On top of that, I'd be able to have custom DNS entries for my home network.

And because I have a sense of humor, it only made sense to have the Gogs URL be a wonderful homage to my friends who play Dark Souls: git.gud

Take your medicine.

Where things stand now, I have a keepalived-vip cloud provider consuming a small (/29) portion of my network. Services route through nginx-ingress, and pods mount NFS shares through persistent volumes.
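For flavor, here's roughly what one of the NFS-backed volumes looks like. This is a minimal sketch; the server address, export path, and size below are placeholders, not my actual config:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10  # placeholder NFS server on the home network
    path: /export/media   # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```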

On top of all of this, the entire stack, including deployment scripts, lives in git. That means to add a DNS entry and a new service, I add them to my infrastructure repo and push to master, which kicks off a Drone job that updates Kubernetes to the desired state. Once I have more time I'll post a scrubbed version of the repo on GitHub.
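To sketch the idea (the image and paths here are assumptions, not my actual pipeline), the Drone config boils down to something like:

```yaml
# Hypothetical .drone.yml: on a push to master, apply the repo's manifests
pipeline:
  deploy:
    image: lachlanevenson/k8s-kubectl  # any image with kubectl in it works
    commands:
      - kubectl apply -f manifests/
    when:
      branch: master
```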

And here's a glance at the current state of things if you run `kubectl get svc` on the `default` namespace:

External IPs are managed by Keepalived.
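For the curious, keepalived-vip is driven by a ConfigMap that maps each VIP to a service; something along these lines, where the IP and service name are made up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
data:
  192.168.1.250: default/nginx-ingress  # VIP -> namespace/service (made up)
```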

Overkill? No. Look at my future goals:

  • Namespace things appropriately (media, development)
  • Mixed architecture cluster (ARM + x86)
  • Federated households
  • Containerize Kodi (not just headless)
  • DaemonSet on Pi nodes labeled "htpc" (rough sketch below)
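
That last bullet would look something like this. A minimal sketch, assuming a node label of `role: htpc` and a placeholder image (neither exists yet):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kodi
spec:
  selector:
    matchLabels:
      app: kodi
  template:
    metadata:
      labels:
        app: kodi
    spec:
      nodeSelector:
        role: htpc              # placeholder label for the Pi HTPC nodes
      containers:
        - name: kodi
          image: example/kodi   # placeholder image; Kodi isn't containerized yet
```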

While I wouldn't quite call it a series of posts, I want to cover why I run each of these services in the near future.

Feel free to reach out if you have questions!

Saturday, January 13, 2018

Ping Android

For the last couple of months I've had some network instability at home. No, that's not a euphemism. Occasionally (and most annoyingly while playing Overwatch) I'll notice dropped packets and a ping that skyrockets from a solid 20ms to 300ms at a minimum. Name a theory and I've probably entertained it, everything from an old router to water in the line to my house. It became a ritual to pop open a terminal window and `ping google.com` just to telegraph when I'd have to stop playing. Then the idea struck me: whip up an app to run on my phone to monitor my ping rather than keep a terminal window open.

A weekend project was born.

I recognize the humor in creating a mobile app to monitor the problem rather than addressing the problem, but option A takes you on a journey of learning and fun. Option B winds up costing money, and once I dive down that rabbit hole I'll spend a week building Grafana dashboards to monitor everything in the house.

I digress.

After installing Android Studio and setting up my environment, I set to the simple task of pinging Google and parsing the results. However, some quick research revealed my first problem: Android/Java doesn't natively understand the concept of an ICMP packet. My mind was blown. Well, that's fine; I can still shell out and parse `ping`'s output that way. Unfortunately, my testing showed that the default interface `ping` uses is your cellular data, and you *can't* force it to use wlan0 without being root.

Drat.

So I've now gone scouring for a decent Java library to handle pinging. It's 1AM on a Saturday, I have a cat on my wrists, send help.

Thursday, January 4, 2018

A Critical Look at MongoDB

I'm torn on Mongo.

As a developer, using Mongo makes life easier. It's literally spin-up-and-go with a container. But after that point, after the development process, things get tricky. As a sysadmin, it causes nothing but pain and suffering. This isn't new; Mongo has a history of stability issues. But I'm getting ahead of myself.

Part of what I do is help run and maintain on-demand microservices for researchers. The caveat is that what we're running is a research system: not only can it go down at any time, we do not back up data. That's not to say we don't do our best, and we absolutely try to go the extra mile to help someone out. In this case, however, we're left with an unhappy customer.

A node that these on-demand services reside on had a hardware failure (a faulty CPU), and everything on said node died hard. Recovery happened as VMs were evacuated, but the damage to the Mongo instance had been done. Three of WiredTiger's metadata files (WiredTiger.wt, WiredTiger.turtle, SizeStorer.wt) were corrupted. Mongo would try to start, WiredTiger would detect an invalid file and die. The end.



Research showed that you can actually cut a ticket with Mongo and they'll attempt to restore your files. In our case, however, things weren't recoverable (supposedly due to how our instance crashed). I've now been left with informing said researcher that:

A) They should have run replica sets
B) They should have taken backups
C) There may be a way to retrieve the data, but it will take forever

I've been told none of this falls on me (and I agree), but I'm still left with a bad taste in my mouth. Look at other data stores (SQL and otherwise) and you don't see something that corrupts an entire instance on a hard reboot. You don't see a community that thinks "run in a replica set and take frequent backups" is an acceptable answer to corruption. They can and should do better. Now that my sysadmin rant is done, I'm gonna go work on this side project I have that's using Mongo.



I'm a hypocrite.

Thursday, May 28, 2015

HTTP Request Isolation Using Docker

About a year ago, a colleague of mine and I were talking about the implications of Docker after he returned from VMworld. One idea discussed was having a container spin up to handle a request, then destroy itself on completion.

There are some inherent security benefits to this, in that each request lives within its own walled garden. One of the downsides is how grossly inefficient it is. Either way, it sounded like a fun project to hack at on a rainy day.

Almost a year later the rainy day finally happened.

https://github.com/jeefy/strobe

I wouldn't even call this "alpha" stage. As a proof of concept it's a nice launching point. In the future I'm hoping to put time into expanding its flexibility: being able to dynamically change which container runs, and handling isolation per-session instead of per-request.

Docker continues to amaze, and DockerCon 2015 should be a fantastic experience.

Monday, May 6, 2013

Google Glass Personalized Medicine

I've been thinking about the potential Google Glass has in medicine. My primary focus has been its role in a physician's daily routine. One idea (though morbid) would be a pathologist using Glass during autopsies to record the procedure and take still shots. Another would be allowing general practice doctors to both record encounters and manage short bursts of patient data without interfering with patient interaction.

This sort of workflow could greatly improve care in any number of ways. That was on my mind when I started imagining what I'd like to be able to do if I had Glass.

"Okay Glass, diagnose cough... *cough*" -- Return list of possibilities with percentage matches

"Okay Glass, what's my heart rate?" -- Pull data from a heart rate monitor

"Okay Glass, what's my blood sugar?" -- Pull data from a glucose monitor

--Glass Notification Pushed-- Blood Sugar Low

--Glass Notification Pushed-- Arrhythmia Detected

The potential of this level of preventative care, of getting this sort of health information easily, is staggering. And the Mirror API makes developing these services easy. I know what I'm doing in my free time.


Wednesday, March 27, 2013

p2pefs - Peer to Peer Encrypted File System

I've finally started writing one of the ideas that's been mulling around in my noggin for years.


What does it do?
It allows people to back up files securely and privately across a network of peers. Peers may specify the maximum amount of storage they will allow. All peers receive and store are pre-encrypted chunks of a file; on receipt, each peer encrypts its chunk a second time with a randomly generated key and sends that key back to the file's source.

Any peer can retrieve any file from the network with:
1) The file's generated UUID
2) The chunk's host key
3) The origin's key

What's next?
My current goal is to get a functional alpha built and tested with several people. I whipped up a utility that encrypts files accordingly, and I'm currently writing the tracker software.

Why?
It seems like a neat idea. I feel more peer-to-peer applications are going to start emerging, and though this idea might not stick, hopefully some lessons will be learned as I try piecing it together. :)

Friday, March 15, 2013

IRC Bots for AFK Productivity!

Recently I've been going crazy hacking together random ideas for the Bitcoin and Litecoin community. The entire cryptocoin movement seems really awesome and is at a stage where the more services offer or promote it, the more traction it will get. Among my creations have been a pay-for-download hosting site (flibbur), a means to purchase Litecoins using USD (CoinFront), and a site whose name speaks for itself: Cookies4Coins.

Both CoinFront and Cookies4Coins require manual action to process orders placed by users. The problem for me was that I don't trust email alerts on my phone to tell me "Hey bro, you have an action item for X." Enter sumdumbot. Sumdumbot has been a love-hate project of mine for over five years. At its core it's one of the most basic IRC bots ever: it logs all links posted in the channel so you can go back and see who posted what, when. It also does weather, has a random recipe generator, and can be the biggest pain in my ass.

Considering Bob and I spend more time in IRC than we do physically talking, it only made sense to have sumdumbot alert our channel of friends when something needed doing. Knowing who we are, it seemed like a neat, novel approach to making sure things got done.

It dawned on me how far sumdumbot has come, though, when I realized all I'd have to do is have the storefront code send a JSON request to sumdumbot's webserver. An hour and some elbow grease later, Cookies4Coins and CoinFront orders now have sumdumbot alerting me in IRC.