Despite the name, there are actually three labs, depending on your perspective.
The first is my Linux lab. This is a single server in colocation running KVM as its hypervisor. Pretty much anything that needs to face the Internet will be done in this lab, including this site.
It’s mainly for the experience of having to manage something that’s truly remote. “Truly” here means it’s not really possible for me to get to $theThing without some great effort. Also, I can say (and not lie) on a resume that I’ve managed $thisThing in production on enterprise hardware hosted in colocation. Finally, being in colocation forces me to be a bit more careful with some of the choices I make with whatever I’m working on. I can’t be summoning Colocation America staff to set me up with some kind of IP-KVM all the time because I’ve broken stuff.
From a cost perspective (in my case, $75/month), colocation initially doesn’t make sense. What colocation does make much easier is access to public IP addresses. You could argue that I could run public-facing VMs on a VPS service; however, with the stuff I plan on doing (especially if I get NextCloud going), the storage needs would make a VPS more expensive than I’m willing to pay. There are also some non-cost reasons why colocation made sense for me. First, the server that’s in colocation is a 1U rack server. In my two-bedroom apartment, that was pretty loud for the room where it lived. While I could deal with the noise, not having it there is nice. Also, my server is in a more physically secure environment. Methinks it’s easier to break into my apartment than the colo facility.
You have a single server in colocation with no router and no switch. How does the traffic flow?
That took a bit of planning, but here’s how it works. Like any good physical server, my little guy has a hypervisor (KVM) installed on the bare metal. The one NIC I have attached to the colo’s infrastructure is bridged to one of the virtual NICs attached to a firewall VM. Thus, all traffic (including traffic destined for the hypervisor itself) is inspected by the firewall VM. From there, if the traffic isn’t dropped, it heads on to its destination. Getting back to the needing-to-be-a-bit-careful-since-I’m-in-colo theme, this is an example of where careful consideration was needed. From what I’ve learned, VMs in KVM don’t autostart by default, so before I shipped off my server, I had to test and make 100% sure that the firewall VM does auto-start and that its configuration was valid. How I tested the configuration will be another post sometime.
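For the curious, the checks above can be sketched with a few `virsh` commands. This is a minimal sketch, not my exact setup: the host bridge name (`br0`) and the firewall VM’s domain name (`fw`) are assumptions, so substitute your own.

```shell
# Assumed names: host bridge "br0", firewall domain "fw".

# See which interfaces are enslaved to the bridge that faces the colo NIC:
ip link show master br0

# Confirm which bridge the firewall VM's virtual NICs attach to:
virsh domiflist fw

# VMs in KVM/libvirt don't autostart by default; flag the firewall VM
# so it comes back up with the host after a reboot or power loss:
virsh autostart fw

# Verify the flag took effect before shipping the server off:
virsh dominfo fw | grep -i autostart
```

If that last command doesn’t report the autostart flag as enabled, nothing behind the firewall VM (including the hypervisor’s own management access) would be reachable after a reboot, which is exactly the situation to avoid when the box is a drive away.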
Coming up in part two will be my newly deployed on-premises lab: The Windows Lab.