Downtime is the enemy of any homelab, mine in particular. So I finally decided to do something about it and made my homelab highly available. I was only able to do that because I already knew high availability existed, so if you’ve never heard of it, here’s why you should know about this one trick.
There’s always maintenance to do in a homelab
And I always push maintenance off to have as little downtime as possible
Credit: Patrick Campanale / How-To Geek
When I first started homelabbing, I was performing maintenance on my server very often. From RAM changes to fixing operating system issues, swapping storage, or installing new hardware, my server was down more than it was online.
Having my server down so much early on made me hesitant to host important services on my own hardware, so I initially pushed those ideas down the road until things started to settle. Eventually, I got into a groove with my servers, and maintenance became something that didn’t happen nearly as often.
The problem is, there’s still always something to take care of in a homelab. It could be relocating servers, putting in a new network card, adding more RAM, installing a graphics card, or simply changing the IPs on your network. Heck, even operating system or security updates on the server count as maintenance—and I almost always push my maintenance off for as long as I can.
Why do I push my maintenance off? Because it still means downtime, and both I and the others in my household have come to rely on the server’s uptime. So I have to schedule maintenance and downtime for when nobody is on the server, and that’s just a hassle—so I finally deployed a high availability cluster in my homelab to solve the problem.
High availability makes maintenance seamless
Services automatically move to the next available node
Credit: Patrick Campanale / How-To Geek
If you’ve never heard of high availability, it’s the one trick that every homelabber should at least know about. Essentially, you want three or more servers (it works best with an odd number of servers) joined together in a cluster. Those servers need one central storage location that they all share, and a NAS works great for that.
It’s best to distribute the services you self-host across all the nodes so that no single node is running everything—that would defeat the purpose of high availability. Whenever one node goes offline, the services that were running on it simply get spun up on another node in the stack.
This happens through a process called quorum. Basically, when a system goes offline, the other systems in the cluster “vote” on who gets the services that still need to be online. Then the virtual machine or container that is no longer accessible, since its host is offline, comes back online on whichever node won the vote.
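The idea above can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions—the node names and the "least-loaded survivor wins" placement rule are made up for the example—and not how a real cluster stack like Proxmox or Corosync actually implements quorum:

```python
# Toy sketch of quorum-based failover: the surviving majority decides
# the cluster is still valid, then the dead node's guests are restarted
# on whichever surviving node currently hosts the fewest guests.

def has_quorum(online, total):
    """True while a strict majority of cluster nodes is online."""
    return online > total // 2

def failover(placement, failed_node, survivors):
    """Move every guest off the failed node onto the survivor
    currently hosting the fewest guests."""
    for guest, node in placement.items():
        if node == failed_node:
            placement[guest] = min(
                survivors,
                key=lambda n: sum(1 for v in placement.values() if v == n),
            )
    return placement

# Three-node cluster: node1 dies, but 2 of 3 nodes keep quorum.
placement = {"pihole": "node1", "freshrss": "node1", "website": "node2"}
survivors = ["node2", "node3"]

if has_quorum(len(survivors), total=3):
    failover(placement, "node1", survivors)

print(placement)
```

With only one of three nodes left, `has_quorum` returns `False` and nothing is moved—that is the point of using an odd node count: a split cluster can never have two "majorities" fighting over the same guests.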
Eventually, when the node you’re doing maintenance on comes back online, the virtual machines or containers that were on it are migrated back, and nothing misses a beat.
Depending on your hardware (and what operating systems or services you’re running), downtime here can be anywhere from a few seconds to a minute—basically, however long it takes the virtual machine or container to boot up.
High availability doesn’t really kick in for simple things like a VM reboot, but it’s great for when you need to swap hardware or when you’re moving your homelab from one location to another.
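As a concrete example, if the cluster happens to run Proxmox VE (one common choice—the article doesn’t name a platform), each guest that should survive a node failure gets an entry in the HA resource configuration. A hypothetical fragment with made-up guest IDs might look like:

```
# /etc/pve/ha/resources.cfg (hypothetical guest IDs 100 and 101):
# the HA manager keeps these guests in the "started" state somewhere
# in the cluster, restarting them on another node after a failure.
vm: 100
	state started
	max_restart 2

ct: 101
	state started
```

Once a guest is registered like this, maintenance becomes a matter of draining or shutting down a node and letting the HA manager relocate its guests.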
Your homelab acts as one big server, but there’s one big catch
Not every service needs to be made highly available
With high availability, your homelab essentially acts as one big system that just passes virtual machines or containers back and forth. However, it’s not without its faults.
I run Plex in my homelab, and that’s one service I won’t make highly available. While it might seem like the perfect service to have highly available, it just doesn’t work well in an HA cluster.
Plex relies heavily on metadata and hardware transcoding. As such, it’s constantly writing or rewriting data, and it needs dedicated hardware passed through to it.
While possible, it can be quite difficult to set up PCIe passthrough of a graphics card (either integrated or dedicated) to a virtual machine and have that same hardware be available on another system.
Let’s say you have three old office PCs that all have slightly different specs and processor generations. The passthrough hardware IDs of those PCs’ integrated graphics are going to be different, which makes the Plex and VM configuration hard to keep highly available.
Plus, the Plex Docker configuration can sometimes require hardware UUIDs to be passed through from the host to work, or even configured in the Plex settings UI. Both of these things make high availability setups quite difficult to configure.
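To illustrate why this ties Plex to one box: a typical Docker setup (a hypothetical compose file here, using the common linuxserver.io image) passes the host’s render device straight into the container, and that device and the GPU behind it differ from machine to machine:

```yaml
# Hypothetical docker-compose.yml for Plex with hardware transcoding.
# The /dev/dri device is specific to THIS host's GPU; migrate the
# container to a node with different hardware and the mapping (and any
# transcoder settings tied to it) no longer matches.
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    devices:
      - /dev/dri:/dev/dri   # host iGPU render node passed into the container
    volumes:
      - ./config:/config
      - /mnt/media:/media
    network_mode: host
    restart: unless-stopped
```

A Pi-hole or FreshRSS container has no `devices:` section at all, which is exactly why those services migrate cleanly and Plex doesn’t.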
However, high availability is perfect for services that don’t require hardware passthrough. Think Audiobookshelf, Pi-hole, FreshRSS, Minecraft servers, websites, and more—basically, anything that doesn’t rely on dedicated hardware being passed through to a VM, and then to a container.
Brand
ACEMAGIC
CPU
i7-14650HX
The ACEMAGIC M5 mini PC is perfect for setups that need a high-performance desktop with a small footprint. It boasts an Intel i7-14650HX 16-core, 24-thread processor and 32GB of DDR4 RAM (upgradable to 64GB). The pre-installed 1TB NVMe drive can be swapped out for a larger one, though, and there’s a second NVMe slot for extra storage if needed.
Brand
KAMRUI
CPU
i5-14450HX
The KAMRUI Hyper H2 Mini PC features an Intel Core i5-14450HX 10-core, 16-thread processor and 16GB of DDR4 RAM. The included 512GB NVMe SSD comes with Windows 11 pre-installed, so the system is ready to go out of the box.
Brand
GEEKOM
CPU
AMD Ryzen 5 7430U
Graphics
AMD Vega 7
Memory
16GB DDR4 SO-DIMM
Storage
512GB NVMe (expandable)
The GEEKOM A5 mini PC packs 16GB of user-replaceable RAM, a user-swappable NVMe SSD, plus two other storage slots, giving you plenty of upgradability in this compact system. The Ryzen 5 processor has plenty of power for general tasks, and it’s even great for lightweight gaming and CAD work too.
High availability isn’t for everyone, but it’s worth knowing about
Running a highly available setup in a homelab isn’t for the faint of heart. You really need at least three similar computers that you plan to keep on 24/7/365 for it to work well. That can be a bit out of reach for those just getting started in homelabbing, and that’s okay.
I ran my homelab without high availability for over half a decade before I finally had the hardware to bring a three-node cluster online. Even then, I didn’t set up every virtual machine for high availability, only the ones I really couldn’t stand going down.
You might not deploy high availability in your homelab right now, but you should definitely know about it and at least keep it in your back pocket for when you do have a setup capable of handling it.

