
Anyone else into homelabbing?

smanierre

Autocross Champion
I recently got some inspiration to finally make use of equipment I bought a while ago, and basically tore down all my existing stuff to build it back up better. Current setup/future plans:

Router: VyOS running on an HP T620+. I chose VyOS over OPNsense because everything can be updated via an HTTPS API, which lets me automate whatever I need whenever I deploy a new server. All of my static IP addresses are handled as static leases on the DHCP server, so being able to just send the API the server's MAC address and desired IP is much easier than in OPNsense, where I had to update leases manually. On the hardware side, the T620+ was pretty cheap on eBay (~$130) and came with a 4-port gigabit NIC. One port goes to my modem for WAN access, and the other three carry the different VLANs I'm running. The second port is for my LAN VLAN, where the hypervisor and network infrastructure live. The third port is dedicated to the CAM VLAN, which will carry the security cameras I eventually install. The last port runs two VLANs, one for the guest network and one for the IoT network; those two share a port since they won't need as much throughput as the other VLANs, so they should be fine on a single interface.
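For anyone curious, the API call is roughly this (router hostname, subnet, key, and mapping name here are all made up; the real paths depend on your config tree):

```
# Add a static mapping for a new server via the VyOS HTTP API
curl -k -X POST https://router.lan/configure \
  -F key=MY-API-KEY \
  -F data='{"op": "set", "path": ["service", "dhcp-server", "shared-network-name", "LAN", "subnet", "10.0.1.0/24", "static-mapping", "newserver", "mac-address", "aa:bb:cc:dd:ee:ff"]}'

# Second call sets the IP for the same mapping
curl -k -X POST https://router.lan/configure \
  -F key=MY-API-KEY \
  -F data='{"op": "set", "path": ["service", "dhcp-server", "shared-network-name", "LAN", "subnet", "10.0.1.0/24", "static-mapping", "newserver", "ip-address", "10.0.1.50"]}'
```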

Main virtualization host: Dell R620 with two Xeon CPUs (8c/16t each) and 128GB of RAM. It's running ESXi 6.7 with the trial license, and I set up a cronjob that automatically renews it each day so I don't have to worry about it expiring.
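The renewal trick is the one that's floated around forums for years; it's roughly this (paths from memory, and obviously not supported by VMware, so use at your own risk):

```
# /var/spool/cron/crontabs/root on the ESXi host -- nightly eval reset
0 4 * * * /bin/sh /vmfs/volumes/datastore1/renew-eval.sh

# renew-eval.sh: fall back to a fresh evaluation license and restart the agent
rm /etc/vmware/license.cfg
cp /etc/vmware/.#license.cfg /etc/vmware/license.cfg
/etc/init.d/vpxa restart
```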

Current services:
DNS: I'm running a local DNS server so I can resolve everything by hostname instead of having to remember each server's IP address. It's dnsmasq, which pulls entries from /etc/hosts to answer DNS queries. I'm working on a little program that periodically checks the router's active leases and updates /etc/hosts so I don't have to do it manually at all.
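The sync program is still a work in progress, but the core of it is simple: keep a managed block in /etc/hosts that gets regenerated from the lease list, without touching hand-written entries. A minimal sketch of that merge logic (the lease data shape is hypothetical; the real thing would come from the router's API):

```python
# Sketch: merge the router's active leases into a managed block of an
# /etc/hosts file, leaving hand-written entries alone.
BEGIN = "# BEGIN dhcp-leases"
END = "# END dhcp-leases"

def render_hosts(existing: str, leases: dict[str, str]) -> str:
    """Replace (or append) the managed lease block in an /etc/hosts file.

    leases maps hostname -> IP address.
    """
    entries = "\n".join(f"{ip}\t{name}" for name, ip in sorted(leases.items()))
    block = f"{BEGIN}\n{entries}\n{END}"
    lines = existing.splitlines()
    if BEGIN in lines and END in lines:
        # Rewrite only the managed section, keep everything around it
        head = lines[: lines.index(BEGIN)]
        tail = lines[lines.index(END) + 1 :]
        return "\n".join(head + [block] + tail) + "\n"
    # No managed section yet: append one at the end
    return existing.rstrip("\n") + "\n\n" + block + "\n"
```

dnsmasq re-reads /etc/hosts on SIGHUP, so after rewriting the file the program just needs to `kill -HUP` the dnsmasq process to pick up the changes.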

Wiki.js: I've learned my lesson from previous setups and from work that documentation is crucial, so I set up a little wiki where I can write down configurations, how-tos, installation instructions, etc. as I set up new stuff.

TP-Link Omada controller: I have a TP-Link EAP620 wireless access point that I'll be setting up soon, so I have the controller software running to manage it and any future APs I add to improve coverage throughout the house.

Future services:
Jellyfin
Home Assistant
*arr suite
Some sort of dashboard
Prometheus monitoring with Grafana dashboard

Storage:
The R620 has 4x1.2TB hdds in it that are used as the pool for virtual machine storage.
I'm in the process of building a NAS that will have all my media storage and anything else that isn't part of the virtual machines. I have the CPU, motherboard, and 4x14TB hard drives. I'm slowly buying parts to finish it as they go on sale. The plan is one 10 gig sfp connection direct to the R620, and another 10 gig sfp connection to the main switch so multiple hosts can access data at gig speeds at the same time. Any new network runs in the house are going to be with CAT 6A so it will support 10 gig in the future.

Deployment strategy:
I'm attempting to automate as much of the deployment process as I can, so that with one command a machine gets deployed and configured properly. To facilitate this I'm using a few different tools:

Packer: I'm using Packer to generate golden VM images that I can clone into new machines whenever I need them. They use CentOS Stream 8 for the OS, and I have three different sizes generated depending on what the application calls for. I install all the base packages I need along with my SSH key so I can log in without a password. I also set up a cronjob to run ansible-pull, which I'll talk about in the next section.
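To give an idea of the shape of it (every name and path here is made up, and the builder details differ between vCenter and a standalone ESXi host):

```
source "vsphere-iso" "centos-small" {
  vm_name       = "golden-centos-small"
  guest_os_type = "centos8_64Guest"
  CPUs          = 2
  RAM           = 4096
  iso_paths     = ["[datastore1] iso/CentOS-Stream-8.iso"]
}

build {
  sources = ["source.vsphere-iso.centos-small"]

  provisioner "shell" {
    inline = [
      "dnf -y install python3 vim",                                    # base packages
      "mkdir -p /root/.ssh && cat /tmp/key.pub >> /root/.ssh/authorized_keys",
    ]
  }
}
```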

Ansible: Instead of the traditional Ansible push-style architecture, I'm using ansible-pull so the machines auto-update their configs from a remote Git repo. Once a machine is created there's nothing I have to do manually: it automatically installs all the necessary software and configures everything to work. Each machine runs an Ansible playbook based on its hostname, so I can use the same cronjob on every machine and the software it gets is determined entirely by its hostname.
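Concretely, the cronjob baked into the image looks something like this (the repo URL is a placeholder), with a single entry playbook that picks roles by hostname:

```
# /etc/cron.d/ansible-pull -- identical on every machine
*/30 * * * * root ansible-pull -U https://git.lan/homelab/ansible.git -o local.yml

# local.yml -- one playbook, role chosen by the machine's own hostname
- hosts: localhost
  connection: local
  tasks:
    - include_role:
        name: "{{ ansible_hostname }}"
```

The `-o` flag makes ansible-pull run the playbook only when the repo has actually changed, so the cron interval can be fairly aggressive.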

Terraform: To deploy the actual virtual machines I'm using Terraform, since that's what I'm familiar with from work. Having all my machines defined in code lets me easily see what should be running versus what is, and even if everything gets wiped out I can redeploy it all with one command, after which Ansible takes care of getting all the services up and running.
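As a rough example of what one of those definitions looks like (resource names and data sources are placeholders for my environment):

```
resource "vsphere_virtual_machine" "wiki" {
  name             = "wiki"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.vms.id
  num_cpus         = 2
  memory           = 4096

  network_interface {
    network_id = data.vsphere_network.lan.id
  }

  disk {
    label = "disk0"
    size  = 40
  }

  clone {
    # Clone from one of the Packer golden images
    template_uuid = data.vsphere_virtual_machine.golden_small.id
  }
}
```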

Remote access: Fiber was brought into our neighborhood recently, so I hopped on that with symmetrical 500Mbps speeds for the same price I was paying Comcast for 60 down / 5 up. The only downside is they use CGNAT, so I can't expose any of my services to the internet directly. To get around this I set up a small instance in GCP running WireGuard with a tunnel back to my router. This lets me connect from my laptop wherever I have internet access as if I were on my LAN directly. I can also easily route all of my internet traffic through my house when I'm remote, which is nice. Eventually I'll also put a reverse proxy on the GCP server that can forward requests down the WireGuard tunnel to whatever service is being requested.
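The key to making WireGuard work behind CGNAT is that the home side has to dial out and keep the tunnel alive, since nothing can reach it from the internet. Stripped down, the two ends look something like this (keys, addresses, and subnets are placeholders):

```
# GCP instance, wg0.conf
[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <gcp-private-key>

[Peer]   # home router -- note: no Endpoint, it's unreachable behind CGNAT
PublicKey  = <router-public-key>
AllowedIPs = 10.99.0.2/32, 10.0.0.0/16   # tunnel IP plus the home LAN

# Home router side
[Interface]
Address    = 10.99.0.2/24
PrivateKey = <router-private-key>

[Peer]   # GCP instance -- the home end initiates and keeps the NAT hole open
PublicKey           = <gcp-public-key>
Endpoint            = <gcp-public-ip>:51820
AllowedIPs          = 10.99.0.0/24
PersistentKeepalive = 25
```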

I'm curious to see if anyone else has some setups they are interested in sharing or has any ideas in ways I could improve mine or new things to host.
 

helushune

Ready to race!
It's nice to see someone running VyOS. It's a great project and I wish it had more traction. I use it in network simulations frequently and deployed it at my last job as a way to save on licensing costs. It's very capable and has a great development team behind it that's very active in the community. And it's actually open source, unlike pfSense.

I am to an extent. When I see the word "homelabber", images come to mind of Reddit hoarders of old servers with no direction and massive power bills. My home network is more like an enterprise environment than a typical homelab. I've been heavily involved in the open source world since running Slackware Linux on a 386, and I prefer to hand-build everything as opposed to using a turnkey solution that might be rife with useless middleware and security holes.

My router is a PC Engines apu2c4 running FreeBSD. It runs the typical router junk: pf, isc-dhcpd, BIND, UPnP allowed to certain VLANs, an Avahi mDNS reflector for certain VLANs, rtadvd for IPv6 router advertisements, OpenVPN and WireGuard, and chrony. I have two additional physical DNS servers, another apu2c4 and a Raspberry Pi 3, both running FreeBSD and Unbound, relaying back to the router for local resolution or going to the root servers directly. There's also a NAS running FreeBSD and ZFS.
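For anyone wanting to copy that resolver layout: it's basically a couple of forward-zones in unbound.conf pointing local names back at the router, with everything else recursed from the roots. Something like this (zone names and addresses made up):

```
server:
  do-not-query-localhost: no

# Local forward and reverse zones resolve via the router
forward-zone:
  name: "home.lan"
  forward-addr: 192.168.1.1

forward-zone:
  name: "1.168.192.in-addr.arpa"
  forward-addr: 192.168.1.1

# No forwarders for anything else, so Unbound
# recurses from the root servers itself.
```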

Home switching is mostly Juniper with some pre-HP Aruba and MikroTik. WiFi is Aruba; I primarily use Instant but also have an older 650 controller for testing. There are VLANs for switch management interfaces, the Aruba wireless backend, server management, SAN communication, virtualization cluster management and dataplane traffic, gaming devices, guest WiFi, untrusted devices, and a couple that only go straight to the internet.

Virtualization-wise, I have an Intel NUC running Fedora and GNS3 for network simulations. There's also a small two-node oVirt cluster with a FreeBSD ZFS SAN backend that was more of an excuse to learn iSCSI and KVM/QEMU. I mostly use it to keep my skills sharp and play with open source projects. The ones that stick are Elasticsearch/Open Distro syslog capture, Icinga for monitoring, Graphite and Grafana for metrics, and Wazuh for SIEM. With what Red Hat did to CentOS, I'm still trying to figure out what to do, as I'm not a fan of Ubuntu and FreeBSD's bhyve doesn't really have a cluster manager yet. OpenStack is overkill for what I'd use it for.

For deployment/configuration I use Ansible and SaltStack. Pushing something into Git and having it auto-deploy to Salt minions is a great thing.

I also have a couple VPSs, FreeBSD, that run my personal domains and were more for learning nginx, Knot, DNSSEC, DMARC, etc.

I am restoring a Sun UltraSPARC II machine and DEC Alpha, just to play with alternative architectures. I also have a MiSTer FPGA that can emulate some older PC/SPARC/ARM/Amiga/PowerPC machines at a hardware level that's fun to play with but has its own limitations.
 

smanierre

Autocross Champion
Yeah, VyOS is great. I hadn't even heard of it until I started looking more in depth for a router setup that would work better for me. It definitely has a steeper learning curve than something like pfSense or OPNsense, especially being all CLI based, but I think it allows a lot more flexibility, and once you get the hang of configuring everything it's not that bad. Your setup definitely sounds much more refined than mine. Unfortunately, at work most of our stuff is serverless apps and security is a separate team, so I don't get to play with all those tools. I am planning on tinkering with them on my own time, though, and getting some good monitoring and security stuff set up once I finish my NAS and get a solid base and deployment workflow established.
 

helushune

Ready to race!
I grew up on the CLI and prefer it to GUIs; it's so much faster. The VyOS CLI is very similar to Juniper Junos; I think they even mention in the docs somewhere that they used Junos as inspiration, but I might be thinking of its parent project, Vyatta. You'd probably feel right at home on Juniper hardware. I love the commit model and think it's superior to doing things in real time. Using commit-confirm to test changes and automatically roll them back if there's a mistake somewhere, without having to physically reboot the hardware, is fantastic, especially if the hardware is remote.
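For those who haven't seen it, the flow is only a couple of commands, and if you lock yourself out the box reverts on its own:

```
# Make a risky change, then commit with a 10-minute safety net
set firewall name WAN_IN default-action 'drop'
commit-confirm 10

# If you still have access, make it permanent before the timer runs out:
confirm
```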

Definitely check out Icinga 2 for monitoring. It's an open fork of Nagios, rewritten in C++ but still compatible with Nagios plugins, so the extensibility is vast. The community is good, the devs are responsive, and the documentation is pretty good. The only thing I'm not impressed with is its Windows monitoring, as it's kind of a mess of PowerShell scripts that don't really have the same functionality as the rest of its monitoring capabilities. I think there are a couple of alternative Windows agents, but it's been a while since I've been in the Microsoft world.
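A taste of the Icinga 2 config DSL, since it's a big step up from flat Nagios configs (the host and address here are just examples):

```
object Host "wiki.lan" {
  import "generic-host"
  address = "10.0.1.20"
  check_command = "hostalive"
}

object Service "http" {
  import "generic-service"
  host_name = "wiki.lan"
  check_command = "http"
}
```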

Something I've wanted to play around with is StackStorm and tie it in to Icinga to automatically run Ansible plays based on Icinga alerts to create a self-healing solution. It seems very powerful in what it can do and has a lot of integrations.
 

smanierre

Autocross Champion
I'll definitely look into Icinga 2; I'd never heard of it before, but it looks promising. Luckily I don't plan on running any Windows machines, so I shouldn't have to deal with that. StackStorm also sounds really interesting. I definitely have a lot of stuff to explore and experiment with.
 

Acadia18

Autocross Champion
What is everybody using for hard drives these days? I need a new one now that the 4TB in my server shit the bed. Looking at something in the 6TB–8TB range. While it would probably be the smarter idea, it's really not worth it to try and set up a RAID array for my stuff.
 

shovelhd

Autocross Champion
I don't homelab, but at work we build our own white-box computers that are installed in our machines. We use Seagate Barracuda hard drives exclusively. Some of our machines in the field are decades old and still running the original hard drives. Seagate doesn't provide very good diagnostic tools, but their warranty service is the best. We configure the drives in a RAID 1 (mirror) inside a three-slot hot-swap cage, so we can replace a failed drive without shutting the machine down.
 

smanierre

Autocross Champion
I have four WD Easystores sitting on the shelf that I picked up last Black Friday. I'm gonna shuck them and toss them in my NAS once it's done. I got 14TB ones for ~$200 each. Can't speak personally on performance, but I've heard of a lot of people using them without any issues.
 

helushune

Ready to race!
Acadia18 said: "What is everybody using for hard drives these days? I need a new one now that the 4tb in my server shit the bed. Looking at something 6tb - 8tb."
Spinning rust or SSD? For spinners, I'm partial to Western Digital and HGST. For SSD/NVMe, I tend to go with Samsung.
 