Although I have previously documented my homelab, there have been some changes as well as a house move. This is part 1 of my 2023 update.
This is how it was around a week ago, and as you can see it’s a bit of a mess. The Lack ‘Rack’ was only meant to be temporary, and I’ve had it for nearly a year. The cupboard is (very!) small, and as a result I am limited on space and heat management.
I wanted to tidy it up and make better use of the space, as well as get my NAS off the floor to stop it acting as a vacuum for dust!
Here’s a breakdown of the kit I have, a BOM if you will.
Ubiquiti UDM SE
This acts as my main firewall/router as well as the NVR for my UniFi cameras. The 2.5 GbE port is connected to an Openreach ONT, which provides 900/100 Mbps FTTP from Aquiss. I have a /29 IPv4 range and a /56 IPv6 range. Static addressing is useful to me as I run site-to-site VPNs to cloud services, mainly as an extension to my homelab, but also for testing purposes for the different projects I end up involved in. Also, as I do a lot of cloud work, my static IPs can be added to firewall rules. I run several VLANs: the main network, an IoT network, a camera network, a homelab network, etc. All have firewall rules to my liking, mainly to stop those pesky IoT devices communicating with anything other than what is required for them to function.
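To give a rough idea of the intent (the real rules live in the UniFi controller, and the subnets and ports below are invented purely for the example), the IoT isolation boils down to something like this:

```python
from ipaddress import ip_address, ip_network

# Hypothetical subnets for illustration only; the actual VLANs and
# rules are configured in the UniFi controller.
MAIN = ip_network("192.168.10.0/24")   # trusted devices
IOT = ip_network("192.168.20.0/24")    # smart plugs, bulbs, and friends

def iot_egress_allowed(src: str, dst: str, dst_port: int) -> bool:
    """The gist of the IoT rules: IoT devices get internet access on a
    few ports, but can never initiate traffic into the main LAN."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    if src_ip not in IOT:
        return True                      # these rules only constrain IoT sources
    if dst_ip in MAIN:
        return False                     # block IoT -> trusted LAN
    return dst_port in (53, 123, 443)    # DNS, NTP, HTTPS outbound only

print(iot_egress_allowed("192.168.20.5", "192.168.10.2", 443))  # False
print(iot_egress_allowed("192.168.20.5", "8.8.8.8", 53))        # True
```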
UniFi US-24-Pro (not yet connected, hence not in the picture)
One 10 Gb uplink is connected to the UDM SE. I got this because, whilst the UDM SE does have 8 built-in switch ports, it’s not a proper switch per se, and as such it lacks some features, particularly support for RSTP or even STP. If you’re not in the habit of creating network loops, or using LACP between switches and/or hosts, this ordinarily isn’t an issue. However, if you use Sonos equipment (which I do), or Sky Q (which I also currently do), then they form their own meshes and can wreak havoc, as it seems neither vendor is particularly interested in supporting STP/RSTP. For any hardwired Sonos device on a UniFi switch, change the switch settings from RSTP to STP and lower the bridge priority; this should cure most issues. Also, if you route between VLANs, which most homelabbers will, the SE isn’t able to do it at 10 Gb speeds from my testing. Hence this switch sits as the first link in the stack, as it will route (with limitations) between VLANs. I also needed some ports, and since my PoE requirements are met elsewhere, when I saw this switch come up cheap on Facebook Marketplace I couldn’t resist. If only Ubiquiti implemented RSTP on a ‘Pro’ device, I might have worked something else out. Otherwise, the UDM SE has been rock solid.
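For anyone wondering why lowering the bridge priority helps: STP root bridge election simply picks the lowest bridge ID (priority first, MAC address as the tiebreaker), so a deliberately low priority keeps your actual switch as root instead of a Sonos speaker. A toy sketch of the election, with made-up MAC addresses:

```python
# Toy sketch of STP root bridge election: the lowest (priority, MAC)
# pair wins. Lowering the UniFi switch's priority guarantees it stays
# root rather than a Sonos or Sky box. MACs here are invented.
bridges = {
    "usw-24-pro": (4096, "74:ac:b9:00:00:01"),   # priority lowered from the 32768 default
    "sonos-one":  (32768, "48:a6:b8:00:00:02"),
    "sky-q-box":  (32768, "90:4d:4a:00:00:03"),
}

root = min(bridges, key=bridges.get)
print(f"root bridge: {root}")  # usw-24-pro
```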
UniFi US-16-XG
I use the remaining 10 Gb port on the US-24-Pro to give it connectivity into the rest of the network. This is my main lab switch and I’ve had it for a few years now. Another find on Facebook Marketplace, it’s been central to my homelab since its purchase. I have two ESXi hosts using two links each, a third ESXi host (which runs my Plex server) using a single link, then two links to my NAS, and the final link to a switch in my loft. I’d love to replace it with a USW-Aggregation due to it being silent, but that’s limited to 8 ports and I’m using 8 currently, so it’s not really viable, as I may upgrade the Plex server to two NICs in the future, plus there’s no harm in having spare 10 Gb connectivity. There’s the USW-Aggregation Pro of course, but they are very pricey, and also have fans, so I will stick with this one until it dies.
ESXi host #1 and #2 (top left and bottom left)
Originally a Dell Precision 3440 Small Form Factor with the following spec:
Intel® Xeon® W-1290 10-core CPU w/ HT
128GB RAM (purchased second hand on eBay)
1x Samsung 256GB NVMe (as a boot device)
1x Samsung PM983 NVMe 980GB
1x Corsair Force MP510 NVMe 1820GB
Dual-port Intel 10 Gb NIC (an X520, I believe)
I lasted less than a year before I came to the realisation that the Dell SFF chassis cannot provide enough cooling for a 10-core ‘baby’ Xeon, the NIC, RAM, and NVMe drives. When loaded, I would have stability issues, and the noise of the small fans when busy was too much. So the hunt began to transplant the CPU into something else. Dell motherboards are not really compatible with anything but the case they came in, so I started hunting for a motherboard and came across the Gigabyte W480 Vision D, which supports the CPU and RAM, has lots of expansion, and 3x NVMe slots on the board underneath a huge heatsink. I grabbed a cheap one from eBay and found a refurbished one elsewhere. A couple of spare 120mm fans, a spare CPU cooler, and a spare PSU (plus a cheap one from a friend) later, and the transplant into a pair of SilverStone Fara R1s was complete. They run silently, and I have since had zero issues with stability, even running VMware vSAN.
No, I couldn’t be bothered buying a full height PCI bracket for this particular host!
ESXi host #3 (on top of the Lack table)
Another cheap purchase locally, I love a bargain.
Intel i7-9700 (non-K)
B360M-K board
32GB RAM
Single-port 10 Gb NIC
Spare 1TB NVMe I had lying around (yes, I’m a hoarder)
This runs my Plex server with the iGPU passed through for transcoding. It also runs a domain controller. When I spin my lab up, I can use it to host a few more VMs, and it handles it all really well.
I have more networking equipment, but the rest is largely unrelated to my homelab apart from my NAS, so I’ll exclude it from this post.
Synology DS1821+
This is my NAS. I’ve swapped between QNAP and Synology for years, however I’m now firmly back in the Synology camp after some issues with my old QNAP, which their support were trying to blame on anything but their software. This box runs more than my lab: loads of containers, and if it actually had any transcoding abilities (I share my library with some users who cannot direct play) it would also host my Plex server. It’s the centre of most services in the house, such as Homebridge, Home Assistant, Grafana, etc. It’s also the backup repository for Veeam, and thanks to the upload speed I am lucky enough to have, it offloads the most important things into Synology C2. I also have a couple of LUNs for when I can’t use vSAN or I want to rebuild it.
vSphere-wise, I’m running a DC as mentioned, vCenter, an NSX Edge, HCX, and lots of test VMs. I’m lucky to have access to a VMC on AWS environment, and leveraging site-to-site VPN, the most resource-hungry VMs such as NSX managers, vRLI, vRNI, etc. all run there, plus a second DC. Ultimately this means I can power down the Xeon hosts until I need them, and with recent energy price rises in the UK, it’s been very helpful!
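For a rough sense of the saving, here’s a back-of-the-envelope calculation. The wattage and unit rate are assumptions for illustration, not measurements of my hosts:

```python
# Back-of-the-envelope saving from powering down the two Xeon hosts.
# Both figures below are assumptions, not measurements.
hosts = 2
watts_per_host = 120    # assumed average draw per host under light load
pence_per_kwh = 34      # roughly the UK price cap at the time of writing

kwh_per_day = hosts * watts_per_host * 24 / 1000
cost_per_month = kwh_per_day * 30 * pence_per_kwh / 100

print(f"~£{cost_per_month:.0f}/month saved with both hosts powered off")
```

Under those assumptions it works out to roughly £59 a month, which adds up quickly.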
In a bid to get the ‘lab cupboard’ sorted, I did a fair amount of research on racks, and I’ve ordered something which I hope to get everything into soon. I’ll share progress in part two, coming soon. In hindsight I should have ordered rack-mount cases for the ESXi hosts, but it’s too late for that. Maybe in the future if I spot any deals.
As ever, thanks for reading.
No RSTP? How odd.
My two USW Pro 24 PoE switches have it. Plus I even rely on the feature these days (not previously, because it was broken). The baby Flex switches do not have RSTP, and neither does the UDM Pro, but everything else does. The Aggregation, the 16 XG, even the US-8s have RSTP.
I’ve got twin armoured 10 Gb fibre links between the house (Agg + Pro 24) and shed (16 XG + Pro 24), plus a 1 Gb copper backup connecting both Pro 24s. Without RSTP this arrangement would create a loop. It doesn’t. It just works, and picks the 1 Gb link to block.
Hi Steve, I meant that the UDM-SE doesn’t support RSTP. The USW-Pro-24 absolutely does support it.