Rack Equipment
After years of using an Ikea Rast "rack" (two side tables braced together, with rails added), I am finally moving to an actual server rack to house my lab environment/production network. The rack is the StarTech 22U 36in Knock-Down Server Rack Cabinet (StarTech.com). I selected this one for a few key reasons:
- I already have 14U of equipment on/in my current rack that needs a place to live, and I want room for expansion.
- As I have a full-size server chassis, I needed a deeper rack to accommodate 30"-deep equipment.
- Free Amazon shipping
- Most importantly, it was not preassembled, allowing it to be "easily" moved into my 2nd-floor apartment.
In addition to the rack itself, I made several purchases to move the equipment into it:
- 2x 1U shelves for my Synology DS1620+ and supporting cables
- 0U vertical cable management rings
- M6 cage nuts
- While I hate cage nuts, they are better than pre-threaded racks. I looked into both the /dev/mount and Rackstuds from Patchbox. The /dev/mount looks really good but is limited to 1U devices, so some of my equipment would still need another mounting method. Rackstuds were my 2nd choice, but the version to buy depends on the thickness of the rack frame, which isn't on the spec sheet, and I am too impulsive to wait for the rack to arrive and measure it with calipers. Both options were also 2-3x the cost, which was a factor. I also discovered Clik-Nuts, which add a mechanism to squeeze the cage nut into place without ruining your knuckles, but they were far more costly and come in a higher quantity than I will ever use.
- RJ45 coupler keystones
- Velcro for cable management
- Universal rails for my server (no factory rails were included with it)
All of my current networking, server, and power equipment will be migrating into the rack. This includes a 1U bracket holding 4 Raspberry Pis; my Unifi equipment (UDM Pro, 48 Pro PoE switch, PDU, and an AP-6-LR); a patch panel with keystones for all of the cables coming off the back of the equipment, to keep the front presentable (plus 4 HDMI ports for the Pis); a Wattbox UPS; and a Dell R710 server running Hyper-V. Most of the placement will be determined once the rack is built, to allow for easier access to the back of the equipment when needed.
General best practices I have learned over the years: make sure your equipment is properly aligned to the rack units. This rack has each U marked off, but some only have the holes. I have been very frustrated going to do a simple install and having to move 3-4 pieces of equipment because someone forced something into the wrong position. The heaviest equipment should go on the bottom - typically the UPS, then the servers. Cable management is super important - not just so everything looks presentable, but because it makes documentation and troubleshooting so much easier (and usually means they actually get done). Access points should not be put inside the steel box; sitting on top hasn't caused any problems.
Currently the Synology and AP sit on top of the "rack," with the gear in the following positions:
| Rack layout (top to bottom) |
| --- |
| Raspberry Pis |
| UDM Pro |
| Patch Panel |
| 48-Port PoE Switch |
| PDU Pro (2U) |
| PDU Pro (2U) |
| Shelf space |
| Shelf space |
| Shelf space |
| Shelf space |
| Empty |
| Empty |
| Dell R710 (2U) |
| Dell R710 (2U) |
| Wattbox (2U) |
| Wattbox (2U) |
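As a quick sanity check before committing anything to screws, I like to total the allocated units against the frame. Below is a minimal Python sketch; the device names and U heights are my own assumptions pulled from the layout above, so adjust them to match your gear.

```python
# Minimal sketch: tally the rack layout against the 22U frame.
# Device names and U heights are assumptions taken from the table above.
RACK_SIZE_U = 22

layout = [  # (equipment, height in U), listed top to bottom
    ("Raspberry Pi bracket", 1),
    ("UDM Pro", 1),
    ("Patch panel", 1),
    ("48-port PoE switch", 1),
    ("PDU Pro", 2),
    ("Shelf space", 4),
    ("Empty", 2),
    ("Dell R710", 2),
    ("Wattbox UPS", 2),
]

used = sum(height for _, height in layout)
print(f"Allocated: {used}U of {RACK_SIZE_U}U ({RACK_SIZE_U - used}U free)")
assert used <= RACK_SIZE_U, "layout exceeds rack capacity"
```

Running it shows the rows above only account for 16U of the 22U frame, which leaves room to shuffle things around later.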
Overall I'm happy with the setup except for the patch panel and the RPis. Being sandwiched between other items makes getting to the back challenging; I've had to remove the rack screws and pull out the brackets just to access the equipment. I don't change out cables or SD cards often, but with easier access (and no screwdriver required) I'll be motivated to do it more often.