The Trials and Tribulations of Hardware Ownership or: How I Learned to Stop Worrying and Love the Fan
It's been a long time since we've had a physical server here at Core dna. All of our client infrastructure runs, of course, on our Rackspace Hybrid Cloud system; the gains in scalability and redundancy are too great to ignore. Recently, however, we took custody of a Dell PowerEdge that was too well specced to just leave lying around, so we decided an internal development server with plenty of hardware would help make our development and support processes simpler and faster. There was just one problem: the fans on this thing would put a passenger jet to shame and could be heard with perfect clarity everywhere in the building.
Of course, the server room in our building sits right next to our CEO and finance team, and anyone who has spent time in an office knows those aren't great people to annoy, much less stand next to all day with what amounts to a hairdryer. So now we had a problem, and having waxed lyrical about making people's lives easier, we couldn't really back down: something had to be done about the noise. It didn't take much looking around to find that this is a common complaint with PowerEdge servers, and the usual fix is to replace the stock fans with higher-spec models that can push more air at lower RPM. This works because the noise a fan generates is driven primarily by how fast it spins rather than by its size, while a larger, better-designed fan moves significantly more air at any given RPM. The interesting wrinkle turned out to be that Dell's fan connectors are proprietary, so you actually need to cut and resolder the wiring on the fans to replace them. It gets even more interesting when you realise that the colouring of the wires is also different.
Luckily we were able to track down the factory spec sheets for the fans and work out which wires went where. What a difference that made: suddenly we could hear again! We actually kept at it from there, chasing further noise reduction by hacking the firmware to lower the minimum speed the server allows the fans to spin at. The best guide we were able to find on that process was here.
Now that the wrathful gaze of those just trying to get some work done in peace was no longer burning holes in our backs, we could get on with putting some actual services on the server. You generally want a development server to mirror production as closely as possible, so that code you write there behaves the same way once you push it up to production, but there are a few things that are worth doing differently.

The first is that you can have a little more fun with your development machines and give them some personality. One of the best ways to do this is to play around with the messages users receive on login. Some of the best tools for this are Figlet (renders text in large ASCII letters), Cowsay (draws a little picture of a cow, or various other creatures, saying things to you) and Fortune (prints a random quote, fortune, etc.). For any Linux users or aspiring sysadmins out there, this is all quite easy to set up and there are plenty of tutorials available.

Besides making the machines a bit friendlier, the other thing we do differently in development is prevent the system from sending email to external users. We've configured Postfix on that machine to redirect any email it would send so that everything goes to our development mailbox. This means there is no way a client can accidentally get spammed with testing mail from the server, and it has saved us a lot of embarrassing moments in the past. It's a relatively simple setup with a large payoff, so it's easy to recommend. The rest of the setup is fairly standard, and this isn't really the forum to be getting into those details, so we'll spare you.
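As for the catch-all mail redirect, one common way to do it in Postfix is a regexp-based virtual alias table that rewrites every recipient to a single address. This is a sketch, not our exact configuration; the mailbox address is a placeholder:

```
# /etc/postfix/main.cf (fragment)
virtual_alias_maps = regexp:/etc/postfix/virtual_redirect

# /etc/postfix/virtual_redirect
# Match every recipient and rewrite it to the dev mailbox.
/.*/    dev-mailbox@example.com
```

Regexp tables don't need postmap; a `postfix reload` picks up the change. From then on, anything the server tries to send lands in the one mailbox, whoever it was addressed to.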
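To give a flavour of the login-banner idea, here is a minimal sketch of the kind of script you might drop into /etc/profile.d/. The tools are the real figlet, cowsay and fortune packages; the "devbox" name and the fallback messages are just placeholders, and the script degrades gracefully if the tools aren't installed:

```shell
#!/bin/sh
# Hypothetical login banner, e.g. saved as /etc/profile.d/banner.sh.
# Falls back to plain echo when the fun tools aren't available.
print_banner() {
  if command -v figlet >/dev/null 2>&1; then
    figlet "devbox"          # big ASCII-art hostname
  else
    echo "devbox"
  fi
  if command -v fortune >/dev/null 2>&1 && command -v cowsay >/dev/null 2>&1; then
    fortune | cowsay         # a random quote, delivered by a cow
  else
    echo "Have a nice day."
  fi
}
print_banner
```

Because /etc/profile.d/ scripts are sourced by login shells, every interactive SSH session gets the banner without any per-user setup.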
The performance benefits of switching back to physical hardware for internal development have been staggering; some of our bigger functions run over ten times faster on that machine. Of course, the complete lack of scalability is exactly why we moved our production environments to the Hybrid Cloud, and they will be staying there, but the raw performance and relative simplicity are refreshing to say the least. So while we certainly won't be bringing our production gear back onto physical servers anytime in the foreseeable future, I think our new friend will be hanging around for some time.