The Making of a Meta Server or “Why I Bought a Mac Mini as a NMS”

There’s a small office that I’ve done consistent work with for the last five years. I’m their contracted IT director on a part-time schedule. Anything that could even loosely be called “technology” is mine to understand, explain, and make work. More than just a technology worker, I have to understand the business’s goals, mission, and workflows first and foremost in order to profitably apply technology. I’ve submitted many board reports over the years and have counseled the director and president on more technology-related topics than I can ever hope to remember.

However, I’ve been a bad IT person. I haven’t proactively monitored their IT assets. I can make any number of excuses for myself, but none of them really mollifies me (nor would they satisfy anyone else with even a hint of a desire to do a job right). Interestingly, some of my very first attempts at blogging (back in 2007 or 2008) grew out of my attempts at building a monitoring system for this organization. That blog, a Drupal CMS shared between me and a friend, is long gone. However, the nagging need for a thorough “Meta Server” has haunted me ever since.

The organization is suddenly expanding into a field that is new to it. They have high hopes and are tackling the new project head on. Their exact growth potential is unknown at the moment, but I want to put the long-needed information reporting infrastructure in place now, before much more moves forward. A few new websites will be made for the organization’s endeavors, and they will be a large part of the success (or failure) of this new phase of growth. If I don’t know about the health of their website, server, and PC infrastructure before I get panic-stricken phone calls, then I can consider myself a complete failure as a SysAdmin.

“What is a Meta Server?!”

A “Meta Server” is what I like to call any node whose sole purpose is to collect and display information that is largely of no interest to a standard user. You’re unlikely to see the term anywhere else because I just made it up (or if it is in use elsewhere, please, whoever came up with it, don’t sue me).

It’s the kind of place where notes are stored, wikis are stood up, NMSes sweep, trends are graphed, and bitknobs are virtually twiddled. In larger places you might have a “Meta Rack,” but I’ve never worked in an environment so large as to need stacks of meta servers. Oh, if only…

My intention for this server is not to be just an NMS, in spite of the title of this blog post. It’s much more than that. Let’s take a look at what my plans are for it.

The Software

My plans for the meta server are many. I want monitoring, alerting, trending, help desk, asset/inventory management, log collection, imaging, perhaps a wiki… lots of stuff. Since it’s a small office with little need for a multitude of servers, I can’t separate these roles out onto different pieces of hardware, nor do I really need to unless there’s some glaring incompatibility between packages. Even if there were such an incompatibility, I’d prefer to just use Linux-VServer or something similar to stand up a virtual instance.

To expand on the list of topics above, I’m very interested in the health of the network itself as well as the nodes on it. I want to scan the network for devices and get alerts when new things show up. I want to poll every device (PC, laptop, printer, server, WAP, modem, switch… you name it) for vital statistics. That can be through SNMP, NetFlow, sFlow, or an agent installed on the operating system (in the case of a PC with a full OS). I want speed and latency statistics for our ISP connection too. This bundle of requirements will probably necessitate three, maybe four, separate tools.
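The “alert when new things show up” requirement is conceptually simple: keep an inventory of known hosts and diff each discovery sweep against it. Here’s a minimal Python sketch of that logic; the addresses are hypothetical placeholders, and a real sweep would come from something like an nmap scan or an SNMP walk rather than a hard-coded list:

```python
def find_new_devices(discovered, inventory):
    """Return hosts seen in a discovery sweep that aren't in the known inventory."""
    return sorted(set(discovered) - set(inventory))

# Hypothetical example data; a real sweep would come from nmap or SNMP.
inventory = {"192.168.1.1", "192.168.1.10", "192.168.1.20"}
sweep = ["192.168.1.1", "192.168.1.10", "192.168.1.20", "192.168.1.77"]

new_hosts = find_new_devices(sweep, inventory)
if new_hosts:
    print("ALERT: new devices on network:", ", ".join(new_hosts))
```

Any full-blown NMS does far more than this, of course, but the inventory-diff at the core is what turns a scanner into an alerting tool.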

I haven’t settled 100% on the applications that I’ll be using, but I have a pretty good idea that either OpenNMS or Pandora FMS will be the main monitoring and alerting system. For prettier graphs to look at, Observium is high on my list. I might use ntop for netflow analysis, rTPL for scheduled throughput tests and smokeping for latency monitoring. Munin may play a part as well; I haven’t decided yet.
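Part of what makes smokeping valuable is that it sends a burst of probes each interval and plots the spread, not just a single ping. The arithmetic underneath is simple; here’s a hedged Python sketch of that burst-summarizing idea (the RTT samples are invented, and real smokeping uses its own probe modules rather than anything like this):

```python
def probe_stats(rtts_ms):
    """Summarize one burst of probes smokeping-style: packet loss plus the
    median of the successful round-trip times. None entries are timeouts."""
    ok = sorted(r for r in rtts_ms if r is not None)
    loss = 1 - len(ok) / len(rtts_ms)
    mid = len(ok) // 2
    median = ok[mid] if len(ok) % 2 else (ok[mid - 1] + ok[mid]) / 2
    return {"loss": loss, "median_ms": median}

# Hypothetical burst of 10 probes, one of which timed out:
burst = [22.1, 21.8, 23.0, None, 22.4, 21.9, 22.2, 24.5, 22.0, 22.3]
print(probe_stats(burst))  # roughly 10% loss, median around 22 ms
```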

For log collection, I’m interested in Splunk’s community edition, but graylog2 is appealing as well. I’d probably use the Snare Agent for Windows to collect logs from those hosts and send it all to graylog2, if that’s the direction I go. However, AlienVault’s OSSIM is also in the running.
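Whichever collector wins, the first bit of plumbing is the same: syslog messages arrive with a priority value that encodes both facility and severity as PRI = facility × 8 + severity. A short Python sketch of decoding that (the severity keywords follow the syslog standard; any real collector does this for you):

```python
# Syslog severity keywords, indexed by severity code 0-7.
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def decode_pri(pri):
    """Split a syslog priority value into (facility code, severity name)."""
    return pri // 8, SEVERITIES[pri % 8]

# PRI 34 = facility 4 (security/auth), severity 2 (critical)
print(decode_pri(34))  # -> (4, 'crit')
```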

For a help desk, I’m almost 100% sold on RT. I’m currently using Spiceworks’s help desk on a Windows server, but that’s a bit heavy for my needs. I don’t use most of its other management tools. Its asset management and network monitoring are… okay. It’s a bit rigid for my tastes, however.

I’m also interested in having a simple PXE boot imaging tool on the network. I have long been a fan of the FOG Project. My goal isn’t to start creating an extensive image library. Instead, I just want to take the occasional quick image of a PC before a major change, and to be able to boot a PC over the network into an anti-virus image to perform an offline virus scan. I’ll need some decent storage space to keep a few images around, depending on which user’s PC may need to be quickly backed up. A few hundred GBs would be nice.

I’m considering the use of Monit for some automated response, but since I don’t have many *nix devices to contend with, that might be wasted effort. Then again, automating things is never wasted effort!
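To illustrate the kind of automated response Monit provides, here’s a sketch of a monitrc stanza that restarts a web server when its HTTP port stops answering. The service name, pidfile path, and init script here are hypothetical placeholders, not something from my actual setup:

```
check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed port 80 protocol http then restart
    if 5 restarts within 5 cycles then timeout
```

The last line is the safety valve: if the service flaps, Monit gives up and alerts rather than restart-looping forever.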

There might be a documentation wiki thrown in there for good measure. Currently I use the hosted wiki service of Zoho, but I am considering moving it in-house.

It has occurred to me that such a small device may be a prime target for theft, so I’m considering volume encryption to protect the data. If someone wants the hardware badly enough, they can have it; I just don’t want them to have any valuable data to play with. I’m sure it’s very unlikely that a smash-and-grab thief would have the interest or skills to do much with the data, but… still.

The Hardware

First, let’s clear something up. As often as I refer to this needed device as a “server” it is not, in fact, a “server” by any enterprise understanding of that word. It is just a device that serves, but is not intended to be made up of components that are traditionally thought of as “server grade.”

Since it’s a small office, it doesn’t have a proper server rack. It has more of a closet than anything. The closet is better than the “Just set that server down next to my desk; it’ll be fine, my door locks!” situation I found myself in years ago. The device needs to be small.

I’ve been collecting a long list of small form factor PCs for quite some time now. I like small things, especially when the alternative is cramming a six-year-old workstation next to the building’s demarc point to act as a caching proxy (I bet you’ve done that). I’ve got to narrow the market of small PCs down to a manageable pool using some base requirements.

My goals for the hardware are the following:

  1. I don’t want a development board. Things like the Raspberry Pi or the Hawkboard won’t cut it. I want a production piece of equipment that is manufactured in decent quantities and is intended for a broader consumer base than hardware hackers and developers. The BeagleBoard is on the edge of that grouping because of its wide acceptance, but it’s still iffy. I’m sure some will say “Oh, but the Hawkboard…” or “Hey, the BeagleBoard does…” and that’s fine. It’s just that in my research, they don’t seem to be dead ringers for well-supported, hard-working devices.
  2. At least 250GB of internal storage. I don’t want to deal with CF cards like many mini-ITX boxes use. They don’t have enough storage, and regardless of how advanced the wear-leveling algorithms are, I don’t want to worry about block wear on a device that already has precious little space. NAND flash is also prone to soft read errors that make you reliant on the ECC of the card itself. That’s too many variables for my comfort at the moment. Also, I want the storage to be internal as a matter of preference. I’m trying to get away from a “Just attach another USB hard drive!” mentality. That, and I just don’t like USB buses as a rule.
  3. At least 2GB of RAM. I’m going to have a lot of daemons running and I’d prefer to take a RAM ceiling out of the equation.
  4. Price: I want to keep it around the $500 mark. If it can happen for less, that’s great.

Over the years of keeping this project on the back burner, and while considering building other similar meta servers, I’ve looked at NetGate cases, the Fit-PC, BeagleBoard-based systems, SlimPRO, the Pearl D series, and various models of thin clients and plug PCs. I do have a soft spot for plug PCs, but none of them meet the above criteria.

The Final Choice

After searching far and wide, the one PC that kept rising to the top of the pack was the Fit-PC. I’ve had my eye on it for years and watched as its design has iterated past version 1, through version 2, and now on to version 3. It’s a handy little thing (form-factor pun intended) that has some decent resources. The latest version can include a 250GB platter hard drive, 2GB of RAM, and a 1GHz AMD G-T40N APU. The price is a tad steep at $480 plus VAT and shipping.

I mulled the option. After shipping and tax, it would be over $500. Could I really justify that much money for what I was getting? Certainly the value of what I was going to do with it was worth it. I just… I wasn’t sure.

Then it hit me: a Mac Mini. The cheapest brand-new Mac Mini one can (legitimately) find is about $569 USD. And what does one of those shiny suckers have in it? 4GB of RAM, a 500GB hard drive, and an Intel Core i5 processor. Furthermore, I can get it on Amazon.com straight from Apple’s store with no shipping fees and no sales tax. It has twice the RAM (2GB would probably suffice, but more is always better!), twice the hard drive space (very useful considering the imaging server portion of the project), and an Intel Core i5 (which could come in handy for encryption and report generation). And love or hate Apple, their hardware is rock solid (iPhone reception issues notwithstanding).

For just a handful of dollars more, I get double the resources (more than double if you count the CPU). I had been approved for a $500 purchase, but one five-minute phone call later, a $569 Mac Mini had been approved.

As for the OS, I am highly unlikely to be using OS X (highly). I’ll almost certainly put CentOS 6 on it and be on my merry way.

Retrospective

In talking with colleagues, I’ve taken a bit of flak for making such an expensive NMS. Certainly, I could perhaps build a similar box slightly cheaper, but without the i5; I’d likely need to use an Atom processor to keep the price down. However, the time it takes me to build the thing still costs my client money. Perhaps other off-the-shelf solutions exist with similar specs, but I wasn’t able to find them. Once again, research time costs.

In the end, I’m sitting here, a brand new Mac Mini still in its box next to me. I don’t have any regrets… yet. I’m eager to get this project going and hope to blog more about its progress, starting with the installation of CentOS on Apple hardware.

So what do you think? Did I blow it? Were there other compelling options that I missed? Would you be happy with a Mac Mini “Meta Server?” Let me know in the comments below.

8 Comments

  1. Edmund White

    April 25, 2012 at 7:58 am

    You missed the HP ProLiant Microserver… Similarly-priced, works with Linux and has an out-of-band management option.


    • Wesley David

      April 25, 2012 at 8:54 am

      I did, I totally missed it. I’d never heard of it until a week or two ago. I think one day after ordering the Mini. HP did a horrible job of marketing.

      After researching it, I think it’s the only out-of-the-box solution that would have been better. Ah well, I could have done worse.


  2. tombull89

    April 25, 2012 at 8:05 am

    Any thoughts on why the HP MicroServer wasn’t an option? The latest N40L version has a 250GB HDD and a 1.5GHz CPU, with 2GB of ECC RAM as default, and, in the UK at least, is running with cashback offers that make it a lot more attractive. The money you save could be thrown at another stick of RAM or another, larger hard disk. There’s even an iLO card available if you wanted that in it.


    • Wesley David

      April 25, 2012 at 8:54 am

      It was, I just totally missed it. Next time. Next time…


  3. John

    April 25, 2012 at 10:50 am

    One alternative you might consider is running CentOS on top of VMWare Fusion or VirtualBox on the mini.

    That gives you the benefit of running Linux with the manageability of being able to take snapshots, back up the complete server image, run multiple OS’s at a time (or experiment with new services on a separate VM while the production VM stays up and running) remote into the Mac desktop, etc. Lots of flexibility that way.


    • Wesley David

      April 25, 2012 at 11:04 am

      I considered it for a bit, but that seems so… heavy. The upswing would be that I get snapshots, like you said. It’s still on the table I suppose.


  4. Patrick

    April 28, 2012 at 9:05 pm

    Rather than OpenNMS we have had quite a bit more luck with Zabbix. We were able to customize it a lot more and run all of our monitoring from there.


