Year: 2010

“Standards!!” I shrieked, hands partially extended in front of me, palms up and fingers bent and twisted like a long dead Joshua tree.

I was working the tech bench at an SMB near the beginning of my career. An old desktop PC that was on a losing streak with the second law of thermodynamics sat before me. I was trying to log in as the local administrator to perform some banal task that was the technological equivalent of downing a box of Nytol.

“Let’s try this one…” I mumbled and clattered for a brief moment on the keyboard. “GAAAAHHH!” Once again I burned two eyeball sized holes in the ceiling as I turned my contorted face upwards in an expression that would have made Colin Clive petition for my Oscar nomination.

The IT department had attempted to put the same local admin password on all of our PCs. However, maintaining nearly 30 separate images for the various PC models that we had accrued over the years ensured that a few things would be nonstandard here and there. Also, there were long forgotten back-room and warehouse PCs that were never blessed with an image-based deployment and were instead set up as one-offs with who knows what settings, accounts and passwords.

“I seem to remember this one being used at one point…” Clickety-clack. “WHAT HIDEOUS MONSTER OF NON-CONFORMITY DEPLOYED THIS STUPID… ?!?” I growled through clenched teeth. I couldn’t make too much noise lest people passing the back room in which we techs performed our rituals get worried and call security. The IT department had a close relationship with security, but even friendly officers couldn’t turn a blind eye to me going all Texas Chainsaw Massacre on company property. I took a deep breath and regained my composure.

There were several local administrator passwords that had been used in times past, and I had tried them all. I even tried the domain admin password, hoping that someone had made a mistake and put the wrong password in. I also tried some generic passwords like ‘letmein’ and ‘password’ (hey, it had worked once or twice before). This time nothing was working.

I sat, head in hands, contemplating my next move. I could reimage the aging machine… that is, if it even had an image that was made sometime after Lycos was the hip new way to search this World Wide Web thing. The computer had programs and data on it that were of indeterminate function and worth. I’d have to copy the data and hope I could reinstall the apps the proper way. This was turning into a nightmare.

There I sat, head in hands, a sad figure on the brink of being vanquished by my ancient foe. I tried to quiet the frustration by emptying my mind of thoughts. I took a deep breath and composed myself in a peaceful blank. Nothingness.

Blank.

“Wait…” I looked up and stared at the Windows login box. “Administrator” as the username, the cursor strobed in the password box. It rhythmically taunted me in the blank input field. Blank.

Face frozen in an expression of distrust, but edged with hope… I tapped the enter key. I was greeted with the hopeful, breathy Windows login chime.

(Thanks to Richard Holloway’s comment on my post “Before You Ask, Are You Ready for the Answer?” which reminded me of this anecdote that happened several years back. You might be interested in Ed Bott’s article “It’s OK to Use a Blank Password”.)

“So what’s your password?” I casually asked the CEO of a small business I was doing work for.

I don’t usually make a habit of asking people for their passwords, least of all members of the highest executive class. In fact, I hate it. I don’t want to know anyone’s password. I don’t want to ever be looped into the circle of suspicion should anything ever happen with a resource that the password gives access to. I even cringed the one time my mother had me type in the password for her I Can Has Cheezburger account while I was trying to figure out some odd web browser problems of hers.

However, since this small business is owned by family friends whom I trust, I decided to cave in for a little while. For a little over a year I had been attempting to impress some order on the technological maelstrom that existed, but I knew that I still had to pick my battles carefully.

The office network didn’t even have a dozen PCs on it. They were all Windows machines and I was migrating them to a new Small Business Server 2008 Active Directory domain. This one machine that I was giving my attention to was a laptop that I hadn’t joined to the domain yet. It was its own little island of settings and preferences given to it by its user. I felt like I was sitting in front of a feral cat that could at any moment turn into a hissing chipper-shredder if I touched it wrong. I had some troubleshooting to do on it and I needed to know the user’s password for the task at hand.

“Ahhh…” I heard on the other end in response to my question. Then there was a brief pause. Silence. The hesitation surprised me. Did I detect some discomfort in his voice? I think I did. Surely after years of passwords being freely divulged it wouldn’t suddenly become taboo now.

Before I could think through the situation any more, the individual broke the silence. “Enn.”

I knit my eyebrows together as I hovered my fingers over the keyboard. “In? In what? ‘In the heat of the night?’, ‘In the nick of time?’, ‘In the jungle, the mighty jungle?’ ”

“No… just Enn. The letter ‘N’.”

My index finger twitched over the keyboard. The cursor blinked silently in the password field. I tapped ‘N’ on the keyboard.

“Capital N!” the person chimed in.

I pressed enter. The Windows login chime cheerily greeted me.

Question:

Is there such a thing as a free Experts Exchange account? One that doesn’t require a credit card? One that does not expire after a trial period?

Short Answer:

Yes! Yes you can get a free Experts Exchange account that does not require credit card information! Just click here, read all of the text to make sure you understand what you have to do (It’s simple! Really!) and then sign up by clicking the “I want to be an expert!” button at the bottom of the page!

Long Answer:

Most every IT professional is familiar with Experts Exchange. Started in 1996, it became one of the premier gathering places for IT pros to discuss their technical challenges and dispense their wisdom to others. It is also made from raw evil. Well, maybe that’s a bit of an overstatement.

The biggest problem I and many others have with Experts Exchange is that the site shows certain content to search engines (based on the user agent given) and other content to regular browsers. If you browse the website normally, all answers to questions are hidden. However, if you land on the Experts Exchange website through a search engine link, you can scroll to the bottom of the page and see the discussion and accepted solution(s).

The deception and filthy lies are what bothers me the most. Look at this graphic that is taken from Experts Exchange after clicking through a search engine link:

The verbiage makes it seem that the only way to view the solution is to sign up for the service. Of course, you can have a 30-day free trial, but that requires credit card information. Look at another graphic from the page:

Again, the verbiage suggests that a 30-day trial is the only way to see the comments. What I find most disturbing is that the revealed comments in the thread, including any accepted solutions, are just a few pixels away. If you clicked through to Experts Exchange from a search engine URL, you can merely scroll down past the filthy lies and you will see all of the comments and accepted solutions at the bottom:

And Experts Exchange knows it too! What does a person see when they scroll down to see the full thread? This little floating gem graces the right side of the browser as soon as the actual comments of the thread appear:

It is true that these days (as of July 2010) Experts Exchange is a bit more forthright about advertising their free account. As you can see in the above graphic, the option to “Answer 3 questions/month” is displayed. Also, on the side of the site you see the three possible ways to sign up for an account where the bottommost option is “Answer for Membership”:

Not too long ago, you used to have to slog through their FAQ and find an obscure link to get to the free registration page.

Another way of seeing the hidden answers on Experts Exchange is to change the user agent that your browser reports to web sites. You can do that in Firefox using the User Agent Switcher extension.
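If you’d rather poke at the cloaking from a script than install an extension, a few lines of Python demonstrate the idea. This is only a sketch: the thread URL is a made-up placeholder, the user-agent strings are just examples, and it assumes the site still varies its response on the User-Agent header the way it does as of this writing.

```python
# Sketch: compare what the site serves a browser vs. a crawler.
# The URL is a hypothetical placeholder, not a real thread.
import urllib.request

URL = "https://www.experts-exchange.com/questions/12345/example.html"

def fetch_length(user_agent):
    """Fetch URL with the given User-Agent and return the body size."""
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return len(resp.read())

browser = fetch_length("Mozilla/5.0 (Windows NT 6.1; rv:1.9.2) Firefox/3.6")
crawler = fetch_length("Googlebot/2.1 (+http://www.google.com/bot.html)")

# If crawlers get the full thread, the crawler response should be
# noticeably larger than the answer-hidden browser response.
print("browser: %d bytes, crawler: %d bytes" % (browser, crawler))
```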

The free account is not without a few little hoops that you must jump through. In order to have access to the answers on the site, you must earn at least 3,000 points per month. That’s not an unreasonable number of points to earn, and it can be done by contributing three good answers. You can also earn a premium membership by racking up 10,000 points in a month (supposedly about 7 questions answered).

If you’re willing to put a few minutes a month into maintaining your membership, then it could be worth it. Indeed, there is good content on Experts Exchange. However, if you feel compelled to contribute to Experts Exchange beyond the mandatory 3,000 points (or 10,000 if you want a premium membership), consider giving your time to sites that are not evil such as ServerFault, StackOverflow and SuperUser.

Fie on thee, deceitful Experts Exchange. Long live the StackExchange Trilogy!

Antivirus is the bane of a SysAdmin’s existence. Well, one of many banes; notable others being the engineering department and text messages from Nagios at 2:37 AM on a Saturday.

Over at the SysAdmin Network I was intrigued by one member’s comment on a thread concerning what the best enterprise antivirus software was. That member was Isaac Bush, and his comment was that he had forgone the search for the best antivirus software because he had successfully dropped antivirus from his users’ PCs three years ago.

I’m here today with Isaac to interview him concerning his anti-antivirus project.

Wesley “Nonapeptide”: Starting off, who are you and what do you do?

Isaac Bush: My name is Isaac Bush and I’m the IT manager for the Georgia O’Keeffe Museum in Santa Fe. The Museum’s IT department is quite small, so in addition to the manager hat I also wear the lead sysadmin hat for our servers, network, storage, etc… so a typical small-shop admin really.

Wesley “Nonapeptide”: Can you explain your workplace’s technology environment a little bit?

Isaac Bush: We have a little over 100 users and these are primarily knowledge workers. Like most companies, our desktops are Windows based due to application requirements. We use Active Directory for our Windows machines and leverage AD functionality (GPO, managed software installs, etc…) to manage them. We have around a dozen SQL-backed line-of-business apps, and an Exchange 2003/Office 2007 deployment for groupware. Typical stuff really.

We have approximately 20 PC desktops/laptops and a handful of Macs. The Museum is a little unusual in that we use Microsoft Terminal Services heavily to provide desktop sessions. Unlike many TS deployments, we’re not using TS exclusively for task workers or kiosks. Instead our standard desktop for all users, including knowledge workers, is a TS session, and over 70% of our users have thin clients. Although we are using TS for these desktops currently, shortly we’re going to be transitioning the user base to VMware View.

Server-side we’re a mix of Windows, Solaris, and Linux, heavier on Solaris and Windows. We have 43 servers by my last count; it’s a rather high server-to-user ratio given that we are not an ASP. Most of these servers are dedicated to a single application or provide redundancy for important services, multiple domain controllers or multiple terminal services servers as examples. The majority of our servers are VMs in a VMware vSphere cluster.

WN: What was your experience with antivirus while you were using it?

IB: When I came on board the company was using Symantec products for AV and anti-spam. Like most companies, all of our Windows machines were running an AV client. The reality was that managing AV, in and of itself, did not take exorbitant amounts of time. All client deployment and definition updates were done automatically, so we only needed to deal with a single management application.

The real time sink for IT staff was dealing with cleanup when AV didn’t catch something. Wipe and reload always worked, but it cost time for our users and for IT.  I viewed this time cost as unfortunate, but not unusual since that was what I was used to during my whole career in IT. I had always deployed AV and I always had to reload machines due to malware from time to time.

In addition to malware cleanup we also needed to manage the interaction of the AV clients with the rest of the system. For instance, it was common to have to configure the AV client to exclude certain directories, or sometimes the AV client had to be disabled while we did updates to the OS or to apps. Other times we had problems with AV killing performance on machines. We had more than one user complain about their machine running very slowly, and the problem was tracked down to AV. In these cases, although the problem clearly was the performance impact of the AV agent, we viewed AV as mandatory, so it was just too bad for the user. We’ll buy you a faster machine next year. And of course, like any other piece of software, we needed to keep it patched. There have been many ugly AV security flaws, and because AV inherently runs highly privileged these were patches we really needed to jump on. Hello irony! Real irony, not Alanis Morissette irony.

WN: What inspired you to ditch your antivirus? Was there a major failure of the antivirus system? Was there an “Ah ha!” moment when you realized it would work or was it just common sense that you knew all along?

IB: At every company I’ve worked with it has been standard practice to give local admin rights to the end users. This avoided a lot of problems with applications that assumed admin rights and, to be honest, we just didn’t care what the end users did with their desktops; we cared about the server side but not the desktops.

As I mentioned earlier, we had ongoing problems with our desktops picking up malware despite updated AV definitions. In particular there was a set of common-use PCs that were constantly picking up infections, and every few weeks we were doing something to those machines. These machines were used by our staff, as opposed to random guests, so these desktops were configured the same as every other desktop, which included giving local admin rights. Eventually we started locking these machines down, and part of that included dropping admin rights. Once we started using non-admin accounts, infections dropped off dramatically.

Honestly this is something I should have been doing to begin with. I’m from a Unix background originally, and of course least privilege and not running as root are core concepts there. Moreover, I was always careful about privileges for application service accounts on our Windows servers. For some reason I just didn’t apply that to Windows desktops, even though NT-based operating systems have a significantly more advanced and fine-grained ACL system than most flavors of Unix. Just a blind spot from tradition I guess.

AV had not been able to keep those machines clean; it was dropping admin rights that did that. This experience really started to put AV in a bad light; it seemed very superfluous. After all, on our non-Windows machines we don’t run AV, we use proper security procedures. So why should Windows be any different? I did still see the value of server side AV for mail filtering. We’re filtering for spam of course, and we could deal with many viruses during that process as well. Even if malware from email would not be able to infect a machine, it would still fill the end user’s mailbox. Plus we could kill phishing emails which was very important as no desktop security model was going to stop that. For file servers we also saw the value as we have documents going back many years and from many sources and it seemed prudent to scan those periodically. It was the desktops, as opposed to the servers, where we were primarily interested in dropping AV.

WN: Did you have to present the idea to upper management? If so, did you get any pushback and have to convince them it was viable?

IB: I’m responsible for IT planning and implementation, so I didn’t need to formally present this to upper management for approval. That said, I certainly keep my management in the IT loop, and while I didn’t have any “hard” pushback there were concerns expressed. However, presenting the results I had seen in the pilot group, plus the expected cost savings, went far in allaying any concerns amongst senior management.

The real pushback came from some of our IT staff who were unhappy with the idea of removing AV from the desktops and laptops. Their view was that while it didn’t seem to help all that much, it didn’t hurt either. We should use defense in depth, multiple layers of security, etc, etc… I agree with the importance of multiple layers of security, but only if a layer actually adds to the overall security posture. Based on my experiences, the client-side AV layer seems to be ineffective at best and is rendered pointless by a restricted desktop security model. Moreover, AV comes with a cost, a cost that can be substantial in both time and money. Therefore, we moved ahead with dropping client-side AV.

WN: How did you prepare to do this changeover? What were the challenges that you ran into?

IB: The main issue wasn’t so much that we would be dropping client-side AV, but rather that we would be removing admin rights. The principal problem this caused was that a number of our applications would not work correctly without administrative rights. In every case the problem was tracked down to applications wanting to write to areas of the registry or file system that are read-only to non-admins. So, we needed to loosen permissions enough for these apps to run, but not loosen them enough to render the security model pointless. Thanks to the various Sysinternals tools we were able to identify all the places in the file system and registry where these applications needed additional access. Once we had that information we set up a GPO to alter the ACLs on the particular files and registry entries in question. Later we filtered this down with groups so that these changes would only be made to certain computers and users.
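Isaac doesn’t name the specific Sysinternals tools, but Process Monitor is the usual suspect for this job, and its CSV export makes the hunt scriptable. Here’s a rough sketch of the idea; the CSV filename is hypothetical and the column names are Procmon’s CSV defaults:

```python
# Sketch: mine a Process Monitor CSV export for the paths an app
# touched and was denied. "Operation", "Path" and "Result" are
# Procmon's default CSV column names; fragile_app.csv is hypothetical.
import csv

def denied_paths(procmon_csv):
    """Return (operation, path) pairs that ended in ACCESS DENIED."""
    hits = set()
    with open(procmon_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Result") == "ACCESS DENIED":
                hits.add((row["Operation"], row["Path"]))
    return hits

for op, path in sorted(denied_paths("fragile_app.csv")):
    print("%-20s %s" % (op, path))
```

Each path that turns up is a candidate for a loosened ACL in the GPO rather than a reason to hand the user admin rights back.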


WN: How long did it take to fully implement this?

IB: I’d estimate roughly one to two months after I decided to proceed, and the majority of that time was spent testing things. The actual implementation only took an hour, if that. We created a few GPOs to adjust the file system and registry permissions and then ran a script after hours to remove domain users from the local admin group. One reboot later and it was done. This entire project could have been completed much faster if we hadn’t had other projects going on concurrently.
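Isaac doesn’t say what the after-hours script looked like, but the heart of it is one command per machine. Here’s one way it could be sketched, using Sysinternals PsExec and net localgroup; the machine list and the CORP domain group are invented for illustration:

```python
# Sketch: strip "Domain Users" out of each PC's local Administrators
# group. Machine names and the CORP domain are hypothetical examples.
import subprocess

MACHINES = ["ws01", "ws02", "ws03"]   # would come from AD in real life
GROUP_MEMBER = r"CORP\Domain Users"

for machine in MACHINES:
    cmd = ["psexec", r"\\" + machine, "net", "localgroup",
           "Administrators", GROUP_MEMBER, "/delete"]
    rc = subprocess.call(cmd)
    print("%s: %s" % (machine, "done" if rc == 0 else "failed (rc=%d)" % rc))
```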


WN: What was the user response both while it was happening and after it was done? Were they annoyed that they’d lose admin privileges?

IB: For the most part the users were not really aware of the change in their privilege level, as their normal work did not require admin rights. IT already handled maintenance of these machines, so it was not as if they were used to running their own updates or were otherwise responsible for administering their own machines. We didn’t make an announcement to the effect that we were removing admin rights because, to be honest, it would have sounded negative. Instead, we stated that IT would perform any installs of hardware or software in the future. This was the policy anyway, so really it was more like a reminder. Later we did have some complaints from people who were used to installing whatever they felt like. In many of these cases we didn’t want the software installed at all, iTunes as an example, while in other cases we installed the software and made sure that it was part of our standard install in the future.

The most challenging group of users was actually IT. The IT department had a very bad habit of having their accounts be domain administrators. To address that serious problem we created new domain administrator accounts and dropped admin rights from our existing accounts. Unlike the rest of the users, we needed admin rights in our day-to-day jobs, so this was something of a hassle as it meant using runas all the time, which in turn made it tempting to just use the new admin accounts for everything. What IT staff ended up doing was logging into a management server over RDP with their admin account and leaving the session open, and that made it a lot easier to use the two accounts simultaneously.


WN: How do you handle software deployment and updates? I’m sure users want certain software titles now and then. Also, Adobe Reader gets patched roughly every 4 and a half hours.

IB: We had viewed AV as covering us until patches could be applied. Now that we were dropping AV, we would have to be very aggressive about getting patches out. We have clients configured to install patches from WSUS quickly, so getting Microsoft patches out was very straightforward. A bigger issue was dealing with possible application problems caused by patch/app incompatibilities. This wasn’t really an issue with Microsoft products, but a few of our 3rd-party apps would sometimes break after patching; one vendor in particular has QA “issues” and their apps are “fragile” to put it mildly.

In order to address this we developed a structured testing methodology. Ideally we would have run an automated test suite of some sort, but that’s really beyond us as a smaller outfit. Instead we have testing VMs with different application loads that mirror the different configs we have deployed. We’ve developed a checklist of tests to run to verify that our apps are functioning correctly after patches. Due to Microsoft’s Patch Tuesday policy we’ve been able to streamline this according to a regular monthly schedule. Typically we’ll integrate 3rd-party vendor patches at that same time.

Patch priority really depends on the severity of the issue. Many patches address issues that are not widely seen out in the wild or relate to software that’s not publicly accessible, and in those cases we schedule them into the monthly patch install. Others are far more critical and we’ll push them through as soon as possible, ideally the same day the patch is released or the day after. It all depends.


WN: How do you handle threats born from removable devices?

IB: Malware spread from removable devices is, essentially, little different from malware spread through other vectors. Assuming up-to-date patches, the worst case is that malware will be able to infect the user’s profile. In every case we have seen, malware was limited to infecting the profile because it lacked admin rights, again assuming everything was patched. However, the truth of the matter is that once a machine has been infected, even if the infection is limited to the user’s profile, it cannot be fully trusted again. Therefore, we always reload the OS, and in order to follow that policy we have had to streamline our installation processes to minimize downtime.

The main issue that is specific to removable devices is tracking down the removable device in question, and that’s an issue we’d be dealing with regardless of whether we had AV on the desktops or not. Incidentally, the fact that most of our people are using TS sessions has minimized these issues; you’re not going to be plugging music players into your thin client.

WN: Are there exception users in the environment?

IB: Not users, but rather certain machines. A handful of our desktops are used for working with unusual hardware that requires admin rights for the interfacing software. In these cases we use “runas” to run those programs under a local admin account. It’s not the best solution and it wouldn’t be viable for every case, but it’s been sufficient for these few machines.


WN: Do you use client firewalls?

IB: Yes for our laptops, but no for the desktops. We’ve had a number of issues with remote management when the firewall is enabled. I see the real benefits of client-side firewalls, and I’ve never been too thrilled that we don’t use them, but it just seemed to be the pragmatic solution. We have perimeter firewalls of course, and those have been viewed as our primary network defense. I’m well aware of the flaws in that model, that it makes our network hard on the outside but soft on the inside. Like candy! Hacker candy. But that is more of a conceptual problem, whereas issues relating to client-side firewalls blocking required ports are real, immediate problems. Of course you could simply punch holes in the client firewall, but if you open up all the important ports then how much value does the firewall even have?

Recently however I’ve been talking with a colleague at another company where they keep the client-side firewall on and open up whatever ports they need, but only for connections from an internal management subnet. I like that idea a lot, so we’ll be rolling that out later this year. It gives us the advantages of client-side firewalls but doesn’t turn the firewall into network Swiss cheese. I’m sure everyone else is already doing something like that, but I’m not always the quickest on the uptake.
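The idea reduces to scoping each allow rule to the management subnet instead of to the world, which on Vista-era Windows is one netsh line per port. A sketch, where the rule name, port and subnet are all placeholders:

```python
# Sketch: allow RDP inbound, but only from a management subnet.
# Rule name, port and 10.0.100.0/24 are illustrative placeholders.
import subprocess

subprocess.check_call([
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=Mgmt-RDP",
    "dir=in", "action=allow", "protocol=TCP",
    "localport=3389",
    "remoteip=10.0.100.0/24",   # the internal management subnet
])
```

Everyone else still hits a closed port, so the firewall keeps its value without becoming Swiss cheese.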


WN: Have you heard of anyone else doing this?

IB: I’m not aware of anyone else that has actually removed client side AV like us, however I’m aware of many companies that do not give admin rights to end users. Microsoft is clearly pushing that model with their newer operating systems so I’m sure it’s only a matter of time before this is standard procedure at every company.


WN: Do you have visitors that come on the corporate network or do you have a sandbox network? How do you protect your clients from them?

IB: We routinely have visitors coming onto our network and, as a rule, we sandbox them. However, we don’t have any real controls on it because it just comes down to which VLAN the port is configured for. It’s hardly a hardened or robust environment, especially given our lack of client firewalls. We’re investigating what will be involved in implementing 802.1x, and it is virtually certain we will be using 802.1x in the future as it solves a number of problems we have. Ideally we would pair it with a NAC implementation.


WN: If you could do it over again, what would you do differently?

IB: We got caught by those hardware edge cases I mentioned, and we also ran into problems with certain low end multifunction printers. We didn’t catch these before we made changes and that caused disruptions for the users during the transition. So instead of just testing our common apps, I’d have tested this with our peripherals as well.


WN: What advice would you offer to anyone considering embarking on this same journey?

IB: We’re talking about removing AV here, but I want to strongly emphasize that this wasn’t about removing client-side AV and then simply leaving it at that. What this came down to was that we realized a proper desktop security model was more effective than client-side AV and made the AV client unnecessary.

If you’re in a similar situation and want to move in this direction, I’d start by evaluating the end-user environment first. The user base in some orgs not only has admin rights, they use those rights extensively. Can this be feasibly changed? Then the other principal consideration is your IT staff. Fast patching is absolutely vital, so the question is how busy are your people and how disciplined is your department. Of course patching is absolutely mandatory anyway, but I feel it’s even more so in this arrangement. I’d also make sure that you have all your processes lined up beforehand. If wiping machines is SOP, then how fast can you get a machine back to its pre-infection state? Is it a matter of a quick scripted install or image load? Or do you need to break out a bunch of disks? Will the user need to recreate personal settings and preferences in their profile, or will the profile just pull down from a server? How prepared are you for app testing? Do you have a testing environment already set up? You really don’t want to be dealing with these things once you’ve already changed the install base.

Finito

There you have it. One man’s fight against antivirus software ended with him as the victor. Have you been fighting antivirus? You may want to consider Isaac’s methods for your environment. Do you have a similar story of ditching antivirus for good? I’d love to hear it.

Surprisingly, I’ve been asked more frequently if I’m on LinkedIn than if I “have a FaceBook” (a phrase that is the grammatical equivalent of scorpions in my pants). That’s probably because I tend to hang with people who are in my profession more than I do with the self-admiring crowd of vapid Generation-Y’ers who seem to treat a lack of FaceBook involvement as a sign that you’re letting the terrorists win.

Sorry to break it to everyone, but I don’t have a LinkedIn account either. And I don’t see a need for one yet. Especially because I now own my own business and don’t need it as a networking tool to find employment. That last sentence is probably terribly short-sighted and sounds a bit like some famous last words. Maybe I have a false sense of security, but I’m running the show now and I sign my own checks. I’m plenty friendly and have plenty of contacts through email, IM, Twitter and forums. I just don’t see a need for yet another means of online social interaction.

As far as finding magnificent employees, I’m sure my already considerable involvement and contacts can help me out. I just can’t justify adding another social networking bookmark with its attendant maintenance and potential for having to follow up with people contacting me through it. Overall, I’m skeptical about the cost-to-benefit ratio for me.

So here they are. The top 10 reasons why I’m not on LinkedIn:

  1. You can’t fool me. It’s just FaceBook with a tie and less kegstands.
  2. I do not want to write a recommendation for you, Mr. “We shook hands at a baseball card convention 14 years ago and now I want you to vouch for my structural engineering skills.”
  3. I haven’t yet figured out how to explain those four years when I had very few employment possibilities except making license plates… I mean… doing contract work for the department of motor vehicles. Wait, that’s brilliant!
  4. It’s going to get bought out by FaceBook someday anyway, so I’m saving myself from a letdown.
  5. By not being on LinkedIn I am afforded one more way that I can reject people.
  6. I’m embarrassed to list my education history as home school and a failed attempt at Community College. DMX’s Street School isn’t accredited either (and I can’t load a clip with one hand anyway).
  7. Do I really need one more place where all of my personal information is cataloged and resold to The Collective?
  8. Public statements about professional accomplishments tend to attract fact checking.
  9. When writing a recommendation, I don’t know if I can resist the urge to tell the world who eschews all forms of body odor control, as well as which ones eat kipper snacks at their desks, with their hands, and don’t wash up.
  10. Speaking of recommendations, I’m scared to see what former coworkers think of me and I don’t have a Google-sized legal defense fund to launch defamation lawsuits and pursue gag orders.

And before anyone asks, there will be no follow-up to this post titled “Top 10 reasons why I really am on LinkedIn”. Why? Because I’m really not on LinkedIn! Ya rly. I made one years ago, but I quickly regained my sanity and ran away from it like it threatened to install Microsoft Bob on my laptop. I went so far as to email LinkedIn and have my account permanently deleted.

How about you? What do you think of LinkedIn? Has it actually helped your career? Has it hurt you in some way?

HP has decided that you, dear IT person, do not have enough social interaction with your peers. Twitter, blogs, IRC and forums? Pah! They are but mere facsimiles of true IT Social Networking greatness.

Behold: 48upper.com. What does it mean? It’s not immediately obvious to most people that it is a reference to HP’s super-secret labs in Cupertino. But hey! It sounds all trendy, kewl webernet 2.5! Sort of. And besides, any modern site worth its weight in blackjack gum must not, repeat, must not have a comprehensible name.

The truly puzzling thing is that the second rule of webernets 2.5 states that product names must be even catchier than pharmaceutical products (“Ask your doctor about Promaxa, Zopotol and Usuxa!”). 48Upper is as memorable as shampoo ingredients. Plus, it’s closely tied to a vendor, which is a no-no for IT people, but more on that later.

In the mean time, behold their YouTube video:

Somehow after watching that I feel like I’ve been transported back to 1996. I also feel like I’ve just watched a patronizing HR video. I also feel like doing flips!

In spite of my skeptical nature, I signed up for it and am supposed to receive some kind of communication from them shortly. Why the wait? Because it’s apparently not ready for prime time. The earliest mention of this community that I could find was from March of 2010. They have a FaceBook page (Good thing I’m not on FaceBook — *ahem*), a YouTube Channel (with only one video as of this writing) and a blog.

There is also a “Manifesto” concerning what the 48Upper community was intended to be about. Nothing particularly interesting exists in it except an eye-rollingly contrived “Revolution” dubbed “SoCool-IT”.

The blog has four (EDIT: five) posts since March 2010, about one post per month, the last one being over a month ago on March 11th. (EDIT: Mere hours before this post went live, a new blog post was published.) Discouragingly, there is months-old comment spam that has not been taken care of. However, I did learn a new joke about how to be passive-aggressive and call someone a pig even after a court order commands you not to.

Call me suspicious and confiscate my X-Files collection, but it seems fishy to me. It seems like a scheme to cull information from people and attempt to inculcate brand ideals into a targeted group of potential customers.

It also doesn’t impress me that it has so much chatter in major tech publications. Why wasn’t my blog mentioned on ZDNet and InfoWorld when it launched? Because I’m not HP, that’s why. If it’s only pundits and analysts that are talking about it, what does that say about its intent? None of the independent IT bloggers on my considerable list have mentioned it. *Insert squinty-eyed expression of distrust here*

I suppose I’m not much of a help to the situation just by snarking all over it. So let me attempt to salvage some positivity and kindness and speak to anyone who is thinking about starting an IT community:

  1. If you’re a vendor, we won’t trust you easily, if at all. We know you’re trying to make money. There’s nothing wrong with that, but we also know that you have no interest in people buying your competitors’ products even if they’re better.
  2. If you’re a vendor, don’t name the community after yourself or a product no matter how obscure it is.
  3. We don’t need flash and glam. Sure we like polish and professionalism, but don’t go buck wild on the CSS, Flash and graphic design. The flashier it looks, the more we suspect that you have very close ties to an advertising agency, which means you want to sell something to us very badly.
  4. Don’t patronize or play the stereotype card much, if at all. Yes, we know IT people are known for being cranky, anti-social and prone to passive aggression. It’s funny sometimes. XKCD and Ctrl+Alt+Del play on those stereotypes and we laugh. But it gets old after a while. At the end of the day, we’re pros and would like to be treated as such.
  5. Make it open and community driven. Don’t clamp it down or make the authority structure a black box that cannot be appealed to. ServerFault does a decent job of spreading the authority to those who are involved.
  6. Be very careful with the advertising that you allow and any subscription schemes that you implement. If it’s an ad-trap or there are vendors who are community members and are given special treatment, the community is tainted. If there are special subscriptions that give you access to more forums, it sounds less like community and more like a racket. Of course the lights need to be kept on, but be careful. There’s a reason why I’ve never wanted to join Tek-Tips or Experts-Exchange.
  7. Don’t use the word “Revolution”. Ever.

To those involved with 48Upper, thanks for thinking of us! I hope it works out for you. I’ve even signed up and will give it a try when and if it ever gets off the ground. I just hope it doesn’t turn into an HP love-fest with no community control. (This coming from a person who hearts HP DL servers and thinks ProCurve is teh pwn.)

What are some good examples of online IT communities? I can share the ones that I prefer:

Did I miss any? What are your favorite online IT communities? Furthermore, what turns you off about the ones that you ignore or have left behind?

If you’ll excuse me, I have to go log into ServerFault now. I’m going to get that Fanatic badge if it kills me!

Recently one of the organizations I do work for (at least until I can get my own business off the ground) had to go through a PCI compliance check. New rules require that all organizations that handle credit cards pass these tests, not just ones that handle a certain volume of transactions.

The service we use to handle credit card transactions and customer payment information contracts with a company called Security Metrics to do security scans of their customers for PCI compliance. The head of finance at the small organization I help was the primary contact with Security Metrics.

Begin Ninja edit:

That brings up another story altogether about how I wasn’t told about any of this until I got a call asking “What’s our external IP?” — a question that does not portend a good ending to the day. I found out rather abruptly that PCI compliance is now required for all organizations that handle credit cards, not just those with a certain volume of transactions. I found this out after the website had a preliminary scan and just minutes before the office’s IP was submitted for scanning. O frabjous day! Callooh! Callay!

Where’s my vorpal blade? I feel like snicker-snacking someone.

End Ninja Edit.

For PCI compliance to be achieved, we needed some simple security scans of our office’s external IP address as well as our website. No problem… but I still wanted to know a bit more about what was expected of us to pass muster. I went to the SecurityMetrics.com website and was greeted with this magnificent spectacle of failure:

A company that makes its living off of security, specifically PCI compliance, throws a certificate error when you go to its site. The solution? Make sure to precede the domain with ‘www’. www.SecurityMetrics.com is the name registered on the SSL certificate, but SecurityMetrics.com by itself is not. I mentioned it to the head of finance, who said that he had to call his Security Metrics contact anyway and would bring it up. The second-hand information that I received from the Security Metrics rep was that it was by design. Somehow it made the site more secure. There was some hand-waving about not letting people just put different names in to see what comes back. This way you have to specifically go to their site with full knowledge of where you’re going, otherwise it will throw security errors as a sort of deterrent or block.

I will pause to let you digest that information.

No mention of this security feature is made on the Security Metrics site (at least not that I can find). I figured that it might be listed in a FAQ somewhere, since apparently the Security Metrics rep was familiar with having to hand that answer out. I can’t even find a blip about this on the web at large. I thought maybe someone had seen this and either lauded or lambasted it. So far, it seems like I’m the only one to vocalize my bemusement.

I sent out some tweets to see if anyone could come up with a reason why this would be considered more secure. Two people mentioned that using wildcard certs is a bad habit and insecure. Michael “@voretaq7” Graziano made the observation that “Technically it is: Wildcard or host-alias certs increase the scope of a secret key breach. Individual certs are always better.” Sean “@nullstream” Cody gave me this eWeek.com article as a reference. Okay, we’re on to something now. This is the first and only coherent reason I’ve heard so far. However, the reasons stopped there.

No mention of anything about “stopping people from searching to see what comes back” or “requiring people to deliberately go to the main website” was made. Granted, I’m going off of second-hand information from the finance manager who asked the Security Metrics rep, but I’m not sure that a two-node game of telephone can mutate “It’s a more secure certificate so people can’t crack it” into something like “It doesn’t let people search for other areas of our site.”

Furthermore, all HTTP requests to the site are redirected to an HTTPS connection. Why not mod_rewrite all SecurityMetrics.com requests to www.SecurityMetrics.com? Why not mod_rewrite all subdomain requests to the ‘www’? Something smells fishy. I see a lot of hand-waving and pointy hair. Most people seemed to believe, as do I, that someone made a mistake or was low on funds and only created a cert to correspond with the www name.
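If you want to reproduce the mismatch without a browser, Python’s standard library will do it. A quick sketch, reflecting the site’s behavior as of this writing; cert_matches simply asks whether the certificate presented validates for the name you dialed:

```python
# Sketch: check whether each name validates against the cert it serves.
import socket
import ssl

def cert_matches(host):
    """True if host presents a certificate valid for that hostname."""
    ctx = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.CertificateError:
        return False

for host in ("securitymetrics.com", "www.securitymetrics.com"):
    print(host, "->", "OK" if cert_matches(host) else "certificate mismatch")
```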

However, I am not a security expert by any means. Almost all of what I know about SSL certificates comes from dealing with Microsoft Exchange’s RPC over HTTPS and OWA. I’m always open to being taught a lesson. Either this is a highly specialized way of achieving extra security or it’s a rather amusing failure and subsequent excuse on someone’s part. At the very least I think a rewrite rule is in order. What say you?

Okay. I lied. I really do have a FaceBook account. To save some face (pun somewhat intended), only three people on earth know about it and are friends thereon. They have all been sworn to contractual secrecy on penalty of having to close their FaceBook account and start a MySpace page.

Here are the top 10 reasons why I really am on FaceBook:

  1. A girl that I liked asked if I had a FaceBook account hours before she boarded a plane for her home country. I then scrambled to make one when I got home. In spite of that, we never really talked again. FailBook.
  2. Just in case another girl I like asks if I have a FaceBook account.
  3. To find out as quickly as possible if the girl has any connection to Twilight, likes cats or belongs to the “I Get Violent Thoughts When I See Someone Litter” group. That way I can terminate the relationship before too much of my stuff ends up in her apartment.
  4. To keep track of people that actually go out into the Big Room and interact with analog versions of people. This knowledge might come in handy if Amazon ever stops delivering Dinty Moore and Cheetos and I am forced to leave my own little middle-earth.
  5. Because I’m sadistic and enjoy self-loathing and shame.
  6. Because I’ve already murdered any hopes of resisting The Collective by having a Google account. Now when the two entities declare war and unleash their alien hordes upon each other, they can fight over me and maybe I’ll get a Spartan Laser Rifle out of the deal.
  7. I would have been socially underdeveloped if I hadn’t seen that one friend eject four organs after a failed attempt at a keg stand.
  8. I want to keep watch on old college friends to make sure they’re not posting photographic evidence of anything we did until the statute of limitations has passed.
  9. I really do want to know which of my family members belongs to the “I Tend to Fart in Public” group as per my previous post concerning FaceBook. That way I can feel superior at the next family picnic while they make me sit at the children’s table just because they don’t trust me with the metal utensils.
  10. I was voted “Most Likely to… wait, who is this kid again?” in school and FaceBook gives me just enough of a hollow self-affirmation to keep me out of therapy.

Maybe some of those reasons were a bit facetious. Numbers 1 and 2 are very real reasons though. Maybe number 8 too.

Maybe.

What are your excuses?

I like big uptime numbers. I was unnaturally fascinated by the Cool Solutions sponsored “NetWare Server Uptime Contest”. Over six years without a reboot? Be still my heart.

uptime.FAIL

Recently, this obsession of mine with seeing a server’s uptime measured in years and not days was challenged. It all happened when I realized I needed to peruse the available updates for a Windows SBS 2008 machine that I am responsible for. It was something that I had been putting off for too long. While attempting to conceal my rapturous joy at having to perform that task (and succeeding quite well), I realized I was not looking forward to the reboot(s) that were imminent.

I hate rebooting servers. Nothing good ever comes of it. Maybe that’s a bit pessimistic, but I think the fear is justified. So many complex services starting up simultaneously, some of them recently patched and most of them ordered in a hierarchical dependency chain, put an unnecessary strain on my hairline.

Thinking about it some more, I realized that while my love of uptime was partially driven by a childish fascination with extremes, it was more a reaction to my fear of rebooting. Upon even further introspection, I saw that my fear of rebooting wasn’t so much a fear of rebooting as it was a fear of problems. SysAdmins hate problems.

However, in reality the reaction to my fears was causing my fears to actualize! Think deeply on that. There are greater applications of that thought than to the simple act of rebooting servers. Selah.

I turned to my Twitter companions for a quick opinion poll (I’m quickly recognizing Twitter as a great resource if used properly). I asked my followers if epic uptime was worthy of high-fives or if it was a sign of an unpatched server. The responses were unanimous:

Michael “@errr_” Rice: @Nonapeptide Epic Uptime == unpatched server . #sysadmin #epenis

Benjamin “@blueben” Krueger: @Nonapeptide It doesn’t take much more than dumb luck to keep a quiet back-room server up for a long time. We shouldn’t reward bad behavior.

Jonathan “@j_angliss” Angliss: @Nonapeptide depends on the platform. Usually means unpatched kernels etc, but there is ksplice now which aids it.

Jonathan “@j_angliss” Angliss: @Nonapeptide that being said, lack of regular reboots leads to other issues. #lopsa tech mailing list discussed this http://bit.ly/aTJlnS

Jonathan “@j_angliss” Angliss: @Nonapeptide generally speaking, reboots usually only apply for kernel type stuffs, windows is worse due to dll hell, and running services

Jason “@obfuscurity” Dixon: @Nonapeptide unpatched server == stupid + irresponsible + lazy

@jtimberman: @Nonapeptide Unpatched server. Service availability doesn’t require a single system to be up for ages. #SysAdmin

@dancarley: Sign of unmaintained machines + insufficient infra. Should always know what state a machine will be in after reboot.

Wow, it looks like I showed up to this thought party unfashionably late. My obsession with uptime, spawned from a mild fascination with big numbers and a major allergic reaction to problems, apparently needed to die.

What was most important for me to realize was that a schedule of controlled reboots will increase system stability and decrease the likelihood of a server not coming back up. It makes sense in retrospect, but I suppose I was frequently on the treadmill of reactionary administering and hadn’t paused to assess my assumptions.

And yet, I still like to have a quantifiable measure of success. To me, uptime meant success. If something had been running for three years, it meant that there were no problems with it (false reasoning, I know). So I posed another question to the Twitterverse. If server uptime is a poor metric, what do I use to measure a thing’s success?

@jtimberman: @Nonapeptide “Availability”. The infamous number of nines. #SysAdmin

@mibus: @Nonapeptide Service Uptime, not Server Uptime. Load-balance, cluster, whatever – users care about the Service, not the Server.

Jason “@obfuscurity” Dixon: @Nonapeptide It’s more than just service health or uptime. Don’t take the business effects for granted. Is the service doing it’s *JOB*?

Jason “@obfuscurity” Dixon: @Nonapeptide But seriously, enough of this “uptime” nonsense. I’ve said it before, there has to be PURPOSE to your monitoring. Correlate!

It makes perfect sense. Users don’t care if a server has been up for three years. If the thing is slow, has SMB authentication issues or is otherwise unhelpful, then the thing is a failure regardless of how infrequently it locks up or otherwise requires a reboot. Success should be measured in a way that is abstracted away from the base hardware and OS that the service is running on.

Has the DFS cluster been able to service user requests at all times for the last three years even through monthly patching, reboots, OS upgrades and network infrastructure changes? It has? Wow. That, my friends, is success.

When you view success from service availability rather than individual systems’ uptime, you begin to realize that a service is dependent upon more than just its binaries or a single switch in a stack or whatever it is that you happen to be monitoring. Any service is reliant on its OS, which is reliant on its hardware, which is reliant on the network, which is reliant on the power, which is… you get the idea.

With that understanding of service availability, you can easily see which parts of your infrastructure are most important, what ought to be monitored, and how.

To finish up, I would recommend that everyone go and read the LOPSA thread that j_angliss referenced: http://lopsa.org/pipermail/tech/2010-April/thread.html#4324

If you read and think on the whole thing you’ve just earned a Bachelor’s of Awesomeness in Systems Administration. Here are just a couple of the great thoughts:

“And who knows what config changes have been made that will cause the machine or some service to fail to come up in the event of an unexpected/unattended reboot. I am seriously considering adding a nagios check for every machine in our environment to issue a warning when a machine has been up for more than 3 months. If it hasn’t been rebooted in 3 months it seems less likely to come up properly or be up to date on patches.” – Tracy Reed

“We came to the same conclusion at $WORK, it also helped highlight machines that were either single points of failure or that people were just flat out scared about.” – Dean Wilson

Wow, Dean’s realization is true. Servers or appliances that haven’t been rebooted since men wore cummerbunds and women swooned are probably a sign of greater problems than unpatched services.
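Tracy’s proposed check is simple enough to sketch, too. Here’s what a minimal Nagios-style plugin for Linux hosts might look like; the 90-day threshold is just a stand-in for his “3 months”:

```python
# Sketch: Nagios-style uptime check. Exit 0 = OK, 1 = WARNING.
# Reads /proc/uptime, so Linux-only; 90 days stands in for "3 months".
import sys

WARN_DAYS = 90

with open("/proc/uptime") as f:
    uptime_days = float(f.read().split()[0]) / 86400  # seconds -> days

if uptime_days > WARN_DAYS:
    print("WARNING: up %.0f days; overdue for a controlled reboot" % uptime_days)
    sys.exit(1)

print("OK: up %.0f days" % uptime_days)
sys.exit(0)
```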

However, the idea of uptime being bad is not without intelligent opposition. For example, take this quote from the LOPSA thread:

“Having a high uptime does not necessarily mean that there have been no security updates, since you can update almost everything without a reboot.

Granted a reboot is required to update the kernel itself, but if your server is decently hardened and firewalled, exactly which kernel exploits are you vulnerable to?

I had a server that was online for over 1300 days, until it was rebooted by datacenter power issues. Since it rebooted anyway, I took the opportunity to install the only package that was not up to current, the linux-kernel. Did I suddenly feel safer? Not really :)” – Charles R Jones

It should be noted that even that argument had its share of detractors. Just read the thread a few times and come to your own conclusions. And when you do, post your thoughts here. This topic isn’t as easy as a “Yes it is!” or “No it isn’t!” decision.

So that’s it! You’ve just been witness to the death of a phobia and the birth of a much healthier and more logical outlook for this SysAdmin. I’m getting less nubby each day… thanks to the Twitterverse and some people smarter than me willing to share their experience.

How about you? Do you reboot servers on a schedule or do you dread reboots like papercuts on your eyeballs? Do you measure success with system uptime or with service uptime? As a bonus question, one that I wished I had more time to delve into, do modern-day Windows systems need more reboots for more patches than *nix machines?

Gotta go reboot that server now…

(I’m bringing over some of the better posts from my old blog. This one has recently been updated for grammar, formatting and the inclusion of the services from DNSMadeEasy.com)

I’m working on setting up an email server on my home network; however, my ISP blocks port 25 inbound (SMTP). Fortunately, the port blocking is the only restriction, and they do not seem to have a problem with me hosting my own mail server.

I need some kind of SMTP redirection service to point my MX record at: something that will accept mail on port 25 but then send the received mail to my own mail server over a non-standard port.
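At its core, what I’m shopping for is a trivially small piece of software: listen on 25, relay to another port. Python’s smtpd module (long since deprecated, and removed in Python 3.12) can express the whole idea in a few lines; the hostname and port 2525 are placeholders for my setup:

```python
# Sketch: accept mail on port 25 and relay it to the real server on a
# nonstandard port. Hostname and port 2525 are placeholders. Binding
# port 25 requires root/admin rights.
import asyncore
import smtpd

proxy = smtpd.PureProxy(("0.0.0.0", 25), ("home-server.example.com", 2525))
asyncore.loop()
```

Of course, the services below are charging for everything around that core: reliability, queuing while your server is down, and abuse handling.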

My server itself is more of a testing ground than a production mail server and thus won’t have heavy email traffic. Probably only a few dozen messages a day at the most. I started searching for services that were as cheap as possible.

I’ve compiled a list of Inbound SMTP Redirection services here. If anyone knows of more, please let me know and I’ll list them:

  • (Added July 2013) GhettoSMTP. Free SMTP redirection. Up to 4,999 emails per month. $5 per month for 5,000 or more emails. GhettoSMTP is the free service portion of UptownSMTP. Disclaimer: This is a service that I created with my consulting company.
  • No-IP.com’s “Mail Reflector” for $39.95 a year
  • DynDNS.com’s “MailHop Relay” for $49.95 a year
  • DNSExit.com’s Mail Redirection Service for as low as $19.98 per year
  • DNSMadeEasy.com’s Mail Server Forwarding $18.95 per year for one domain
  • (Added Jan 2011) MX GuardDog: They offer free inbound spam and antivirus cleansing for your email as long as you put a link to their site on a site of yours. In essence it’s a link exchange. It’s hidden in their FAQ, but they will redirect mail down an alternate port for you. Personally, I’m a bit suspicious. It seems too good to be true. Perhaps they’re analyzing the spam and viruses for other purposes which make it worth their while. Oct 2011 Update: Check out the comments below, specifically the commenter seemebreakthis and his negative experience with MX GuardDog.

IMO, this entire service sector is overpriced and awaits some good competition to drop it to more reasonable rates. On top of my displeasure with the general price structure of this service, I’m broke and don’t want to pay any money at all.

I started looking for free services. Seems like an impossible thing to expect, right? Almost… but not quite. I found two services that were willing to give free limited accounts out.

rereoutmail.com – This site advertises a free account that will forward all of your emails over a nonstandard port, never delete mail, and hold mail if your server is offline. However, there are limitations on the number and size of emails per day that you can receive.

Confusingly, the limits on the free account are listed on the home page, but they conflict with the limits listed on the accounts page. The home page says that you can receive 50 emails or 50 MB per day, with a 1-hour delay for each email over that limit. The accounts page shows a limit of 10 immediately delivered emails per month (!) with a one-hour delay per email over that limit.

I would have signed up and trialed the service; however, the registration page says that they are currently in a private beta. They allow you to sign up for notification when the public beta goes live. I encourage everyone who reads this to sign up in the hopes that it may encourage the creators to complete the project. I have no idea when that site was created. There’s a conspicuous lack of a date anywhere, which makes me slightly suspicious.

I should also note that the pricing for the paid accounts doesn’t seem to be competitive, especially when you realize that the prices are in Euros, which are valued higher than many of the world’s dollars (Canadian, US, Australian).

RollerNetwork.us [Update: Roller Network has changed their account features and this is no longer applicable] – Roller Network offers a free redirection service that is not time limited. However, they explicitly state that they reserve the right to limit, reconfigure, discontinue and otherwise kick any and all free accounts to the curb at Roller Network’s discretion.

The free account offers secondary MX records, SMTP redirection and secondary DNS. The limit on messages seems generous: 200 messages or 10MB can be relayed per “cycle”. I still can’t understand exactly what a “cycle” is even after reading the definition on the site. I’ll reproduce it here and maybe someone can enlighten me:

A cycle is currently defined as the previous week’s worth of mail traffic (7 calendar days) with a 72-hour resting period after any of the limits are exceeded. During this rest period, mail domains are deactivated and will refuse new messages. After the rest preiod [sic] has expired, the mail domains are reactivated. However, if after a 72-hour rest the 7 day total still exceeds the limits, another 72-hour rest will be applied.

I think that means the message count is reset every 7 days plus 72 hours for every time you go over the limit until you’re out of the 7 day cycle and into the next one.

A commenter named “Sam Allen” on my old blog had this to say about the scheme:

From reading the description, I’d suggest that ‘cycle’ means a combined total of the previous 7 days at any given time. The total wouldn’t be ‘reset’, merely recalculated every day to drop the ‘last’ day and add the most recent one.

The 72 hr thing means that if your total in the previous 7 days goes over the limit, your account is effectively closed for 3 days. If after those three days, the previous 7 day total (including 3 down days and 4 ‘up’ days) is STILL over the limit, you get another 3 days down.

Either way, it would be helpful to have examples or diagrams drawn out. Then again, if an account structure tempts me to pull out a UML tool to help figure it out, maybe it’s time to rethink the account structure.
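In that spirit, here is Sam’s reading rendered as a few lines of Python instead of UML. This is my interpretation of his interpretation, not Roller Network’s documented algorithm, and the traffic pattern is invented:

```python
# Sketch of Sam's reading: a trailing-7-day total, with a 3-day
# closure whenever it exceeds the limit. My interpretation only.
LIMIT = 200  # messages per trailing 7 days (free account)
REST = 3     # days the account stays closed after exceeding it

def simulate(daily_counts):
    relayed, rest_left = [], 0
    for day, wanted in enumerate(daily_counts):
        if rest_left:
            rest_left -= 1
            relayed.append(0)  # closed: inbound mail refused
            yield day, "closed"
        else:
            relayed.append(wanted)
            if sum(relayed[-7:]) > LIMIT:  # trailing 7-day window
                rest_left = REST
            yield day, "open"

# At 40 messages/day, the trailing total passes 200 on the sixth day,
# closing the account for the following three days.
for day, status in simulate([40] * 14):
    print("day %2d: %s" % (day, status))
```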

Nonetheless, that will fit my test requirements. Thanks Roller Network! They also have some decent-looking for-pay services that seem to compete nicely with similar offerings. Oh, and their IPv6 support roxors!

Do you have any experience with inbound SMTP redirection services? I’d love to hear your experiences and suggestions.
