(This is an old post migrated from my Blogspot blog of years ago. I was speaking to some colleagues about “Best Practices” and how they’re usually a cover for the lazy or at best a baseline of mediocrity. While I collect thoughts for a more in-depth post, I’ll leave this old one here.)

My mind is lazy. It’s a problem that I’m aware of and am working against as hard as I know how. There are two main symptoms of my mental laziness. The first is that I want to remain as comfortable as possible by staying with what I know. The second is that I just don’t want to spend the time it would take to learn something new.

As a result, when I want to do something a certain way I skew my views, bias the facts, ignore fallacies and generally rage against the scientific method in my pursuit of all things Me. I’m heinously selfish.

While reading through the blog of Virtualization maven Jason Boche, I came across a convicting jewel of a statement within this post that detailed his in-person defense of a Virtualization design while pursuing his VCDX certification. I was happily reading through the longish post when out of nowhere, in the eleventh paragraph, Rational Thought punched me in the face thusly:

Anything you list in your design you need to be able to speak to. If you cannot speak to everything in your design, then how do you know it is appropriate for your design? “Because”, and “Best Practice” are not complete answers.

Most of you, as seasoned IT professionals, are probably staring at the screen and mumbling something like “Well, duh!” Go back and read the first sentence of this blog post. Do you see why Jason Boche’s sentence was so devastating to me? I stopped and re-read his two-sentence dose of reality several times.

When I’m looking to create a solution, I heavily favor what I’m most comfortable with by using words like “Industry Standard”, “Best Practice” and “Standards Based” to soothe the pangs of protest from my conscience. In reality, I can sense that I’m probably only pursuing The Path of Familiarity or avoiding The Path of Lots of Reading.

Don’t get me wrong. I really enjoy learning new things. That’s one of my favorite things about this industry. But sometimes, in spite of enjoying this field of work (and this is the confusing part that I haven’t been able to sort out yet), I just don’t want to go through the effort of learning a new thing.

Other times, I make a design decision and know that I don’t really know why I’ve chosen a certain element. I’ll glance at it and think “Eh, it’s a standard.” But shouldn’t I be able to defend that decision on a more granular level?

I admit that you can’t know everything about any industry, especially info tech. I can’t break every link in each chain that makes up a solution down to an atomic level, because if I did, I’d be reading books on particle theory and leaving out saucers of cream for Schrödinger’s cat.

And furthermore, I firmly believe that there is often no single definitive “best solution”. The decision on what is the best solution is legitimately influenced by what you’re most familiar with. More on that in a later post.

Can you defend your design decisions? Can you speak to each element with something more thoughtful than sales department buzzwords like “Best Practice” and “Industry Standard”? Are your decisions based on more than familiarity and comfort? Here’s to admins that embrace the scientific method.

Free stuff rocks. As the Open Source business model has continued to grow in popularity and financial success, more and more companies are starting to offer a lite version of their flagship product for free. Others offer time-limited versions of their full product instead. Either way, the principle remains the same: to make an informed decision, people need to try out your product before they can become happy customers. It’s no use snagging a paying customer who becomes an unhappy one upon realizing that the product does not perform well in their scenario.

However, there’s a second half to the philosophy of giving things away for free. Free means more than money. Free should also mean bereft of intimidation, expectation and especially obligation. I shouldn’t have to worry that every call that rings my phone will be a Sales Don with a black belt in the bloodthirsty discipline of aye-fleece-yu. I shouldn’t have to peek out my curtains before walking to the mailbox. Once I start practicing SERE techniques before going to work, I think that’s a sign that a line has been crossed.

There are some places that only want minimal information: a name and an email address. I’m okay with that in the same way that a person is happy if a mugger only wants their paper money and not their whole wallet, backpack and gold tooth. Other places are not so lenient. Each field is required in a multi-field form. First name, last name, salutation, business title, company name, number of employees, address (some even check for address validity so 1313 Mockingbird Ln won’t work. Curses!), cell phone, business phone (with extension), yearly budget, number of servers, number of PCs, number of mobile devices, number of household pets, yearly salary, highest level of education and yes, in some cases even a credit card number.

Did someone really think that demanding all that information would be a good idea? Furthermore, are there people out there who think giving this amount of information (accurately, anyway) is a good idea? I’ve had to do a bit of my own research on eCommerce and conversion rate theory, so I know quite a bit about the need to reduce the hurdles in a form and thereby increase submissions. I find it very hard to believe that a detailed cross-examination is anything but a barrier to conversions. Perhaps some would say that a certain level of detail will weed out the merely curious from the more serious leads.

Here are some of the ways that I translate a “sign up for a free trial!” barrier:

“Give us all your details so we can air drop a sales ninja to your CTO or contracting employer!”

Because the merits of your product require a little extra help from shiny brochures, buzzspeak and toothsome sales associates.

“Our business can only stand on shady credit card scams so we’ll conveniently charge you some hidden fees or opt-in charges for the skrill we so desperately need”

Just don’t.

“We will stalk you and everyone who works at your company using LinkedIn, FaceBook and high school class reunions until we can find a C-level exec to drug and brainwash with our scurrilous ways.”

Because if the people who actually know technology can’t be won over, that’s just a small bump on the road to Salesville.

What do you think when there is a barrier to getting a trial software product? What about when you’re required to fill out a ton of information first? Do you have any sales horror stories surrounding what should have been a very simple product trial? Share in the comments below.

Previously, I explored how to view all the users that are currently logged into my Linux server. A natural extension of that desire is to see all users who have logged into the server in the past. While current users are tracked in the utmp file, past logins and logouts are recorded in the wtmp/wtmpx file.

One way is to use the `last` command. My regular work laptop’s `last` output is rather boring:

wesley pts/7 :0.0 Wed Feb 15 19:57 - 20:42 (00:44)
wesley pts/6 :0.0 Tue Feb 14 20:53 - 21:18 (00:25)
wesley pts/5 :0.0 Tue Feb 14 20:46 still logged in
wesley pts/4 :0.0 Tue Feb 14 17:02 - 20:46 (03:43)
wesley pts/3 :0.0 Tue Feb 14 16:34 still logged in
wesley pts/2 :0.0 Tue Feb 14 16:25 - 16:26 (00:01)
wesley pts/1 :0.0 Tue Feb 14 16:24 still logged in
wesley tty1 :0 Tue Feb 14 12:28 still logged in
reboot system boot 2.6.35.14-97.fc1 Tue Feb 14 12:27 - 22:41 (1+10:14)
wesley tty1 :0 Tue Feb 14 09:19 - down (00:58)

If there is a specific user that you’d like to home in on, use `last [username]` thusly:

# last root
root     pts/0        [ip removed]. Tue Feb 14 18:22   still logged in
root     pts/0        [ip removed]. Sun Feb 12 00:42 - 01:50  (01:07)
root     pts/0        [ip removed]. Sat Feb 11 16:24 - 19:41  (03:17)
root     pts/0        [ip removed]. Sat Feb 11 16:21 - 16:23  (00:02)

A useful switch when trying to home in on remote logins is -a, which appends hostnames to the end of the table. -d will do a reverse lookup on remote IP addresses as well. One useful application is to see from which IP addresses and hosts a certain user account accesses your server. In my case, I know that only two people should theoretically have access to a certain FTP address. If I see that user account logging in from IP blocks in Namibia, I should probably be worried.
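For example, to pull hostname details for a single account (the username here is illustrative):

last -a -d someuser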

Another place to look for past logins is the /var/log/secure log files. They will also show failed login attempts. You could perform the following to find certain strings that show whatever events you’re interested in:

cat secure* | grep Accepted

However, you will be in peril of winning a “Useless Use of Cat Award”.
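The cat-free equivalent keeps the judges happy:

grep Accepted /var/log/secure*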

A similar but different command is `lastlog`, which by default prints out each user account on your machine along with the account’s last login time.

someuser@someserver [/]# lastlog
Username Port From Latest
someuser pts/0 [ip removed]. Tue Feb 14 18:22:32 -0500 2012
bin **Never logged in**
daemon **Never logged in**
adm **Never logged in**
lp **Never logged in**

Lastlog itself merely scries into /var/log/lastlog. With the -t switch you can limit how far back it looks when reporting last logins.
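For instance, to show only the accounts that have logged in within the last week, or to check a single account:

lastlog -t 7
lastlog -u wesley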

As a bonus, try `lastb` to see all the failed login attempts on your machine. Prepare to weep.

How do you figure out who was logged into your server and when? What better tools do you know of? I know none of the above are truly audit-level methods. Let me know in the comments below.

While working on a Linux machine, you will very likely have a “What the heck just happened and who the heck just did it?” moment. This is when you’ll want to quickly see who’s currently logged in.

Before you go any further, you should acquaint yourself with the concept of a utmp file (possibly also known as the utmpx file). A utmp file keeps track of currently logged on users and is what any command will ultimately reference to bring you the desired information.

Firstly, you can try the `users` command. However, the information garnered is pretty sparse. It’s merely a username repeated as many times as there are login sessions for it. In my case, on my laptop at the very moment I write this, I see this:

[wesley@Fedora1530 ~]$ users
wesley wesley wesley wesley

Two other tools that will give you vastly more information are `w` and `who`. Running `w` on the same laptop and session as I did `users` above, I get this output:

USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
wesley tty1 :0 Tue12 39:54m 1:34m 0.06s pam: gdm-password
wesley pts/1 :0.0 Tue16 1:38m 0.45s 0.44s ssh user@remoteserver1.com
wesley pts/3 :0.0 Tue16 1:38m 0.49s 0.47s ssh user@remoteserver2.com
wesley pts/5 :0.0 Tue20 0.00s 0.25s 0.03s w

That is considerably more information. The default output of `who` looks like this:

wesley   tty1         2012-02-14 12:28 (:0)
wesley   pts/1        2012-02-14 16:24 (:0.0)
wesley   pts/3        2012-02-14 16:34 (:0.0)
wesley   pts/5        2012-02-14 20:46 (:0.0)

`who` defaults to simply showing the Name, Line, Time and Comment columns (at least in my version on Fedora 14); however, many other bits of information can also be added. Check the appropriate man pages.
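For instance, in the GNU coreutils version, -H adds column headings and -u adds each session’s idle time and PID:

who -H -u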

Once you know who is logged in, whether or not you then harass them with `write` or pkill everyone who isn’t you is completely up to your discretion.

How do you like to figure out who is logged into your machine? Any pro tips?

A while back I wrote an article for Simple Talk concerning one way to reset a Windows password on a machine that you have physical access to. I decided to make an accompanying screencast to show it in action. Below are two identical videos, one on YouTube and the other on Vimeo. Choose whichever video site you prefer.

Note that the YouTube video is rendered in 720p so crank up the quality and watch in full screen. For some reason the Vimeo video isn’t in HD even though I have one HD upload per month.

Vimeo: (video embed)

YouTube: (video embed)

Etc. Notes

Let me know if you spot any glaring inconsistencies. The presentation portion of the screencast was made with Prezi. I used the Windows version of Camtasia Studio 7 to make the screencast.

Do you have any topics that you’d like to see explained in a screencast? Let me know in the comments below.

(P.S. Yes, I realize now that I start sentences with the words “so” and “now” far too much. I’ll work on fixing that in the next screencast I do.)

I recently had a moment involving a CentOS server that caused me to circle the wagons and ask “Who just did what in their shell?!”

After quickly checking to see who was currently logged in (as well as those that had just recently been logged in), I wanted to see the command history for each user on the server.

Before I go any further, let me say a few important things:

There are more shells than bash

Each shell has its own history options and files. Don’t assume that because you found all the .bash_history files on a machine that you have all shell histories.

And all the zsh proselytes said “Amen.”

.bash_history is a suggestion not a rule

Bash’s history file (that’s the $HISTFILE variable) can be changed. Just because you found all the .bash_history files on a machine doesn’t mean you have all of bash’s history.
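For example, a user can quietly point their history at nothing in a shell startup file:

# in ~/.bashrc: discard this shell's history entirely
export HISTFILE=/dev/null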

Bash history is a convenience not a reporting tool

Bash history can easily be altered for both good and bad purposes. It is not to be relied on as a way of seriously auditing what has been done on a server. For that kind of thing, look at auditd.
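As a taste of what auditd can do, a single rule (a sketch; the 64-bit syscall ABI is assumed) records every command executed on the system:

# log every execve() syscall under the key "commands"
auditctl -a always,exit -F arch=b64 -S execve -k commands
# review the results later with: ausearch -k commands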

Scan all .bash_history files

The above notwithstanding, if you want to quickly scan your machine’s .bash_history files consider the following options.

The first is dead simple, and I thank @etrever, @evilchili and Gilles over at unix.StackExchange.com for this method (I’m still getting my *nix chops).

grep -e "stuff goes here" /home/*/.bash_history

Yep, simple as that. This is of course assuming that 1) All user folders are standardized, and 2) your history files all share a common name. If the previous two things are true, this is a great, quick way to see things like… oh… say… Who just went all chmod -R 777 on the httpdocs folder?!

However, if you want a slightly more robust way of searching through all bash history on a machine that takes the home folder ambiguity out of the equation, Gilles from the Unix & Linux Stack Exchange had an awesome solution.

getent passwd |
cut -d : -f 6 |
sed 's:$:/.bash_history:' |
xargs -d '\n' grep -H -e "$pattern"

I had never seen the `getent` tool before. It gets entries from the following administrative databases: ahosts, ahostsv4, ahostsv6, aliases, ethers, group, gshadow, hosts, netgroup, networks, passwd, protocols, rpc, services, and shadow. `cut` splits each line on colons and selects the sixth field, which is each user’s home directory. `sed` works its magic to append the probable location of the .bash_history file to each path. Finally, `grep` is fed each path and searches for our pattern.

Certainly, if there is a question about the existence of other shells, or if you want to be certain that your history file really is called .bash_history, you’ll need to add some extra logic. However, for my scenario, this was enough to get me going.
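As a sketch of that extra logic (zsh’s default history location is an assumption here; extend the candidate list to match your environment):

pattern="chmod -R 777"
getent passwd | cut -d : -f 6 | sort -u | while read -r home; do
    # check each plausible history file, skipping any that don't exist
    for hist in "$home/.bash_history" "$home/.zsh_history"; do
        [ -r "$hist" ] && grep -H -e "$pattern" "$hist"
    done
done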

Unfortunately, I was made painfully aware of how bash history is a mere user-level convenience and not an auditing tool. Nothing malicious was done to the server, and nothing terribly bad happened; however, as I looked deeper into what could have occurred, I realized that a much more thorough audit trail might be needed in the future.

How do you handle shell history? Do you implement any special tricks to make it more reliable or do you use an entirely different system to keep track of commands that have been run?

My Problem

While performing the migration of my WordPress 3.3.1 blog, I used the “export” and “import” features to move my content. Upon trying to import the .xml file that the old installation’s export feature had created into my new WordPress installation, I hit upon this error:

Sorry, there has been an error. This does not appear to be a WXR file, missing/invalid WXR version number

My Solution

Downgrade to WordPress 3.2.x, perform the importation and then upgrade to the latest version.

Downgrading can take two forms: reinstalling WordPress from the ground up using an older version, or taking the files of an older version and overwriting all existing 3.3.1 files with the exception of the wp-config.php file. Do not overwrite the wp-config.php file of the WordPress installation that you are trying to import into.

To find old versions of WordPress, visit the WordPress Release Archive. Make sure not to download one of the beta or release candidate files. Also, be aware if your site uses the MU version of WordPress. The files for MU installations are separate from the non-MU files.
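As a sketch of the overlay approach (the version number and paths are illustrative; back up your site first):

wget https://wordpress.org/wordpress-3.2.1.tar.gz
tar -xzf wordpress-3.2.1.tar.gz
# overlay the older files while preserving the live wp-config.php
rsync -a --exclude wp-config.php wordpress/ /var/www/html/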

Other Solutions

There are other possibilities as to why you cannot import your XML file.

The first is to look in the XML file and, near the top, add the line “<wp:wxr_version>1.1</wp:wxr_version>” (without quotes) just after the language definition declaration. For more information, see this WordPress Support thread.

Another possibility is that PHP safe_mode might be turned on and causing problems. safe_mode being on does not in itself guarantee that it is the cause of this problem, but it could be. You will need to contact your web host and ask if safe_mode is turned on. It is common for shared web hosts to enable it.
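If you have shell access, you can check for yourself first, bearing in mind that the CLI’s php.ini can differ from the web server’s:

php -i | grep -i safe_mode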

When looking at a list of filesystem objects, I have trouble visually parsing rwxr-xr-x or similar permissions. It’s probably something with my eyes, but I’d much prefer to see 755. More than just a visual preference, somehow I just “get” it faster than seeing letters and dashes. Surely there must be some simple switch in ls that will do this, right?

Wrong.

However, a quick-n-dirty way of doing this is with the “stat” command using the -c switch. Stat itself will show you file or filesystem status information. The -c switch allows you to customize the output. To see file permissions in octal, use the “%a” format sequence. I toss in a few other format sequences for my tastes:

stat -c "%n %a %G %g" IMG_0346.MOV
IMG_0346.MOV 664 wesley 500

The file’s name is shown as a result of %n, %a shows octal permissions, %G shows the owner’s group name and %g shows the owner’s group ID.

To see the octal permissions of the contents of an entire directory (in this case my Downloads directory) simply use a star thusly:

stat -c "%a %n" Downloads/*

664 Downloads/localhost.sql
664 Downloads/premium-pixels-fancy-pants-blog-magazine-theme.zip
755 Downloads/premium-pixels-package
644 Downloads/readme.html
664 Downloads/RobDuck1.JPG
664 Downloads/RobDuck2.JPG
664 Downloads/socialite-modern-wordpress-theme.zip
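If you want something closer to an ls habit, a shell alias is a quick approximation (a sketch using GNU stat; %U adds the owning user’s name, and you can tweak the format to taste):

alias lso='stat -c "%a %U %G %n" -- *'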

This isn’t my ideal, however. I’d really like ls to have the option. Perhaps there’s some bastardized and recompiled ls out there. Have you ever wanted to see octal permissions on your filesystem lists? How did you go about achieving that goal?

My Problem

Remote desktop connections to a Windows Server 2008 R2 Enterprise server were absurdly slow. Refresh times were as high as ten seconds. No amount of lowering the connection settings on the remote desktop connection would increase the speed. This problem occurred from Windows Vista and 7 clients connecting to the Windows Server 2008 machine. It did not happen when connecting via RDP from Linux machines.

My Solution

On the Windows Server 2008 machine, navigate to the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Add a new DWORD and give it the name DisableTaskOffload. Set the value on the new DWORD to 1.
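If you prefer to script it, the same value can be set from an elevated command prompt using the built-in reg tool:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f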

More Information

Many articles on the web about slow RDP speeds will focus on Receive Side Scaling (RSS) and Autotuning. I tried turning both of those off using the following commands from an elevated command prompt:

netsh interface tcp set global rss=disabled
netsh interface tcp set global autotuninglevel=disabled

That did not help matters any. For more information on Windows network offloading, see this old article from 2001.
