One of the perennial problems I see no matter where I work or who I contract for is mysteriously slow network speeds to or from a Windows machine. I’ve amassed quite a list of tips and tricks for addressing this issue, and now I’m listing them all here.

I won’t go into a thorough treatment of exactly what’s going on with each command and feature that is being enabled or disabled. I’ll leave that as an exercise for the reader. This is a quick-n-dirty jumping off point for deeper problem resolution procedures. Also, these troubleshooting steps are not offered in any particular order, with the exception of the first four, which try to scope the problem down to hardware versus software.

With no further rambling, here is my list of tricks when trying to solve a slow network connection on a Windows host:

Start With the Physical Layer

It’s almost always the easiest thing to check and is more often the cause of problems than most people would suspect. Thanks to Pauska in the comments below for reminding me of this. Switch cables out, switch NICs if possible (I like to keep a USB NIC around for this), try different switch ports, wall jacks — everything. It’s quick work and can reap a quick reward. Plus, with the physical layer out of the way, you can trust the observations that you make in the software layer.

Boot From a Live CD

Remove the OS from the equation and see if you can isolate the issue to hardware. Grab a Live CD that has an OS on it with support for your hardware. Once you boot from it, perform some tests on the bandwidth to see if the problem still exists. If so, then you may be safer in assuming that the problem exists somewhere other than the operating system (unless the same configuration that’s causing the problem exists in both operating systems).

Search for Network Related Errors

Perhaps there are a lot of collisions on the network, or the network card is logging a large number of CRC errors. A quick way to see current TCP/IP statistics is to run netstat -s. Look for any interesting numbers that speak to receive errors or re-transmissions.

Use Performance Monitor counters to analyze error data live. If errors and re-transmissions seem unusually high, you have a jumping off point for further exploration.

Inspect Traffic with Network Monitor

Launch Microsoft Network Monitor or Wireshark (or whatever packet sniffer you prefer) and inspect the packet stream. There will almost certainly be a trail of information that can lead you to the ultimate problem. The trouble is: can you persevere to the end? It’s no easy thing to digest TCP conversations en masse.

In reality, this is where the root cause analysis will begin and often where it will end. However, if you want to flail at some network related options to try and narrow down the culprit, read on.

Disable Windows Network Task Offloading

Add a DWORD registry value named ‘DisableTaskOffload’ with a value of 1 under the registry key HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.

Check whether it already exists, and what its value is, with the following PowerShell command:

Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name "DisableTaskOffload"

Check the whole parent hive if you want:

Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Create the new registry entry:

New-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name "DisableTaskOffload" -Value 1 -PropertyType "DWord"

Disable TCP chimney offloading

You will need to disable TCP offloading in the Windows OS as well as in the hardware’s drivers; we’ll talk more about disabling hardware offloading in the next section. By the way, TCP offloading only works if it is enabled both in Windows and in the hardware’s driver.

First, let’s check to see if any connections are currently offloaded to hardware with netstat -t:

InHost in the Offload State column means that the TCP connection is being handled… well… in the host. If any connections show as being offloaded to the hardware, know that disabling offloading will wreak some havoc with them.

To determine the state of offloading within the OS, run the following at a command prompt:

netsh int tcp show global

Look at the state of the “Chimney Offload State” setting. If it’s enabled, disable it with the following command:

netsh int tcp set global chimney=disabled

Disable All Hardware Network Offloading

Now you need to inspect your network card’s capabilities. Go to Device Manager, open up the properties of the NIC and select the Advanced tab. Search for any options that reference offloading. TCP, UDP, checksum, whatever. Disable it. “But! But! Offloading roxors!!” I know, this is just for troubleshooting purposes. Once you figure out where the bottleneck is, you can start determining the root cause. That’s for later though.

Each card has different features and terminology, so I can’t be more specific. For now, just disable anything to do with offloading.

Disable Receive Side Scaling

Check to see if it’s enabled with the following command:

netsh int tcp show global

Disable receive side scaling with:

netsh int tcp set global rss=disabled

Disable NetDMA

Once again, check to see if it’s enabled with the following command:

netsh int tcp show global

See if the registry key for the setting exists using PowerShell:

Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name "EnableTCPA"

To disable it, create the EnableTCPA registry value and set it to 0. Using PowerShell:

New-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name "EnableTCPA" -Value 0 -PropertyType "DWord"

Disable Autotuning

Check to see if autotuning is enabled with:

netsh interface tcp show global

Disable it with:

netsh int tcp set global autotuning=disabled

Uninstall Remote Differential Compression

Go to Add/Remove Programs or Programs and Features (run >> appwiz.cpl). Choose the option to turn Windows features on or off. Uninstall Remote Differential Compression.

More information about RDC can be found at the Wikipedia page on Remote Differential Compression.

Alter NIC and Switch Port Speed and Duplex Settings

First, document your NIC’s current link speed and duplex settings. Then document the switch port’s settings.

In Windows, go into Device Manager, open the NIC in question and go to the advanced tab. The exact naming of the property for the card’s speed and duplex settings will vary, but you’ll know it when you see it.

Auto-negotiation can be a pain. Set your NIC to 100 or 1000 Mbps full duplex if possible. Continue to frob with the possibilities. Personally, I wouldn’t bother with half-duplex settings, but, as they say, any port speed in a storm!

Update your NIC Drivers

Sounds simple. Sounds stupid. It works. Do it.

Not only should you use the latest drivers, but also look for discussions concerning your network card and its performance relative to the driver version. Perhaps it’s an older driver that you need. See if you can track down older versions and try those.

Check for Third Party Security Tools

If an antivirus utility is set to scan live traffic for malicious payloads, that can negatively impact throughput. Check to see what security tools are installed on the node that is having throughput problems and temporarily disable any features that affect live traffic.

Reset the TCP/IP Stack

You know that you’re flailing when you start resetting the TCP/IP stack. Read more about the procedure in Microsoft KB299357. At an elevated command prompt, run the following command:

netsh int ip reset resetlog.txt

Reset Winsock2

To read more about the practice of repairing winsock2 corruption, read Microsoft KB811259. To reset winsock, use the following command:

netsh winsock reset

Reset only the catalog with the following command:

netsh winsock reset catalog

Note that if you are using Windows XP SP1 or earlier, you will have to manually reset winsock using the instructions in Microsoft KB811259.


Do you have any tips or tricks for a slow Windows network connection? Let me know in the comments below and I’ll include them here!

My Problem:

Using Internet Explorer 9 on a brand new installation of Windows 7 Professional, a user could not open certain PDFs that were located on a website. Some PDFs would appear to begin downloading, and then after a few moments a simple error message would pop up:

The file is damaged and could not be repaired.

The document could be opened if it was first downloaded and then opened with Adobe Reader. It was only a problem if IE tried to open it in a browser tab.

Oddly, various other PDFs that were accessed with the browser could open as normal.

Possible Solutions:

There are two possible solutions to this issue that I am aware of.

First:

The problem might be due to an overflowing Temporary Internet Files folder. I noticed that other PDFs could be viewed in IE; the ones that could open were smaller than the PDFs that were giving the user problems.

A temporary fix is to delete all temporary internet files and restart IE. A more permanent fix is to empty the temporary files folder at each exit. You can also increase the disk space available to the temporary internet files folder.

To delete temporary internet files upon exiting IE, go to Tools Menu >> Internet Options >> Advanced Tab >> Security Section >> Check the box next to “Empty Temporary Internet Files folder when browser is closed”

To increase the amount of space on your hard drive that IE can use to store temporary files, go to Tools Menu >> Internet Options >> General Tab >> Browsing History section >> Click the “Settings” button >> Edit the number next to “Disk Space to Use”

Second:

A second possible solution is as simple as updating Adobe Reader. I know, I know – it’s too simple. However, check to make sure you have the latest version. If you do, uninstall and re-install it.

In the user’s case, it was an older version of Adobe Reader. I updated it to the latest version (Adobe Reader X point something-or-other as of the writing of this post).

Other Possibilities:

There remain two other major culprits. The first is IE itself. Some have said that one particular version of Internet Explorer causes the problem, but no one seems to agree on which version solves it, since seemingly every version of IE going back to version 6 has experienced this issue. That leads me to believe that the problem is rooted in something fundamental to IE, and/or to the Windows OS in a way that IE relies upon. You might want to try uninstalling IE and re-installing it.

Lastly, make sure that you have the proper updates for your installation of Windows. Another one of the potential problems that existed in my scenario is that the client machine did not have the latest Windows updates.

The Task

I have a situation on a CentOS server where I need to grant one low privileged user account the ability to run a single command as root. Here’s how I did it:

Enter visudo and the sudoers File

This probably deserves its own post, but for now let it suffice to know that if you are editing the sudoers file, you need to use visudo. It checks your syntax before saving the file which will prevent you from swearing like a drunken stevedore in between hysterical crying fits.

Run visudo as root and scroll down to the section that assigns rights to user accounts. You’ll almost certainly see a line that says:

root ALL=(ALL) ALL

That’s the beginning of the section that we’re interested in. But, what does that even mean? Let’s talk about that before we edit anything.

User lines in the sudoers file follow this syntax:

who host=(accounts) commands

Broken down, that means:

  • who: the account that is having its ability to use sudo privileges modified
  • host: the system that the account is able to run these sudo commands on (the sudoers file can be shared across multiple computers, so that’s when this would come into play)
  • accounts: what other accounts on the machine the user running sudo can act as
  • commands: the commands that the account represented by who can run as sudo

That means root ALL=(ALL) ALL is broken down thusly: The root account can use sudo on all computers that have this sudoers file and assume the identity of any of the accounts on those machines to perform any command that is available on them.
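To make that breakdown concrete, here’s the root line pulled apart with nothing but shell parameter expansion. This is purely illustrative of how the fields are laid out on the line, not how sudo itself parses the file:

```shell
line='root ALL=(ALL) ALL'

who=${line%% *}             # everything before the first space  -> "root"
rest=${line#* }             # the remainder: "ALL=(ALL) ALL"
host=${rest%%=*}            # everything before the "="          -> "ALL"
accounts=${rest#*\(}        # strip through the opening paren
accounts=${accounts%%\)*}   # keep what's inside the parens      -> "ALL"
commands=${rest#*\) }       # everything after ") "              -> "ALL"

echo "who=$who host=$host accounts=$accounts commands=$commands"
# -> who=root host=ALL accounts=ALL commands=ALL
```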

There are a few other additional options that can be placed on the line to further define each user’s sudo privileges. I won’t go into detail about those options (mostly because I just learned about them the other day and I’m still clueless), but you can read much more about the whole thing using its man pages.

The specific option that I’m interested in is the NOPASSWD option. You see, I need to call sudo to run a specific command as root from within a script without being prompted for a password. In that case, I place the NOPASSWD option just before the commands that I want to use as root without a password. It would look something like this:

backupuser ALL=(ALL) NOPASSWD: /usr/bin/backuptool

And that is how I restricted an account’s ability to use one single command as root using sudo without a password. Any thoughts? Caveats? The sudoers file is a behemoth invention that can do quite a few different things. Let me know your ideas below.

(Today’s post is by guest author Erik Skålerud!)

Following up on Wesley’s post “What Version of CentOS / RedHat am I running?” and also to answer a Twitter question from Barry Morrison – here’s how to quickly check which version of Ubuntu you’re running.

Both of the following methods should give you identical results.

Method #1

lsb_release -a

The result should be similar to this depending on your version/release:

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.4 LTS
Release:        10.04
Codename:       lucid

Method #2

cat /etc/lsb-release

Result (again, depending on version/release):

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
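An addition of mine, beyond Erik’s two methods: newer Ubuntu releases (and most modern distributions generally) also ship the freedesktop.org-standardized /etc/os-release file, which carries the same information in the same KEY=value format:

```shell
# /etc/os-release is plain KEY=value pairs, so single fields are easy to grab
grep '^PRETTY_NAME=' /etc/os-release
```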

(Today’s post is by information security analyst Scott Pack!)

One of the worst problems I have with tabbed consoles is knowing exactly which console I’m working in. Sure, I can simply look at either the shell’s title, or the prompt, but I still inevitably will type a command into the wrong console. Normally this will result in something more akin to “file not found” because you catted the wrong file, rather than “Now running koan on all servers” because you did something in mcollective prod instead of test. While chatting up a guy at work about this problem we threw around changing the background color depending on which system you’re on.

This is one of those things that’s a little trickier than one might think at first blush. For instance, my first thought was to overload the ssh operator and make the changes client side. This works for me since I use cygwin as my ssh client and proxy through an aggregator (see this ServerFault answer about proxying</shill>). My coworker, however, uses puttycm which pretty much excludes any kind of local overloading. In the end, I opted to have the colorization occur on login. This made the process more portable, but it also meant that I have to make changes to my ~/.bashrc on every system.

I wanted this setup to work for both of our environments. For my uses, I wanted the background to be reset whenever I logged off a remote system, so that if I *did* manually ssh from one system to another, the background color would always be correct no matter what else I’d been doing. So the colors needed to be set on login, which is easy, and also whenever ssh exits (I suppose I should also account for telnet/rsh, but I really don’t want to). To that end, I decided to make the colorization a separate function so it could easily be reused.

It’s also worth noting that this was designed for a system running bash-3.x. If you’re in an entirely bash-4.x environment we can use an associative array, which works like a perl hash. Since I still deal with a lot of RHEL5 systems, which still uses bash-3.x, I had to code to it. To account for this I had to do some odd stuff with the array that basically amounts to magic.
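The bash-3 workaround stores key:value pairs as plain strings in a regular array and splits them with parameter expansion. The “magic” boils down to two expansions:

```shell
pair="node3:#330033"
host=${pair%%:*}    # strip the longest suffix starting at ":"  -> "node3"
color=${pair##*:}   # strip the longest prefix ending at ":"    -> "#330033"
echo "$host $color"
# -> node3 #330033
```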

Without further ado, below is everything you’ll need to add to your ~/.bashrc file to make this garbage work. With any luck, the comments are sufficient to explain.

function setcolor {
# Set up the colormap using hex codes for the colors
  colormap=( "node1:#330000"
             "node2:#003300"
             "node3:#330033"
             "node4:#333300"
             "node5:#FF0000"
             "node6:#0000FF" )
# Generate my own short hostname, i.e. turn node1.example.com into node1
  short=`echo ${HOSTNAME} | sed "s/\..*$//"`
  color="#000000" # Set default color to black
# Iterate through the colormap looking for the hostname. Also, some bash magic.
  for host in ${colormap[@]}; do
    if [[ ${host%%:*} == ${short} ]]; then
      color=${host##*:}
    fi
  done
# Wrap the color in the xterm escape sequences to set the background color
  echo -ne "\033]11;${color}\007" # Set the background color
}
# Only run the setcolor function if we are using xterm or xterm-color as our termtype
if [ "$TERM" = "xterm" ]; then
  export TERM="xterm-color"
  setcolor
elif [ "$TERM" = "xterm-color" ]; then
  setcolor
fi
# Replaces the shell title with the name of the host we are sshing to
function ssh {
  echo -ne "\033]0;${1}\007"  # Set the terminal title to the host we are sshing to
  /usr/bin/ssh "$@"
  setcolor  # Once ssh exits, reset the color back to what it should be
}
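The heavy lifting in setcolor and the ssh wrapper is done by xterm’s OSC (Operating System Command) escape sequences: OSC 11 sets the background color and OSC 0 sets the window title. Here’s a standalone sketch that builds one of these sequences and prints it with the non-printing bytes made visible:

```shell
# Build the OSC 11 "set background color" sequence: ESC ] 11 ; <color> BEL
seq=$(printf '\033]11;%s\007' '#330000')
# cat -v renders ESC as ^[ and BEL as ^G so you can see the raw bytes
printf '%s' "$seq" | cat -v
echo
```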

I’m researching how to best build a new small office network for a client. It’s pretty much a greenfield project, and whatever decisions I make will have long-lasting effects for the organization. There needs to be a core server that holds all the primary roles: VPN, firewall, file and print, directory services, etc. That sounds like Microsoft SBS to most people, but I’m not set in my ways. I’ve looked at the myriad of FOSS-based SMB server platforms, including products like Zentyal, Untangle, Artica, and even Amahi. The one title that seems to stand out among them all is ClearOS (the former ClarkConnect Linux).

As I’ve been diving deeply into the ClearOS literature, I found a portion of their website that includes a picture gallery. Of course, the usual galleries exist of the ClearOS folks mixing at industry events. Some of the galleries are provided by their adoring users. You’ll see office space, screenshots — the usual fare of user pictures.

One gallery is from a guy who made his own water cooled ClearOS server. In fact, he went hardcore and decided to make his own waterblock.

Okay, that’s cool. A guy makes his own custom waterblock. So what do you house that motherboard and waterblock in? Oh, don’t be so closed-minded. “In” is so not the open source way. How about screwing it, the power supply, and the hard drive to a piece of wood and hanging it on the wall!

But wait – it’s a water cooled system. The water has to come from somewhere. If you look closely, the tubes go into a reservoir on the bottom left side of the Tux mount. But, where does it actually come from? Does a person have to remember to fill the water up in that reservoir? Of course not! The only logical way to get water into your water-cooled system is to…

…drill a hole into the bathroom on the other side of the wall and use the water out of the toilet!!

That’s just…

…what the…

…I don’t even.

I’ll just leave the full gallery here.

EDIT: As has been pointed out by commenters Bryon and chx, it’s likely that the toilet’s reservoir is merely being used as a heat exchanger and not as a source of water for the cooling system. Photo 152 shows that this is likely a closed loop system. However, the murky water in the hoses still creeps me out.

EDIT 2: Nope, it now looks like the water in the system is being supplied by the toilet reservoir after all (thanks Kory!):

If you’ve seen anything crazier, let me know in the comments below. If you provide pictures, you can have the next blog post. =)

In closing, I think a Paranoid Parrot is in order:

Over the years, I have had to manage some client-facing servers that run Parallels Plesk on top of a Linux OS. I often need to see exactly what version and build number is running on the server. Using a shell, there are two ways to determine what version of Parallels Plesk is running.

Method 1:

cat /usr/local/psa/version

The version file is a simple ASCII text file that has information on the currently running version of Plesk. In my case, the result looked like this:

10.3.1 CentOS 5 1013110726.09
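If a script needs just the bare version number, the first whitespace-delimited field of that file is enough. A sketch run against the sample output above (on a live Plesk box you’d read /usr/local/psa/version instead of echoing a string):

```shell
# Parsing the sample string; on a real system:
#   version=$(awk '{print $1}' /usr/local/psa/version)
version=$(echo "10.3.1 CentOS 5 1013110726.09" | awk '{print $1}')
echo "$version"
# -> 10.3.1
```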

Method 2:

rpm -q psa

This method queries the RPM database for the psa package, which is Parallels Plesk. In my case, the result looked like this:

psa-10.3.1-cos5.build1012110718.17

Torn from the pages of “This is so simple I forget it even after doing it a hundred times.”

UPDATE (because I’m a noob):

As is shown in the comments by Scott Pack and Kenny Rasschaert, my hacktastic way of finding a CentOS machine’s specific release isn’t the best way to do things. It’s best to check the release package:

rpm -qa | grep release

Of course, as Chris S of ServerFault fame points out below, uname -a is useful as well. It shows the build number of your OS, which might not be quite as easy to read as searching your rpms. An example from my laptop:

uname -a
Linux Fedora1530 2.6.35.14-106.fc14.i686.PAE #1 SMP Wed Nov 23 13:39:51 UTC 2011 i686 i686 i386 GNU/Linux

One thing to note from Scott of the Pack clan (Information Security Expert of Renown): “one of the first things I do during an investigation is try to figure out what distro I’m looking at. I usually check out /etc/issue, /etc/*release*, the package name, and uname. Mostly just to figure out if they all agree.” Thanks Scott!

My original, ignoble method of finding my CentOS version was this:

cat /etc/redhat-release

However, some will contend that this method is not foolproof and that some RedHat based distributions change the release file’s name. A more robust method is as follows:

cat /etc/*release*
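Putting Scott’s advice into practice, here’s a short loop of my own that dumps whichever of the usual identification files exist, so you can compare them side by side (the file list covers the common cases; adjust to taste):

```shell
found=""
for f in /etc/os-release /etc/redhat-release /etc/lsb-release /etc/issue; do
  if [ -r "$f" ]; then
    echo "== $f =="
    cat "$f"
    found="yes"
  fi
done
```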

While auditing and revising the backup policies for some servers, I came to an older file server that I hadn’t had significant contact with in a while. I knew I had made a volume mount point from one volume to another volume, but couldn’t remember where it was.

Like a good SysAdmin, I documented it in a private wiki so the information was a simple click away. However, it struck me that I should know how to list all volume mount points on a Windows Server 2003 or 2008 box. Who knows? Maybe I made a couple of extra VMPs and forgot about it.

If I had development skills, I could use some of the valuable MSDN content to create a small app to enumerate mounted folders that uses the calls “FindFirstVolumeMountPoint”, “FindNextVolumeMountPoint” and “FindVolumeMountPointClose”. However, as far as any kind of programming is concerned I couldn’t hack my way out of a wet paper bag.

The Windows 2000 Resource Kit has three tools that deal with volume mount points or “Junction Points”. These tools also work on Server 2008 and are included by default, at least on my Server 2008 R2 machine. Those tools are: linkd.exe, mountvol.exe and delrp.exe.

Mountvol, when used by itself with no switches, will first show the command’s help text and then list all volumes that are available on the system, as well as what junction points connect to each volume:

[Screenshot: mountvol output listing each drive letter and any folder paths mounted beneath it]

Notice that all my drive letters are listed, but underneath one of them, the E: drive, is a path to a folder that resides on the C: drive. This is the location of the index files for my backup program. Super, so now I can see all the junction points that are associated with a drive letter! As I suspected, there was only one junction point.

It should be noted that I also found a scripted way of enumerating volume mount points. The script is from The Scripting Guys and is located at this Microsoft TechNet link. Note that it does not work for Windows Server 2008, but supposedly does for Windows Server 2008 R2. I’ll reproduce the script here for your benefit:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery _
    ("SELECT * FROM Win32_MountPoint")

For Each objItem In colItems
  WScript.Echo "Directory: " & objItem.Directory
  WScript.Echo "Volume: " & objItem.Volume
  WScript.Echo
Next

When pasted into Notepad, saved as a .vbs file, and run with cscript, the output is similar to mountvol’s, but not as well formatted.
