This is Scumbag Steve, for the uninitiated (AKA non-Redditors).
Certainly this is a most basic task, but I am a most basic Linux user. There will come a time when you want to find out what filesystem a partition has on it. There are two types of partitions: those that are mounted and those that are not. I’ll deal with those two main categories and the different ways you can handle them.
To find the filesystem type of a mounted partition, use the GNU df command with the -T option. Why did I specifically mention that the command has to be the GNU variety? Because the granddaddy UNIX version doesn’t have the -T option which outputs the filesystem of the partitions. If you’re running HP-UX you’re most likely out of luck with the df command. Here’s the output on a CentOS VPS I tinker with:
root@myserver [/]: df -T
Filesystem   Type   1K-blocks     Used  Available Use% Mounted on
/dev/sda1    ext3    30963708  6571772   22819072  23% /
none         tmpfs     393216        0     393216   0% /dev/shm
/usr/tmpDSK  ext3      495844    13065     457179   3% /tmp
You can also simply run the “mount” command without any options (which is in reality running the -l option to list all mounted filesystems). Here’s the output of the “mount” command from the same VPS as above:
root@myserver [/]: mount
/dev/sda1 on / type ext3 (rw,usrquota)
proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
Furthermore you can choose to list out only those mounted filesystems of a certain filesystem type using the -t option. For example:
root@myserver [/]: mount -t ext3,tmpfs
/dev/sda1 on / type ext3 (rw,usrquota)
none on /dev/shm type tmpfs (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
The above is the same as performing mount | egrep "ext3|tmpfs"
fdisk is a scary thing to wield when you realize the power that lies within. However, the -l option puts a ring in its nose so you can lead it around harmlessly. The -l option lists out a ton of information about the partitions it finds. So much so that it can become a bit overwhelming to parse through. You’ll be happy if you know a bit about how to use grep. Fortunately, the server I’m using in this example doesn’t have many partitions:
Disk /dev/sda1: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda1 doesn't contain a valid partition table

Disk /dev/sda2: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda2 doesn't contain a valid partition table
You could cat out the contents of the mtab file to see the status of currently mounted filesystems, including their filesystem types.
root@myserver [/]: cat /etc/mtab
/dev/sda1 / ext3 rw,usrquota 0 0
proc /proc proc rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
none /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
/usr/tmpDSK /tmp ext3 rw,noexec,nosuid,loop=/dev/loop0 0 0
/tmp /var/tmp none rw,noexec,nosuid,bind 0 0
The /proc/mounts file is very similar to the ‘mtab’ file, but is supposedly more up to date. In fact, some people symlink /etc/mtab to /proc/mounts. Cat it out and compare it to the mtab file if you’d like.
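If you want to see how close the two really are on your box, a quick comparison is easy. This is just a sketch assuming a typical Linux layout where both files exist:

```shell
# /etc/mtab is maintained by the mount command; /proc/mounts comes straight
# from the kernel, so it's the more authoritative of the two.
diff /etc/mtab /proc/mounts

# Pull just the filesystem type of the root mount from the kernel's view:
awk '$2 == "/" { print $3 }' /proc/mounts
```

On a system where /etc/mtab is symlinked to /proc/mounts, the diff will come back empty.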
If you have a drive that is not mounted, and want to know what the filesystem is, short of using some forensic analysis tools, you’ll need to actually mount the device. When given a block device and not passed any filesystem types, mount is run in ‘auto’ mode and will attempt to discern what filesystem is on the device. Apparently it will first try to mount it as one of the filesystems in /etc/filesystems. If that fails, it then tries all filesystems that are located in /proc/filesystems (quite an array!). The /proc/filesystems file is all of the filesystems that the kernel knows about including any modules that are loaded.
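The auto-detection dance looks something like this (the device name /dev/sdb1 below is hypothetical; substitute your own unmounted partition):

```shell
# With no -t flag, mount runs in 'auto' mode: it tries the types listed in
# /etc/filesystems first, then falls back to everything in /proc/filesystems.
# (Hypothetical device -- don't run this blindly.)
#   mount /dev/sdb1 /mnt

# List every filesystem type the running kernel currently knows about;
# "nodev" marks pseudo-filesystems with no backing block device.
cat /proc/filesystems
```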
Once the drive is mounted, you can then run any of the above commands to find its exact filesystem. If mount wasn’t able to discern what filesystem was on the drive, you’ll need to perform some kind of offline analysis of the drive, which is beyond the scope of this post.
If you have a disk that usually auto-mounts but is not currently connected, you can still check to see what the filesystem is at least expected to be when it is available to mount. Look in the fstab file:
root@myserver [/]# cat /etc/fstab
/dev/sda1    /         ext3    defaults,usrquota              1 1
/dev/sda2    swap      swap    defaults                       0 0
none         /dev/pts  devpts  gid=5,mode=620                 0 0
none         /dev/shm  tmpfs   defaults                       0 0
proc         /proc     proc    defaults                       0 0
/usr/tmpDSK  /tmp      ext3    defaults,noauto                0 0
/tmp         /var/tmp  ext3    defaults,usrquota,bind,noauto  0 0
Notice that this only works if the disk that you are interested in is set to auto-mount when your Linux machine boots. If you look above at my example, my sda2 partition is in the fstab file but not the mtab file. Sda2 is not connected but I can still tell what the filesystem is expected to be when it is available to mount; the swap filesystem.
That’s all of the ways that I’ve been using to find out a partition’s filesystem type. I’m still a bit skittish about using fdisk and cfdisk to check filesystems, but I tend to shy away from using potentially destructive commands for information gathering. =) I also heard it was possible to use ‘blkid’ to find a block device’s filesystem, but was not able to get that to work. What other ways do you know? Add your preferred methods in the comments below and I’ll periodically update the blog post with the information.
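For what it’s worth, here is how blkid is usually invoked. Since I couldn’t get it working myself, treat this as a sketch: it demonstrates the idea on a throwaway ext2 image, because blkid reads filesystem signatures off the device itself and so works on unmounted partitions too. On a real system you’d point it at a device such as /dev/sda1, usually as root:

```shell
# Build a small ext2 filesystem inside a regular file, then ask blkid what it is.
dd if=/dev/zero of=/tmp/blkid-demo.img bs=1M count=8 2>/dev/null
mkfs.ext2 -F -q /tmp/blkid-demo.img   # -F forces mkfs to work on a non-block-device
blkid /tmp/blkid-demo.img             # reports UUID and TYPE without mounting anything
```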
Recently I had a little issue with my laptop’s filesystem. I’m running Fedora 14 with ext4 partitions and had to run fsck to clear it up. Or at least, I had to run some mutation of fsck. The plethora of options that were available to me made my head spin. Let’s take a look at the typical options that are available to someone running a Red Hat implementation of Linux that uses an ext file system:
I know that I have an ext4 filesystem, so I probably want to stay far away from the other fscks, right? Let’s find out.
Before I go any further, let me disclaim my findings, assumptions and conclusions by saying that as of the writing of this post I am new to the Linux operating system. I have only been running it as my main OS for about six months with sparing exposure to it for just a few years prior to that. While I’ve had some great teachers and resources to draw on, I’m still a Linux nublet and what I am about to say may or may not be entirely accurate. I’ve done the best research that I can do at this point in my career, but if anyone has better insight into the topic please straighten me out in the comments below, an email or a blog post.
Furthermore, my findings are those of someone running a Red Hat / RPM based OS. Red Hat seems to do their own thing sometimes (great, I’ve switched from a Microsoft OS to the Linux version of a Microsoft OS!) and that can cause some huge YMMV moments.
If you tie your system into a knot as a result of anything I’ve said in this post, I’m truly sorry about that, but you have been warned. Your mileage may vary, read the fine manual and the picture on the box is enlarged to show texture.
Or at least, that’s what it felt like as I tried to unravel this mystery. Let’s take a look at some of the evidences that I discovered in my search to find out the difference between the various fsck commands.
The first thing to do is find which programs are identical. There’s no point in doing any other comparisons on programs that are duplicates. e2fsck, fsck.ext2, fsck.ext3 and fsck.ext4 are all hardlinks to the same inode. They are the same file. However, fsck is a different file with no hardlinks. It’s its own command that can be found nowhere else.
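You can verify the hardlinking yourself with ls -i or stat. On the real system the check would be ls -li /sbin/e2fsck /sbin/fsck.ext2 /sbin/fsck.ext3 /sbin/fsck.ext4; the throwaway files below just demonstrate the mechanics:

```shell
# Hardlinked files share one inode number, and each shows a link count > 1.
tmp=$(mktemp -d)
touch "$tmp/e2fsck"
ln "$tmp/e2fsck" "$tmp/fsck.ext4"     # hard link: two names, one inode
stat -c 'inode=%i links=%h %n' "$tmp/e2fsck" "$tmp/fsck.ext4"
```

If the inode numbers match, you’re looking at one file under two names.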
When referring to the family of hardlinked fscks, I’ll simply refer to e2fsck alone since that seems to be the more common command that can be found on all Linux distributions. fsck.ext[2-4] seem to be Red Hat permutations that are included for policy reasons. Something about incompatible binaries.
None of the fsck commands listed above are symbolic links, however I found two commands in the extended fsck family that are. I’ll include them here for thoroughness. fsck.msdos and fsck.vfat are both symbolic links to dosfsck. Fortunately for me, I’m not going to be bothering with those mutations of the fsck gene pool.
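Symlinks are a different beast from hardlinks, and readlink is the tool for spotting them. On the real system, readlink /sbin/fsck.msdos should point at dosfsck; again, throwaway files demonstrate the idea:

```shell
tmp=$(mktemp -d)
touch "$tmp/dosfsck"
ln -s dosfsck "$tmp/fsck.vfat"        # symbolic link: its own inode, points at a name
readlink "$tmp/fsck.vfat"             # prints the link target, not the file contents
```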
In spite of some clarity being added to the situation by the above link sleuthing, some confusion is injected when looking at fsck’s command help:
[Me@MyPC ~]$ fsck help
fsck from util-linux-ng 2.18
Usage: fsck.ext4 [-panyrcdfvtDFV] [-b superblock] [-B blocksize]
                 [-I inode_buffer_blocks] [-P process_inode_size]
                 [-l|-L bad_blocks_file] [-C fd] [-j external_journal]
                 [-E extended-options] device
Wait, did you see what I saw? “Usage: fsck.ext4” – okay, so why is fsck apparently fsck.ext4 in disguise? Why, if fsck isn’t hardlinked to anything, does it seem to think it’s fsck.ext4? As of this post, I don’t have an answer to those questions. My only clue is from the answer to a question I asked over at unix.stackexchange.com.
Apparently Red Hat based distributions are a bit unique as a result of RH’s insistence on consistency and compatibility. (Or as I seem to recall a Slackware fan saying once: “If you learn Red Hat you know Red Hat. If you learn Slackware you know Linux.”) Nothing in that Stack Exchange post said anything about fsck being replaced by Red Hat’s preferred binary, but the situation leads me to believe that Red Hat may have replaced the fsck command with their own preferred version. Perhaps. This is all very foggy to me and I’m still seeking answers.
e2fsck’s help simply states that it is… well… e2fsck. At least that’s not an additional quandary to have to figure out.
In essence we only have two fscks to deal with: fsck itself and e2fsck. (I’ll ignore the strange “fsck thinks it’s fsck.ext4” drama for now.) Diffing the two man pages comes up with some interesting information. Here are some points that I came away with from the comparison:
I also noted that the fsck binary is considerably smaller (30K) than the e2fsck binary (190K).
Let’s get one thing straight: fsck and e2fsck (and thereby any fsck.ext* permutation) can handle ext2, 3 and 4. So I feel confident in saying that you’re safe using either command on any ext-based filesystem. If you need to muck about with superblocks, e2fsck seems to be the tool for you. In fact if you need any of the other features that e2fsck uses, then you know what to use. Which features are those? You’ll have to research those differences on your own.
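If you want to see either command in action without risking a real disk, you can fsck a throwaway image. This is a sketch using common flags: -f forces a check even if the filesystem looks clean, and -n answers “no” to every repair prompt so nothing gets modified. The same invocation works against a real (unmounted!) device such as /dev/sdb1:

```shell
# Build a small ext2 image and check it read-only.
dd if=/dev/zero of=/tmp/fsck-demo.img bs=1M count=8 2>/dev/null
mkfs.ext2 -F -q /tmp/fsck-demo.img    # -F: allow mkfs on a regular file
e2fsck -f -n /tmp/fsck-demo.img       # exit status 0 means the filesystem is clean
```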
Personally, I now use e2fsck for everything ext related. It seems to be the best tool with the most options that is linked to by most other ext based filesystem commands.
What’s the deal between all the different hard links and the help of fsck showing that it thinks it’s fsck.ext4 (which is really e2fsck)? I have no idea. If you have better insight into the situation, I’d be very grateful for any kind of clue thrown my way. If you’d like to, you can head on over to the unix.stackexchange.com question that I asked and possibly garner some points for an additional answer. Or you could ask your own question and then answer it (since this article goes in a different direction than that specific question). Of course guest posts on my blog are welcomed or links to your own blog post on the topic are available.
Oh, and I’m rather proud that I made a whole post about fsck and didn’t once make a tawdry joke implying its visually similar cousin. =)
In Windows Powershell, deleting items with Remove-Item causes a confirmation prompt to stop a script from functioning. The prompt says:
The item at [path] has children and the Recurse parameter was not specified. If you continue, all children will be removed with the item. Are you sure you want to continue?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is “Y”)
Run Remove-Item with the -recurse switch.
Remove-Item C:\path\to\file -recurse
Now your script will run with no intervention necessary.
There is considerable confusion about how to suppress confirmation prompts with Remove-Item. It’s rather silly since the solution is right in the confirmation wording (I am guilty of being silly since I didn’t see that at first either). Here are some false ways of performing this task along with why they are false:
Using the -Confirm parameter.
Some people will suggest that you use the following line:
Remove-Item c:\path\to\file -Confirm:$false
However, -confirm is set to $false by default, and furthermore it has nothing to do with the warning above. The -confirm parameter “prompts you for confirmation before executing the command.” In the above scenario, I’m not being prompted before running the command; I’m being prompted to confirm the deletion of a file that has child objects.
For more information about the -confirm parameter, run Get-Help Remove-Item -full in a PowerShell prompt.
Using the -force switch
This does not “force” the Remove-Item cmdlet to delete files in the face of a confirmation prompt. It forces the deletion of hidden and read-only items.
I’m a closet storage geek. I don’t have a lot of clients that need really cool storage products, but I wish I did. When I can, I try to read up on as much information about the storage world as possible. I’ve come across three fascinating players in the flash storage world that any SysAdmin should keep their eye on.
Pure Storage has a rather daring claim. Their arrays are claimed to be ten times faster, smaller and more power efficient than disks.
The product touts its software as much as (if not more than) its hardware. The software is called the Purity Operating Environment and performs global deduplication, compression and thin provisioning to make data storage more efficient. From their site:
Purity is a fully-virtualized storage operating environment, which abstracts individual flash devices into a single unified storage pool and optimizes data placement across the pool. Purity’s data layout is aligned with the erase block size of the flash, reducing flash write amplification to extend flash life and improve performance. Moreover, Purity’s data structures are “append only”, meaning that all writes (new data, updates, parity re-builds, recoveries) are coalesced into write segments that are always placed somewhere new, improving performance and extending flash life. Finally, Purity implements a set of active background flash management services (wear leveling, deletion management, performance optimization, integrity/health checking and automatic healing) across the global pool to ensure the reliability of both the data and the underlying flash.
As of this post they only offer two appliances. A 2 controller / 2 storage shelf version and a 1 controller / 1 storage shelf version. Pictured below is the single controller / shelf model:
The Pure Storage FlashArray is built on a flexible, scalable, redundant, highly-available hardware architecture, designed to allow Pure Storage solutions to scale from single application to consolidated cross-data center deployments. The Pure Storage FlashArray implements a node-based design, with clustered controllers and storage shelves. This allows for the independent scaling of storage performance (controllers) and storage capacity (storage shelves). Configurations can range from 10s to 100s of usable TBs of flash storage, for both HA and non-HA configurations.
The Pure Storage CEO is none other than Scott Dietzen former president and CEO of Zimbra. The CIO is John Colgrove who worked for Veritas. Also on board is Michael Cornwell who worked on flash chips for the iPod and iPhone while at Apple. He has also worked on flash-based products at Sun.
Pure has attracted some important investors including Mendel Rosenblum and Diane Greene, the couple who founded VMware. Also funding Pure Storage is Greylock Partners which brings Pure Storage’s total amount of VC funding to about 50 million dollars.
I foresee big things for Pure Storage and hope to work with some of their equipment some day.
Fusion-io doesn’t actually make server appliances. They make flash devices that interface with a computer through the PCI bus. They make three major hardware products, the ioDrive, ioDrive Duo and ioDrive Octal (the latter shown below).
The Fusion ioDrive can supply 160GB (at 123,000 Mixed IOPS [75/25/r/w]) to 640GB (at 74,000 Mixed IOPS [75/25/r/w]). The ioDrive Duo can supply 320GB (at 238,000 Mixed IOPS [75/25/r/w]) to 1.28TB (at 150,000 Mixed IOPS [75/25/r/w]). Finally, the ioDrive Octal can supply 5.12TB at 729,000 75/25 Mixed IOPS (512 B).
Fusion-io went public in the summer of 2011. They have been invested into by Samsung and are working with the major flash chip manufacturer Toshiba.
They make a software platform called ioSphere that allows you the following features (taken from their website):
Also available is the software tool known as direct cache. From the Fusion-io website:
Fusion’s directCache transforms ioMemory into a transparent, auto-tiering, acceleration device to cache any block-based storage medium whether it is a disk array, SAN, direct attached storage or iSCSI target. directCache places the caching software in the server to deliver lower cache latency. With directCache, Fusion-io customers can have Terabytes of cache acceleration at their fingertips to speed performance of any backing store. This add-on module for ioSphere integrates tightly with Fusion’s Virtual Storage Layer, a flash-optimized OS subsystem, to deliver immediate application workload performance improvements.
They look like a great tool for anyone doing CAD work or video editing. I’m not sure about building out a server full of them though. Perhaps I’m spoiled by the pretty boxxen that most vendors will supply.
The former CEO of Fusion-io was Donald Basile. Why is that important to Violin Memory? Because he became the former Fusion-io CEO when he left to head up Violin Memory. Unlike Fusion-io, Violin Memory is focused on datacenter products. They have an impressive catalog of appliances; I won’t go into depth on each of them here, but will give a brief overview (using some or all of the text from their product website):
3200 Flash Memory Array - A redundant, modular 3U memory array that scales from 500GB to 10TB SLC NAND Flash
vCACHE NFS Caching - The NFS caching system is built on flash memory arrays in conjunction with their vCACHE software. These vCACHE NFS Caching systems increase the size of the caching available to applications, but also reduce the cost per GB of the cache by more than 70%. Unlike an internal cache, the vCACHE system can also support 200K operations per second. vCACHE systems enable the entire active data set to be stored in cache. This may be 5%, 10% or 20% of the total data stored in the filers. By caching the entire data set, the full application speed-up of 5x to 30x is enabled for both IOPS and latency. With caches of 1% or less, the speed-up is typically a small increase over the standard disk speed.
3140 Capacity Flash Memory Array - A redundant, modular 3U memory array that scales to 40TB of Capacity Flash. It scales to more than 500TB in a rack with performance over 1.5 Million IOPS. The Violin 3140 includes hardware-based flash RAID across hot-swappable memory modules to provide data protection and high-sustained IOPS.
SAN Attached - The Violin Memory Arrays can be clustered via PCIe with one or more Memory Gateways that provide connections via any combination of Fibre Channel (8Gb/s or 4Gb/s), 10 GbE (iSCSI or FCoE), or InfiniBand. A single Memory Gateway can support up to 4 Violin Memory Arrays, 400K IOPS and over 3GB/s. Through striping, each LUN on the system can get the full bandwidth and IOPS capability of the cluster. LUNs can range in size from 1GB to 120TB!
DRAM Array - Memory appliances made to provide a platform for provisioning DRAM as a large scale Tier-Ø storage infrastructure.
Violin provides “vCLUSTER Management” which is their storage management software to keep an eye on their products in your environment.
Flash storage is going to inevitably replace spinning disks, and it appears that with the above offerings, especially Pure Storage, it’s going to happen in this decade. Do you have any experience with the above systems? Or perhaps another upstart flash storage vendor? Let me know in the comments below.
Today I’m at the Phoenix, Arizona leg of Interface 2011. Once again, the event is billed as being in Phoenix, but it’s really somewhere rather far away from “Phoenix” proper. Technically it’s at Ft. McDowell at a Radisson Hotel that is adjacent to the Ft. McDowell Casino. For more information on the conference, check out the Event Details page.
There are three theaters each with their own track. So far, I’m liking the following selections:
After the keynote, I believe there will be the obligatory swag giveaways. I hope to score something this time to make up for bitter defeat at the AZVMUG event two weeks ago.
This post is auto-posting at 7:25AM Arizona time. Registration for the event starts at 9AM, but I’m arriving fashionably early at around 8:30 or so. Perhaps I can stake out the wifi and plan my attack on the vendors. More news as events warrant.
9:32PM – Yes, that’s PM. The conference was an absolute whirlwind. Sessions were jammed end to end with only ten minutes in between. Often speakers went over time and I had barely enough time to visit the restroom or get a quick chat in with a vendor of interest. There was a ton of vendors, and they were actually pertinent and interesting. At least the ones that I saw. I’ll attempt to recap what I learned. Vagueness is to be expected.
Enabling Mobile Unified Communications: Don’t go away! It’s not as crummy as it sounds. The first half was a marketing spiel about blah, some blah and even a bit of blah. However, the second half was given by a systems engineer who spoke specifically about Meru Networks’ wireless technology. I was not expecting a wireless network talk, but I was impressed with what I saw.
Apparently Meru Networks deploys WAPs all on the same channel, however they circumvent noise by some kind of timing mechanism. With a single channel and I believe cloned MAC addresses, among possibly other things, a deployment appears to clients as one access point and only one access point no matter where they go. This takes the burden of client migration among WAPs off of the client and onto the wireless infrastructure. It allows clients to be handed off to new WAPs when signal starts to merely degrade, rather than when the signal drops. All in all, it looked like a technology worthy of looking deeper into.
Of interest is this bragging video of them showing 500 wireless clients streaming 100Mbps and then reassociating with the network within 3 minutes of a total WAP reboot. The engineer was quick to point out that the latest product can do it in 90 seconds.
PCI Myths and Mistakes. The talk was well done by a PCI QSA from Accuvant. I was impressed by his continual insistence, and proof, that PCI DSS is not an IT issue. It is a process issue that uses technology at certain points. If the IT department is the one that has been handed the PCI compliance project at your company, then something is terribly wrong. We were introduced to the IT Unified Compliance Framework, a conglomerate compliance framework that covers many of the different standards, PCI being one of them. ISACA was also given a nod for having different “feature matrices” that compare the standards to each other, so that if you’ve got PCI covered, you know what few things remain to gain compliance with a different standard.
SIEM 2.0 – See What You’re Missing. Jim Schaeffer, president of JCS and Associates (which is basically a system integrator), spoke. The talk was nebulous, and I wasn’t sure what was being sold or what was being talked about. This wasn’t a talk about SIEM as a concept. This was what appeared to be a product rundown of several, seemingly unrelated products, that were offered as an integrated bundle on an appliance. The appliance then collected, stored and indexed all of the logs that those products generated. The SIEM part of the talk was about how all of the logs that each of the major tools created were cross referenced in a superior way so that you could have better insight into your network. Basically, they take software products that they prefer and have proven and then make sure that the various security products have common threads that allow them to be tied together with the infamous “single pane of glass.”
Their hardware appliance was advertised to take all logs and correlate the data, rather than simply storing only the small percentage of events that seem to be security related. They take different products, integrate them onto the appliance, and add their own high-level view.
The first product used in their appliance is Safend, a software suite for Macs and Windows that allows you to monitor physical ports and data ingress and egress, as well as encrypt information on media. It can also show who is on the network, with which device, and on which wireless network. It’s quite an array of monitoring and controlling features. It’s one agent that is installed, and the various flavors and features are licensed with a key.
Ctera is next on their list of integrated products on their appliance. It’s a cloud storage system that is based on heavy, end to end encryption. You have a local store and then a cloud store. You may have heard of the Ctera CloudPlug. They also make larger devices for SMBs.
Next, the appliance tackles the issue of desktop management. It uses Pano Logic devices as a next-gen thin client. The buzzword is that it’s a “no client”: no moving parts, no storage, no CPU, no OS. It’s not a traditional PC in the way a thin client is just a stripped-down PC. They’re little cubes that are simply a place to plug in a video cable and four USB devices. They’re best used in networks with sub-10ms latency. For offsite use of datacenter VDIs, there’s a serialized USB thumbdrive that you can plug into any PC, no matter how hosed it is with virii, and it will connect to the VDI instance as a secure session. It relies on a hypervisor like VMware, Citrix and/or Hyper-V. Pano Logic uses its own connection broker instead of VMware View / Citrix XenDesktop. The DVM provisioning and hypervisor are the same as any other VDI implementation.
The last product is MailScape, a monitoring solution for Exchange, Active Directory, ActiveSync and BES. Purportedly better than SCOM.
Basically, this session wasn’t about SIEM as much as it was a pitch for JCS and Associates’ own hand-rolled appliance mash-up. It looks decent. It really does. It takes these products, ties them together and allows for some seemingly in-depth reporting in a single place to facilitate SIEM so that you can make sure everything has been working to your specifications. However, this wasn’t anything that was precisely groundbreaking. It’s a mash-up. A good looking mashup, but still… it wasn’t about SIEM as much as it was about an appliance that provided a ton of features and then had a SIEM component in it.
12:15 to 2:45 FOOOOD! Gabbing. Vendor surfing. I ate a hearty meal, robbed the goodie bar and then sat around and talked with some other participants. I spoke to two in particular that had different companies with symbiotic relationships between them. They were local IT, security and etc. providers. We shared stories (some really, really wild ones that shall not be reprinted) and business tips. I learned a lot from them. I then vendor surfed, dropped business cards, asked for info and again learned a ton about the local business climate.
Keynote: Chris Roberts of One World Labs gave a security / hacker themed keynote. He was witty and a good presenter. The presentation was ho-hum with plenty of the same “Oooo” and “Ahhhh” stories of “hacking” exploits that could have crippled businesses, enterprises, prisons, power plants, military installations and the like. I’m not taking anything away from Chris, however, these kinds of talks are a dime a dozen among security conferences. Then again, nothing original was promised, so that’s fine. It causes one to stop and think about their own security practices. It was a good refresher talk.
You can see a few minutes of the same talk given at an earlier conference here:
You can see him in a drama filled video complete with crying women and violins here:
Afterwards there was a raffle for tons of vendor prizes. I won a Nintendo Wii that included an extra wand, an extra “Nunchuk” and a copy of AMF Bowling. I was going to sell it on eBay for a few dollars, but decided to give it to my mom to play with. She’s been eyeing one for a while now.
All in all it was a fine vendor driven conference. Nothing groundbreaking, nothing terribly disappointing. I learned about new vendors, was re-acquainted with some old ones and got a Wii. What more can one ask for?
Want to know how to find your Windows Product ID? I won’t tell you right away. Keep reading and I’ll clear up some common misconceptions that you might not know you have.
Recently I dove into the topic of how to discern what installation media was used to install Windows. It’s possible to find that information out using the Product ID number. The search engine results for anything Windows Product ID related were disconcerting. There is a lot of confusion around what the Product ID is. Before I show you how to find the Product ID, let me tell you what it is and what it isn’t.
The Windows Product ID is not your Product Key (also known as your License Key). The Product ID is not the code that you type in to install Windows. A Product / License Key looks like the following:

XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
That’s five sets of five numbers separated by dashes. To reiterate, the Product ID is not the above number. The above number is the Product Key (aka License Key).
The Windows Product ID is a 20-character number that follows this form:

00000-000-0000000-00000
That’s a five digit number, followed by a three digit number, followed by a seven digit number and finally a five digit number. The Product ID is a number that is generated based on the Product Key (the thing that you pay money for and can install Windows with). The Product ID is then combined with a “Hardware ID” that is generated based on the types of hardware that you have in your PC. Those two things combine to form the Installation ID. When you activate Windows, the Installation ID is associated with the Product ID.
Apart from being an internal number that Microsoft uses to make sure your copy of Windows is genuine, it does have a few surprising uses. You can determine what installation media was used to install Windows from it. You can also figure things out like the Microsoft Product Code (MPC) for the installation which tells you the locale and even if it was an upgrade or not.
Finally we come to what you probably wanted to see all along. How to find the Windows Product ID. Ultimately it’s located at the following registry key:
reg query "HKLM\software\microsoft\windows nt\currentversion" /v ProductId
Notice that you must use quotes since there is a space in the key’s name and you have to run the command in an elevated command prompt. When you have your Product ID, you can then do some interesting things with it like learn what media your installation came from and if it was an upgrade or not.
Aside from that, the Product ID will probably never be something that you have to write down or keep track of.
EDIT: As commenter Brian points out, the media type for all Vista and beyond installations is the same. Media type was only different in XP and prior versions of Windows. The license key that was used to install Windows is what will now determine the channel ID. I’ve had a hard time tracking down documentation on this subject, so it’s a bit fuzzy. However, I still remain skeptical that the media files between OEM, TechNet and MSDN are completely identical in Vista and beyond. I have no proof of this though, and it remains to be tested if my suspicions are true.
Far too many times, I’ve troubleshot a Windows PC only to find that the image was made from media that did not match the license I was trying to work with. Unfortunately, I know many IT professionals who use MSDN or TechNet images in a pinch on production machines, rationalizing that “it’s the same bits, and I really do have the license for it, I just don’t have the right media at this moment.” That’s true, to an extent, but it’s still completely illegal and seems to carry a technical detriment at times as well.
While Vista and later theoretically use the same media regardless of channel (TechNet, OEM, Retail, etc.), I still have my doubts. Nonetheless, the license key used to install Windows is still very important. Many times I have suspected that a TechNet or MSDN license was used to activate Windows in a production environment, but had no way to discern the truth of the matter.
Was this PC installed from the MSDN image or license? Maybe an OEM disc that someone had laying around? Perhaps a Volume License image? I suspected that there was a way to tell, because in many instances certain Windows features didn’t behave like I thought they should when the image was from TechNet or MSDN. There seemed to be a way that Microsoft “just knew” that the image wasn’t from the media type that it should have been.
While I don’t know of any tell-tale signs deep in the Windows bits, I now know that there is a high-level way of discerning a Windows image’s origins, thanks to the ServerFault question “Which media was used to install Windows 7”.
I saw it and decided to launch an investigation. That very question had run through my mind many times, but I could never get to the bottom of it. In fact, after sifting through a mountain of search engine results trying to answer the ServerFault question, I still couldn’t find an answer. I favorited the question in the hope that someone would answer it in the coming weeks or months. Fortunately, I didn’t have to wait that long: the question’s author found the answer just a little while later.
The crux of the matter is the Windows Product ID and how to interpret its numbers. A Windows Product ID looks like this: 12345-123-1234567-12345. Note that the Product ID is not the Product Key; the latter is what you are essentially paying for when you buy Windows. Searching for information on how to find the Product ID turns up plenty of misguided articles that confuse the two. Here’s Microsoft’s way of finding the Product ID for some of the most popular iterations of Windows.
You can also find the Windows Product ID at the following registry key: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProductId
Oddly, I found the Windows Product ID at this seemingly unrelated key: HKLM\SOFTWARE\Microsoft\Internet Explorer\Registration\ProductId
The major source of information on how to interpret the Product ID number is a free tech support community (one I had not heard of before this topic came up) called LunarSoft, at their Windows Product IDs page. Searching around for other sources of Windows Product ID information shows that everyone seems to be gathering their information from there; even answers on Microsoft’s own support forums link back to LunarSoft’s Product ID page. If anyone knows where official Microsoft information can be found, let me know.
The part of the Product ID that matters for discovering which image was used to install Windows is the “Channel ID”: the three-digit number that is the second group in the four-group PID. In my case, my Channel ID is 292, but that number isn’t on the list at LunarSoft. Apparently, while LunarSoft’s list is great, it is a bit dated; see this forum thread that mentions the outdated nature of the list.
There is still some confusion, but apparently 292 stands for Windows Ultimate Retail, which stands to reason since my installation is Windows Ultimate installed from a disc I scored for free at an official Windows 7 launch party in Pittsburgh. The list of Channel IDs could use some confirmation, but I can’t find any official documentation on the subject. However, between LunarSoft’s Windows Product ID page and the forum thread over at MyDigitalLife, you should be mostly taken care of.
Once you have your Channel ID, compare it to either LunarSoft’s list or the MyDigitalLife forum post and you’ll have a pretty good idea of what media was used to install Windows. I’ll be on the lookout for any official and up-to-date documentation on the Channel ID in the meantime.
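Extracting the Channel ID is just splitting on the dashes and taking the second group. Here’s a sketch; the lookup table is deliberately tiny and illustrative, with 292 mapped per the discussion above (the real, fuller lists live at LunarSoft and MyDigitalLife):

```python
# Illustrative, partial Channel ID lookup. 292 = Ultimate Retail is from
# the LunarSoft/MyDigitalLife discussion, not official Microsoft docs.
CHANNEL_IDS = {
    "292": "Windows Ultimate Retail",
}

def channel_id(product_id):
    """The Channel ID is the second dash-separated group of the Product ID."""
    return product_id.split("-")[1]

pid = "12345-292-1234567-12345"
cid = channel_id(pid)
print(cid, "->", CHANNEL_IDS.get(cid, "unknown"))
# → 292 -> Windows Ultimate Retail
```

Any Channel ID not in the table falls through to “unknown”, which is your cue to go check the forum lists.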
Do you know of a better way? Have any insights on official documentation? Let me know in the comments below.
If, as an IT professional, you find yourself consistently using Microsoft Windows, you may be interested in the 2011 TechMentor conference happening in Las Vegas on October 10th through 14th. There are eight tracks and over fifty sessions to choose from. There’s even a VMware vSphere track tucked in there as well.
However, the reason I’m writing this post and rushing it out the door is that the Early Bird Special is about to expire. You can save $200 off the registration fee if you register before September 16th. That’s only a day away. Alas, I could have posted this sooner, but I allowed other things to get in the way.
Here are the available tracks with links to the full session list for each:
(Apologies to Hyperbole and a Half)