I have made two changes to my computer-based behavior in the new year that have reclaimed an enormous amount of time and changed the way I do things for the better. I’d like to share them here for the benefit of others, and to hear what you’ve done with similar results. Here are my two tips:

I Don’t Save Browser Tabs

It starts innocently enough. You have a project or an idea and you start machine-gunning the control-t key combo. New tab! New tab! New tab! You do some Google searches on various keywords to start your data gathering efforts. You middle or command-click the search result links to open those pages in new tabs. You scour those pages, absorbing their information and further middle / command-click links on them. Treesort on a Sandy Bridge Xeon couldn’t find the root node of your tab history before the heat death of the universe. In fact it would probably make a measurable contribution to the universe’s heat death.

Before you know it, your 8GB workstation is slower than an anesthetized tree sloth and your browser is hugging its knees while rocking itself in a corner. Even worse, you might have multiple browser instances, each with their own standing army of open tabs. “Yeah, Chrome window #1 has all my Exchange migration tabs. Chrome #2 is all about the Cisco problem I’ve been having. Firefox is for the MikroTik research and I think Safaris #1, #2, and #3 can be blamed on Reddit.”

Starting any web browser on your PC takes longer than booting Vista from punchcards. A browser crash that loses your tabs requires your co-workers to quickly recall how to use the portable defibrillator unit that HR put in the server room the last time vSphere had a major version upgrade.

I know all of the above well, because that person is me. Or rather, was me. Not anymore. I no longer save tabs. Closing a web browser loses all tabs and opening a web browser brings up a fresh, single tab populated with an innocuous home page (i.e. not Reddit). This has several positive side effects:

First, it forces me to collect information in a more orderly fashion. Every time I close a browser, I know that all of that information is gone for good. This has changed my information-gathering habits: anything I need to retain for any length of time now goes into a bookmark or into another tool like OneNote or Evernote.

I was a OneNote fan for a long time. I still am. However, I switched to Linux two years back, and then moved to a Mac a few months ago. The switch to Linux required a virtual machine to keep OneNote around, which was a little cumbersome. After moving to the Mac, I wanted to integrate with that OS more, so I switched to Evernote (yay cross-platform!). Doubly helpful is its cloud storage of notebooks, which I can see and use on many other devices.

Whatever you do or use, I would venture to say that not saving tabs in something as volatile as a web browser will get you to reconsider your main method of information storage. It will likely improve your research habits and make information easier to store, find, and annotate. It’s starting to for me.

Second, I’m using bookmarks better. Prompted by point #1, I now lean on the cloud bookmarking features of Safari and Chrome, and I’m making better use of browser bookmarking in general. An improper cataloging system can be a huge burden, just as bad as tabs that go on forever. However, I’ve found that research links are better organized by date, whereas links that are generally used and useful are better categorized by topic.

So the big pfSense project I’ve been agonizing over for the last few days? (Okay, weeks.) Those tabs and links get put in a dated hierarchy. I start with year, then month, then week or day. I can then think “I was working on pfSense… January. I think it was January,” and have a good shot at finding those links later. Putting my myriad pfSense links that are specific to the problem I’m having into a general “Networking > pfSense” folder would be futile and cluttering. In my experience, anyway. Your tastes might vary, and indeed my own preferences might change as I continue to use this method.

Nevertheless, using bookmarking better is a good thing, and in my opinion using cloud bookmarks is even better because you’re not out of luck if you tend to spread work out over multiple computing devices. I’ve got a laptop, a desktop, a phone and a tablet. Bookmark syncing can be very helpful in that scenario.

I’m Unsubscribing

There was a time when I used mailing lists and newsletters as a means of not having to remember things. If there was a product or company that I liked the looks of, I’d subscribe to their marketing newsletter to gradually learn more about them. Would that feature that could really help me really come out in the next six months like the sales guy said? (Ha!) What other products do they have that I didn’t know about? Are there little training tips that I’m missing out on? Maybe I just want to be reminded about this company a few times so I don’t forget them when I need their product.

(Believe it or not, that last reason has actually been a useful tool for me. I remember things based on repetition [as do most people], so if there’s a vendor / product that has the potential to be useful, I’ll sign up for their newsletter for the sole purpose of seeing their brand and products over the course of a month or two. Then, when I’m reasonably sure I’ll remember their name and products, I unsubscribe. I don’t think that habit needs to change, insofar as I remember to unsubscribe at an optimal time.)

I also used to sign up for general industry news outlets. SmartBrief, IEEE newsletters, Technorati, Gizmodo, Gartner, Forrester, TechCrunch, YouTube channels, yadda yadda yadda. There was a time when I would receive about 100 to 150 emails a day, most of which were glanced at for five seconds or less. Or worse – they were immediately discarded in annoyance. My reasoning, especially for the news briefs, was that I’d scan them for 30 seconds, see if there was anything I thought I should know about as an IT professional (or just an individual), and then read the article.

(Side note: That thought process also fed the browser tab problem I mentioned above. That SmartBrief on IT management? Three tabs would spawn. The top stories on LinkedIn? Two tabs. Gawker? Another two. Or twelve. Then I’d have, quite literally, 10 to 20 tabs of stories that I thought would enrich me. In reality, I’d recognize that it would take at least an hour to get through them all, and I couldn’t spare that amount of time for reading what amounts to virtual newspaper articles. So I’d skim them. The end result? My concentration discipline was weakened, I never deeply absorbed any of the information, and I got dumber.)

My email inboxes (yes, plural) would consume two or more hours of my day, and that wasn’t even counting actual correspondence with anyone. I was essentially just managing text files. Two horribly unproductive hours. Two shameful hours of not just being unproductive, but getting stupider. I was a vile, filthy, wretched, untoward worm of a… well, okay, I just let a bad habit get worse and didn’t question it until it got very wasteful.

Summary

Question your use of time. In fact, why are you reading this? Should you be doing something else? I’m not trying to nag you like your mom, but am I actually helping you? Is that my blog’s general tendency or do you often think “Why did I just drag my eyeballs across that?” If you get this blog in your inbox, consider unsubscribing. If it’s in your RSS feeds (I stopped reading my RSS feeds long ago for similar reasons), make sure I’m one of the more useful ones. If you’re on my site, reconsider coming back.

Seriously.

If it’s not helping you, it’s hurting you. If I’m not consistently enriching you, get rid of me. Same goes for anything else. Tabs, emails, and more. Ditch it.

Any other helpful hints to gain time back? Even during the writing of this post I thought of a few other time vampires that I could write about, but I didn’t want to make the post any longer than it already is. Let me know in the comments below, and feel free to contact me if you want to write a post about it and share it with a broader audience. Or write on your own blog and I’ll link to the post. Time is a limited resource – use it wisely.

If you don’t know about Talentopoly, well, you should! It’s a growing community of sharp technical workers from all walks of IT life. There’s a vibrant culture of sharing great articles, and through it I’ve discovered quite a few blogs, websites, and resources that I would never have come across on my own. There is also a place to ask questions and start discussions.

A relatively new feature of the site is their job board. Think of it like Craigslist with a vetting process that weeds out the jokers. On the job board you can find both full-time jobs as well as smaller “gigs” that can allow someone who already has a job to pick up some side work. In their own words:

We want a place where developers & designers can find full-time jobs and paid gigs on the same site.

So we’re doing what any good hacker would do, we’re building it.

We’re personally vetting each job and gig listed and are committed to keeping the quality high and the noise low.

Yes, Talentopoly and their job board do tend to favor developers over administrators, but I believe that’s slowly changing as more administrators participate. It’s a new service, so there aren’t a ton of job postings there yet, but I thought you might like to get in on the ground floor. I’ve even considered turning to it when I needed some quick work done on a project. You can also get emails from their low-traffic newsletter that lets you know when new jobs have been posted.

If your employer is looking to post a job description, have them check out the Talentopoly job board where thousands of highly qualified developers and administrators congregate. They might fill their position without having to go through a long recruitment process with outside agencies.

Check out the Talentopoly job board, and let me know if you need an invite to the site. It’s still invite-only to keep the quality high. I’ve got 5 invites as of this posting and it’s first-come, first-served. If you’re interested, email me and I’ll send an invite to the address you email me from.

PowerShell is a powerful tool. It’s also very different from the cmd and VBScript way of doing things that we’ve suffered through since the beginning of time. About a year ago, I set out to rewrite our custom imaging scripts and AD automation tasks, which were mostly VBScript, in PowerShell. I took on this effort in part (ok, in whole) as an excuse to learn PowerShell. If you haven’t taken the plunge yet, I’ll introduce some basics and get you set up to start chugging along on your own.

First things first: you need PowerShell 2.0 and the Active Directory PowerShell module. Grab a copy of the Remote Server Administration Tools (RSAT) for your OS and architecture from the Microsoft Download site. After you install it, enable the Active Directory Module for Windows PowerShell feature. If you’re on Windows 7 or later, you’re done. If you’re on Vista, please accept my condolences and grab PowerShell 2.0 or later from Microsoft Update.
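
If you’d rather script it than click through the Windows Features dialog, DISM can enable the feature from an elevated prompt. A caveat: the feature name below is from memory and can differ between OS builds, so list what’s actually available first:

dism /online /get-features | findstr /i RemoteServerAdministrationTools

dism /online /enable-feature /featurename:RemoteServerAdministrationTools-Roles-AD-Powershell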

The next thing that you need is to be able to actually run the cmdlets from the AD module against your domain controllers. If you’re on 2008 R2 or later, this is enabled by default. If you’re on Server 2008 or 2003, you need to install the Active Directory Management Gateway Service on at least one Domain Controller in each domain that you wish to manage with PowerShell. Some of the prerequisites for this may require a reboot, so plan to do this during a maintenance window.

Ok, phew. So now we can actually start using PowerShell to manage AD objects! Fire up a PowerShell prompt and run import-module activedirectory. That will load the AD module and we can start having fun.
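
A successful import is silent, so here’s a quick way to confirm that the module actually loaded:

get-module activedirectory

If that returns nothing, revisit the RSAT feature installation above.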

Do you want to see all user accounts in your domain?  get-aduser -filter * will do the trick. You’ll notice that each user is presented like this:

DistinguishedName : CN=Some User,OU=My Users,DC=My,DC=Domain,DC=ORG
Enabled : True
GivenName : Some
Name : Some User
ObjectClass : user
ObjectGUID : blahblah-1111-111a-dddd-blahblahblahblah
SamAccountName : suser
SID : S-1-5-blahblahblah
Surname : User
UserPrincipalName : [email protected]

Whoa, that’s a lot of info! And guess what, it’s an OBJECT! That means that we can select individual properties or pipeline the whole damn thing. Pretty cool, right? Say that you just want the SamAccountName for all of the users in your domain. That’s what the select-object cmdlet is for.

get-aduser -filter * | select-object samaccountname

will spit out just the account name for each user we just saw in the previous command.

We can also save a set of objects to a variable for manipulation later. If you run $users = get-aduser -filter * and then type $users, you’ll see the whole output. This is useful when scripting PowerShell.
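
As a minimal sketch of why that’s handy, once the accounts are captured in $users you can slice them up without querying AD again (note that where-object filters on the client side, so in a large domain you’d push the condition into -filter instead):

$users = get-aduser -filter *

# How many accounts came back?
$users.count

# Just the disabled accounts, reusing the objects we already have
$users | where-object { -not $_.enabled }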

Let’s get to something useful, ok? What if we wanted to know a bunch of information about every account in the domain, including the last time it logged on and whether or not it’s enabled? What’s that? You want it in a CSV so that you can poke around at it in Excel? That’s fine.

Get-ADUser -Filter * -Properties name, samaccountname, description, distinguishedname, enabled, lastlogondate |
Select-Object name, samaccountname, description, distinguishedname, enabled, lastlogondate |
Export-CSV C:\scripts\users.csv -NoTypeInformation

That looks a little complicated, but let’s break it down. The first line is basically the first command we ran, but we’ve added -Properties, which retrieves specific properties from the user accounts that aren’t returned by default. The pipe operator | takes the output of one command and feeds it into the next – this will be very familiar to anyone who’s worked in Linux. So, we’ve taken all of the user accounts and piped them to select-object, which we understand from before as well. Then, we pipe that to export-csv, which formats everything nice and neatly into a CSV file. There is also an import-csv cmdlet, which can be used as a data source, but we’re not going to get into that today.

If you ever want to see what options are available for a command, or how it handles things being piped to it, there are excellent manpage-style help files available. get-help get-aduser -full will show you all of the options available for the get-aduser cmdlet and will also include some sample commands to point you in the right direction.
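
And if -full gives you more scrollback than you want, get-help can also skip straight to the sample commands:

get-help get-aduser -examples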

The Active Directory module has 76 cmdlets available, so get-aduser is just the tip of the iceberg. get-command -module activedirectory will show you all of them and you can use get-help to see what they do and start playing around.

I hope this helps you get over your fear of PowerShell and get you started with some PowerShell-based reporting! You’ll have your whole account creation process automated in no time! (protip: use new-aduser)
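
To whet your appetite, here’s a minimal sketch of creating an account. The name, OU, and password are all made up for illustration, so check get-help new-aduser -full before pointing anything like this at a real domain:

# Build a SecureString password (hard-coded here only for illustration)
$password = convertto-securestring 'Sup3rS3cret!' -asplaintext -force

# Create and enable the account in a hypothetical OU
new-aduser -name 'Some User' -samaccountname 'suser' -path 'OU=My Users,DC=my,DC=domain,DC=org' -accountpassword $password -enabled $true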

Once upon a time, some dread and fabled tech-berserkers saw a need for a new online village targeted at sysadmins. Given the blessing of King RedGate, they established an outpost and began recruiting other like-minded admins. That community became known as The SysAdmin Network.

Fast forward two years or so, and the outpost has become a small town of very smart individuals who share their trials and successes. Their fears and hopes. Dreams and nightmares. It’s not yet a bustling metropolis, but it has potential. There are currently 912 enrolled members, with a few more signing up each week.

The site is built on the Ning platform, so a lot of features already exist to help the community gel, communicate, and participate: a forum, a gallery, a chat room, event scheduling, user blogs, videos, and a few lesser features. One of my fondest memories is of a Solaris / ZFS event done by Isaac Bush (who I also interviewed after he revealed on the SysAdmin Network that he removed all anti-virus from his Windows machines and survived quite nicely). That’s where I first learned of the wonders of ZFS and began to consider using it for production systems.

Heck, I even relied on The SysAdmin Network as a means of communicating with members of the 2012 RedHat Study Buddy group.

RedGate is reordering its commitments and the SysAdmin Network has come under scrutiny. It’s a fine community, but deserves more love and care than it is currently getting. That’s where the sysadmin community at large can step in.

Friends. Geeks. SysAdmins. Lend me Your Modship.

RedGate is looking to pass off the moderating and managing of the SysAdmin Network to interested members of the SysAdmin community at large. There are no hard prerequisites or rules of engagement. Just show a real interest in the community and participate in its moderation.

The opportunity has been offered to me, but I’m hesitant to accept it alone. I’ve got a lot on my plate lately and don’t think I can single-handedly do a community like that the justice it deserves. However, with a posse of responsible co-leaders, I think it can be done.

Here’s the question posed to you:

Will you be an administrator for the SysAdmin Network? If you’re interested, of an adult age, and truly experienced as a systems administrator, please consider this opportunity. Personally, I’d like to see The SysAdmin Network flourish, but that will take genuine effort to cultivate participation and a healthy, vibrant culture. It needs a small population of motivated individuals to make it happen; it can’t happen with just one person.

Personally, I’m hoping to see at least two other people show enough interest in this to make it happen, though there’s no true limit. If just one person speaks up with genuine interest, you’re likely to be seriously considered. After all, this is RedGate’s baby to pass off, not mine; I’m merely interjecting my own thoughts into the matter. I just believe a handful of interested people would be better.

To be sure, there is no recompense for this responsibility. It’s just a labor of love; neither RedGate nor anyone else is offering any kind of remuneration. If you feel like contributing to the sysadmin community in this way, comment below or email me at [email protected] and I can get you in touch with the berserkers at RedGate.

Lastly, I can neither confirm nor deny any allegations that viking helmets help your chances of being granted moderator status.

I started out the 2012 RedHat Study Buddy Group with the best of intentions. I really did. And then life happened. Or maybe that’s my excuse. The short story is that my personal development in this endeavor has halted in the last three weeks, thus I have not posted anything about it.

TL;DR

I’m lame

Okay, Maybe There’s More to It

A few rush projects for a client got in the way and I’ve been working nearly around the clock. I’ve been trying to move into some of my own colocation space as well as get the applications and services set up that will track billing and accounts. It’s been a pretty sunup-to-sundown-and-then-some affair, and when it came down to it, getting RedHat certified wasn’t going to net me any paid invoices in the very near future.

So what does this mean for me? It means that I’m going to keep chugging along at a reasonable pace to learn the skills that I need to be more profitable. That means RedHat studying. However, at the moment, it’s looking like the goal of a RedHat certification on my wall before the year ends is going to be unattainable.

So what does that mean for the study buddy group? It means that with a leader who goes AWOL, it’s harder to focus. However, the group remains the same and I hope you’ve all been studying a lot better than I have. Keep at it and stay vigilant – even if I haven’t. =)

What do You Need?

If you’ve signed up to pledge the acquisition of a new RedHat certification before the year ends, let me and the group know what you need to be more successful. Comment below or go to The SysAdmin Network and start a conversation. If you need virtual machines, some conceptual help, real world anecdotes, or whatever, I’m sure someone will be able to give you some of what you need.

We had 11 people sign up for the study buddy group. Where are you all at in your studies? Kenny? Noe? Simon? Jeff? Shashi? Matthew? Adam? Ahmed? Eddy?

We’ve still got time. Truth be told, I could probably buckle down and still make a late December exam. It’s not entirely out of the question. I have to sit down and evaluate what’s important, what is getting me paid, and what would be most beneficial for the coming months.

If you’re still on track, let us all know. If you’re off the tracks, let us know that too. =)

TL;DR

Cobbler is awesome. Go to their Indiegogo page and consider a donation.

The Long Story

Anyone familiar with the RedHat slice of the Linux pie has a good chance of having heard of Cobbler. I could go on about it, but I’ll let the project speak for itself:

Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between lots of various commands and applications when rolling out new systems, and, in some cases, changing existing ones. It can help with installation, DNS, DHCP, package updates, power management, configuration management orchestration, and much more.

The Cobbler project is looking to develop further. The following is taken from their Indiegogo page concerning their future hopes:

  • New hardware to add IaaS features to cobbler. Specifically, we are looking to add support for deploying Cobbler-based images to OpenStack and Eucalyptus. As such, we’ll need a few new servers to create a small private cloud for testing.
  • New hardware to build a dedicated test rig (namely Jenkins) to run continuous integration tests.
  • Funds to help pay for the maintenance (power, etc.) for all of the above hardware.

They’re looking to raise $4,000 USD and have set up an Indiegogo funding page to collect those funds. It is set up as a flexible funding campaign, which means the project keeps whatever is contributed by Tue Dec 04 at 11:59PM PT, even if the goal isn’t met.

They’ve got a ways to go before they hit their desired $4,000, so hop on over and consider a contribution if you find the tool useful. Share their Indiegogo page or this post to drum up further support. I think we can see the project get its desired support and then some.

(Today’s post is written by guest blogger Scott Pack!)

I have a bottom-of-the-line box sitting on my desk that I use for some testing. It is a hardware clone of the oldest Snort sensors I have deployed, and by old I mean corporate desktop grade, vintage 2003. I have it running and set up so that I can test configuration changes, new rules, software updates, etc. This all makes the system pretty mundane and cookie-cutter. I’ll often get a hankering to start from a blank slate, tell this thing to go koan itself, and come back to a clean install. This is all a rather long and drawn out way of saying that I don’t really care about the health of this system and don’t pay very much attention to it.

This morning, in the email containing the output from the auto-update script for this system, I saw gobs of errors of the form:

Error unpacking rpm package 4:perl-5.8.8-38.el5_8.i386
error: unpacking of archive failed on file /usr/bin/a2p;509097e4: cpio: open failed - Read-only file system

If there’s one way to catch my interest, it is to tell me that system partitions are read-only. Some quick research showed me that yes, every single file-system was in fact read-only. SMART was also showing errors out the wazoo. Over the years I have learned that SMART’s false-negative rate is astronomical, but its false-positive rate approaches zero. That is, a “healthy” report from SMART is meaningless, but a “failed” report is completely trustworthy. This was really no big deal; since the system wasn’t used for anything time-sensitive, I could just pull a hard drive off the shelf and reprovision.

Since one thing I had used this system for was to do some performance profiling of snort, I had mocked up a couple of analysis and test scripts on it. The scripts themselves were easily rebuildable, but to save me the effort I used the old tar+ssh trick to archive the home directories for myself and root for later extraction:

tar -zc /home/packs /root | ssh packs@node1 'cat - > snort-test_homes.tar.gz'

This is where things got hairy. Since the last time I had used ssh to go to node1, it had also been rebuilt, resulting in a host key change. With StrictHostKeyChecking enabled, ssh refuses to connect if there is a host key mismatch. Ordinarily, I would just delete the key from the known_hosts file and move on. With the file-system being read only….

I worked around this by overriding my known_hosts file on the command line. Since all the file-systems were read-only I couldn’t actually write any files, nor did I need to save the information. This left me with the perfect choice of /dev/null. Adding in this option made my final command look something like this:

tar -zc /home/packs /root | ssh packs@node1 -o UserKnownHostsFile=/dev/null 'cat - > snort-test_homes.tar.gz'

I liked this because it worked and was easy. I didn’t like it because it felt skeezy. (One footnote: with known_hosts pointed at /dev/null, the remote host key is always unknown, so depending on how strict your client config is you may also need -o StrictHostKeyChecking=no to avoid a prompt or an outright refusal.)

As many of you know, I own my own IT consulting company. It’s small, and so far consists of just me plus some trusted colleagues I can subcontract to as needed. (As an aside, I’m also moving into the hosted / managed services realm, so if you need some colocation space in a Phoenix, Arizona datacenter as well as managed services for those systems… hollah! =) )

Lately one of my clients has relied on me to be the main point of contact for their own customers. Without going into too much detail, and trying to keep it very high level, my client provides an internet-based service to home users. When that service can’t be used as a result of some misconfiguration of the home user’s PC or equipment, I call them and make it right. Here’s where things get frustrating.

When Cute Goes Too Far

Everyone loves a unique domain name. It doesn’t even have to make sense. Witness the popularity of Cover Duck. They make spa covers. What does that have to do with a duck? Nothing. But it’s a catchy, single-syllable word that is associated with a funny animal. I’ve seen numerous articles on the genius of that domain / business name.

A more recent example is Daddy Cheese. Yes, that link is safe for work. I was teasing a friend about how we should start a dedicated Minecraft server hosting company (It’ll be awesome, Joel!!). I searched around for some similar hosts and found a list of the top Minecraft hosting companies. Daddy Cheese is one of the best, apparently.

Aside from being vaguely perverse and making me feel uncomfortable, it’s darned memorable, even though it has absolutely nothing to do with Minecraft as far as I can tell. At least Cover Duck gets the word “cover” in there. The syllables are few, the words are common and the spelling is reliable. (As an aside, if you have a word in your company / domain name that can be spelled a few different ways, don’t use it.)

Furthering the cuteness, you can use obscure TLDs for things like delicio.us and imag.ly. Spiffy, no? Here’s where it gets ugly…

Stop and Watch This Video

(Embedded video)
A Good Name is More Desirable than Great Riches

Since I do a lot of remote support for this client, and others, I rely on screen sharing utilities. I’ve worked with quite a few, and I’m considering my options, but for now I’ve settled on the free tool from LogMeIn called Join.me. For a free tool, it works well and I can access most client computers, both Mac and Windows. It has a few irritating hiccups that I’ve discovered and it isn’t terribly fast, but I can get by with it until I invest in an enterprise tool.

However, I have a terrible time getting people to understand the URL.

Okay, I’ll have you bring up your web browser; whatever you use to look at web pages with. When you’re ready, I’ll have you go to a website for me. Type this URL in: double-u, double-u, double-u, dot ‘join’ dot ‘me’ – right. Dot me. No… just dot me. There’s no dot com or dot net. Just join dot me. No – join as in “join two things together”. Oh, you see an error that the website cannot be found? What did you type in as the URL? Uh-huh, there’s no .com. No… it’s not join dot me dot com. Just jay-oh-eye-in dot em-eee. No, no dot com. Just double-u, double-u, double-u dot join dot me. Dot em ee. Em as in Mary, Ee as in Echo.

To LogMeIn’s credit, I recently discovered that they own the domain joinme.com as well, so that will make things easier. However, it would have been nice if that domain were better publicized. Its obscurity suggests that the marketing force behind the main website doesn’t have much experience with technical support, and that if there were any cries of alarm concerning the join.me domain name, they were ignored.

What to Take Away

Join.me is an okay remote support tool. The end users that I have to support are great people who don’t typically have a lot of computer smarts, and that’s fine. However, if there’s one thing to learn in all of this it’s that functional wins out over cute any day of the week.

Cover Duck and Daddy Cheese have Functionally Cute names (okay, Daddy Cheese isn’t cute — it’s disgusting. But hey, it’s memorable). However, there’s no ambiguity. Imagine if an obedience training school named itself FauxPaws.com. Every time someone gave that domain out, they’d have to spell it. As a result of the confusion, people would forget it or just give up.

I see that with my own company’s name. I chose a name that has a four-letter word followed by the words “hosted services.” I came to find that the four-letter word (not a bad four-letter word, of course =) ) is hard to make out over the phone. So I’ve taken to saying the first word, then immediately spelling it, then continuing with the full name, and then saying the full three-word name all over again.

What are some of your worst stories with constantly misunderstood names? I know you’ve got some good ones. Let me know in the comments below along with any insights you learned from the experience.

When using a *nix variant of one kind or another, you can press a certain key combination and be transported to another land. A land of magic. Or at least a fresh shell. If you’re using a graphical desktop environment, pressing this key combination will take you from the land of pretty colors to a land of monochrome… unless you’ve gone all crazy with your bashrc file, but that’s for another post.

What is this strange key combination? It depends on your distribution, but it’s going to most likely be Alt-F1 (through F7) or Control-Alt-F1 (through F7). Before you go pounding that key combination, know that your desktop environment is likely on the F1 or F7 terminal. If you get whisked away from your sparkly world of windows and menus, but you want to come back, just press Control-Alt-F1/F7.

The concept is known as “Virtual Terminals” or “Virtual Consoles” and I suggest that if you’re not familiar with it, you read up on it. Especially if you’re considering getting your RHCSA like some friends and I are. Oh, and make sure to check out the Stack Exchange thread of epic proportions titled “What is the exact difference between a ‘terminal’, a ‘shell’, a ‘tty’ and a ‘console’?” as well as the nearly-as-edifying Q&A “Why is a virtual terminal ‘virtual’, and what/why/where is the ‘real’ terminal?”

Anyway, I use virtual terminals to have multiple commands running simultaneously but not in the background of a single shell (e.g. running top on virtual console 2 while I go back to frobbing on console 1). I find it easier to check the progress of a running program when it’s in a virtual console rather than as a background job in a single shell.

In the course of using virtual consoles, sometimes I want to know exactly where I am. Am I on vterminal 4? 6? How would one find that information out?

The answer is the command tty. In fact, if you read the two Q&A sessions I linked to above, you’ll know exactly why the command is called `tty`. When you run that command, you’ll be told which terminal you’re on.

[user@computer ~]$ tty
/dev/tty3

Now we know exactly where we are! You can also do some stupid tricks with virtual terminals since they are represented by device files in the Linux filesystem. For example, you can send output from one tty to another. echo woot > /dev/tty3 would send the text “woot” onto the console located at tty3 (to be accessed with Alt-F3). Note that you need write permission on the target device, so this generally works as root or as the user logged into that console. I’m sure there’s a useful application for that ability, but I haven’t found it yet.

It should also be noted that if you want to see how many virtual terminals currently have someone logged in, the who command can help you.

[user@computer ~]$ who
wesley     tty3     2012-10-13 12:13
snipes     tty2     2012-10-13 12:13
crusher    tty4     2012-10-13 12:14
borland    tty1     2012-10-13 12:10 (:0)

Have any handy bits of wisdom for the use of virtual consoles? Want to share? Share with the class in the comments section.

 
