Today I’m at the VMUG user conference in the greater Phoenix, Arizona area. It’s actually held on the west side of Phoenix, in the city of Glendale. For more information on the conference, check out the official agenda page.
I’ll have my choice of four different talks in each of five separate breakout sessions throughout the day. Here’s how I’ll fill them up:
Breakout Session 1: The first breakout session starts at 10:30 and I’m probably going to take the one titled “vCenter Operations – What’s New, What’s Cool?”
Breakout Session 2: I’m going to take the session titled “Veeam: A look under the hood: Veeam Backup & Replication.”
Breakout Session 3: I’m torn between two sessions. I’m either going to the “Nexsan: A Case Study in Simplifying Management & Reducing Expense by 50%” session or “Xsigo: Under the Hood with Virtual I/O Technology, and How VMware Uses It to Do More.” I don’t grok virtual storage (beyond simple LVM concepts) or what problems it solves, so the latter would probably be more informative.
Breakout Session 4: Again, it’s a toss-up between two sessions: “Teradici Corporation: An inside look into the PCoIP® protocol and zero clients” and “DataCore: Architecting Your Storage Infrastructure to Yield Virtualization Objectives.” I’m starting to prefer the storage sessions, though. I mean, really… am I going to be implementing a PCoIP system anytime soon? Likely not.
Breakout Session 5: “HA 5.0 Deep Dive” is the probable winner, but “Performance Best Practices for vSphere 5” is a slight possibility. Who am I kidding? After destroying HA myths just recently I’m ready for a rumble.
I’m tweeting on the #AZVMUG hashtag and will likely include some pictures. I may or may not live stream on this Justin.tv channel. I’ve scheduled this post to auto publish about an hour or so before the live blogging will actually begin.
Registration is between 8:00 and 8:30AM and I’m currently at UTC-8 (the other half of the year I’m at UTC-7, but that’s Arizona’s fault, not mine). You should hear from me sometime between 8:00 and 9:00AM. EDIT: I totally messed up the time zone information. Arizona is always UTC-7. We don’t change for daylight saving time. So right now I’m on the same clock as UTC-8 plus DST, AKA Pacific Time.
I arrived early and was able to check out the vendor area as people scrambled to set up their booths. A fruit-n-muffins breakfast is available including some pretty darn good coffee. Sadly, the only free wifi that is available is the hotel’s lobby wireless network which I’m having some trouble with. I’ll only be able to update the blog between sessions. Pardon any spelling mistakes and choppy writing. Pretty standard live blogging disclaimers.
Thankfully it’s not a massive conference, so it looks like there are no mascots prowling around or booth babes being exploited (the latter of which should please Matt Simmons).
It’s a very informal get-together in spite of the size of the meeting. I was intimidated by the size of the venue and the number of vendors, but it seems very close knit and “homey” to me. They had a few minutes of “housekeeping” that made it very apparent that it’s a friendly group. My last VMWare event was a rather large one that was corporate sponsored. It was loud, with lots of music and flashing lights, and seemed like it was trying too hard to get people excited. It’s computer virtualization, not a Linkin Park concert.
I learned that VMUG is a completely indie organization. It used to fall under VMWare. For about a year now it’s been its own entity. That was all news to me since I’m not intimately close to VMWare.
There’s apparently some kind of hands-on lab available here that’s new to the VMUG. It’s sponsored by EMC. I hope to get my hands on it. Apparently it was run at VMWorld last week to great acclaim.
There are maybe only 150 people in the main hall. Seems pretty sparse for the size of the room that was rented out. I was expecting it to be packed. That’s nice to see – better chances for me to win raffles!! I want to have to rent a cube truck to make it back home with prizes.
Tony Welsh, VMWare Systems Engineer, takes the stage.
He’s going to talk about the lab environment. 148,000 VMs were created at VMWorld in the lab environment over 4 days. All the datacenter equipment was offsite. 480 lab stations were on site and going to a vSwitch environment. There were three datacenters used to create the environment on the backend. Terremark in Florida, one in Europe and one right there in Las Vegas.
Talking about vSphere 5 / ESXi 5.0. Autodeploy is baked in now, so no more custom scripts. The storage engine is profile driven, so you can simply deploy a set of VMs and tell it what kind of performance you want from the storage system and it will auto deploy to your storage backend appropriately. The 2TB limit has been lifted. HA has been rewritten. Split clusters are finally true, rather than being unrecoverable if storage went offline.
View 5: PCoIP has been enhanced to reduce bandwidth requirements, upwards of a 75% reduction. A 3D graphics driver is available.
Blah blah, missed a lot of stuff about Android and iPhone apps as I typed. *sad face* Why am I checking ServerFault on my iPhone? Oh yeah, I’m trying to solve a problem to get a 200 point bounty. Forget this VMWare stuff, I need arbitrary points that mean nothing in the real world!!
AppBlast is a product that can bring up remote applications in “any” browser. Air quotes are mine. Apparently it worked to bring up Excel in an iPad. I now have crazy dreams of my own VMWare application server in my closet so that I can get certain applications to run on my iPhone.
Tony leaves the stage. Michael Krutikov from Symantec in the BackupExec product team takes the stage. Wearing a very dark suit, immaculately pressed. Looks like either an FBI agent or a preacher.
Right away he positions BackupExec as the #1 backup solution hands down for VMWare. Takes a swipe at “niche solutions”. Considering that Veeam is in the house, that’s… awkward. IBM is actually #2. EMC a very close #3. CommVault, CA and HP straggle in after that. Seriously? CA? *gags* He says that half of the world’s data is backed up by Symantec.
He says that BackupExec and NetBackup are 95% identical so it really doesn’t matter which one you use most of the time. So… why have two product lines? *shrug* I’m no VM backup expert, so I hope there’s some valid difference. But still, seems silly. Ahh, later he says that NetBackup does better global deduplication.
Michael is leading the group with questions – good questions – but no one is raising their hand or really interacting. Tough crowd. “How many people restore a full VMDK when you need to restore just a few files from a backup?” No one responds. C’mon people, you know you do that more than you’re willing to admit.
NetBackup uses vStorage APIs. No agent in the virtual machine. Nice (not revolutionary by any means, of course). Only need one backup to be able to recover files from a snapshot. Not a VMDK base and then snapshots on top of it. Restores are quicker.
V-Ray is what they call their snazzy backup technology. I’m hearing the word “patented” a lot. Global deduplication across VM Guests, ESX servers, virtual, physical and NDMP sets. That seems cool.
It took 15 minutes before I heard the term “Single pane of glass!” I was expecting it sooner. Mad props.
There’s a basic deduplication lesson. Target, source, blah blah. You all know about that.
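For anyone who doesn’t: the core trick is just content hashing. Here’s a toy sketch of block-level dedupe (my own illustration, definitely not Symantec’s implementation):

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split a stream into fixed-size chunks and store each unique chunk once.

    Returns a chunk store (hash -> bytes) and the ordered list of hashes
    needed to reassemble the original stream.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # a duplicate chunk costs nothing extra
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original stream from the chunk store."""
    return b"".join(store[d] for d in recipe)
```

Two mostly-identical VM images share most of their chunks in the store, which is why global dedupe across many guests pays off so well.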
Apparently the agent for backup installed on a media server accelerates backups enough to where clients buy the software just to use the agent to then spool the data onto a deduplication appliance that itself is a backup device. Something like that, anyway. I was a little foggy on that whole discussion.
There is a tab within vCenter / vSphere for Symantec backups, so no separate console is needed to look at your backups.
Symantec has backup exec appliances now that were announced at VMWorld last week. I wasn’t aware of that. They’re a big hit. The appliances do look nice. They have agents and etc. already in it. It would be good for remote sites. I’m interested enough to look into it for the future. Symantec will be offering its own cloud backup service so you can go Disk to Disk to Cloud if you don’t want your own DR site. Clouds. Huzzah.
Closed laptop. Gotta go find Wifi to paste this into the blog.
Charging laptop and got close enough to the lobby to be able to post this. Surprised that such a swanky hotel doesn’t have full, free wifi coverage. Someone remind me to order a new battery for my aging XPS 1530. Great laptop, probably has another 3 years in it. The battery, however, does not. Even so, Fedora 14 is good to it. If I remember to turn off my wireless adapter I might be able to eke out 2 hours from it. Quite a difference from the 4 or so hours it used to get.
Is there anyone reading this that’s at the event?
Presentation over. It was just announced that the labs are available!
Hotel lobby wireless is flaky. Can’t get online.
I didn’t know about the labs so I’m going to see if I can grab a spot and do that instead of the breakout session. There are about 15 workstations available to sit at.
The lab was packed. I decided to go to the Veeam talk instead. It’s all about backup and replication.
The presenter states that images aren’t enough. Applications within the VMs are often not consistent when restores are performed. The next way some places back up virtual machines is to take a virtual machine image (back up the VMDK) and then have an agent on the inside.
A feature called instant restore: They wrote their own NFS service that “rehydrates” the backed up VM on the fly, so you can have a downed VM back in production in seconds as it starts to be restored. The NFS Veeam datastore is a VMWare datastore within ESXi. The NFS service is just like a proxy: it goes through the NFS service to the ESX host. You can use whatever storage you want. They don’t use an NFS datastore; they use any volume, and then publish it with the NFS service. Restores like this, of course, don’t edit the original backup file. The changes to the VM are captured and the backup is never touched.
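I don’t know Veeam’s internals, but “the backup is never touched” sounds like classic copy-on-write redirection. A toy sketch of the general idea (my guess at the technique, not their code):

```python
class CopyOnWriteDisk:
    """Toy copy-on-write overlay: reads fall through to a read-only
    backing image, while writes land in a separate delta area.
    This is only an illustration of the general technique."""

    def __init__(self, backing: dict):
        self.backing = backing  # block number -> data; never modified
        self.delta = {}         # blocks written since the restore began

    def read(self, block: int):
        # Prefer the delta; fall back to the pristine backup image.
        return self.delta.get(block, self.backing.get(block))

    def write(self, block: int, data):
        # All changes are captured here; the backing image is untouched.
        self.delta[block] = data
```

The running VM sees a normal writable disk, but the backup file underneath stays pristine, which is exactly what you want mid-restore.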
I just heard the word “agnostic.” “Rehydrate” and “agnostic” within a few minutes of each other. Oh yeah, this is a sales demonstration. The sad thing? I use those words too sometimes. D=
But what if your live servers are on a SAN, and the backup storage is a slower NAS? How do you move the VM to the faster storage? It relies on storage vMotion to move from your NFS server to your SAN if you want. You could do a cold migration as well if you’re not licensed for vMotion.
Next feature: “SureBackup.” After backups are done, they are loaded using Instant Restore, and it then tests things like pinging network adapters, makes sure that everything is running, and can run custom scripts to make sure that applications inside the VM are running. You could run scripts to make sure that Exchange is running right, or SQL Server, or whatever. There is a virtual lab, which is a vSwitch with no network adapters, that the VMs are loaded into and tested in. That feature sounds awesome.
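The custom scripts part could be as simple as a TCP reachability check against the freshly booted VM. A sketch of the kind of check I mean (hypothetical host names and ports, my code, not an actual SureBackup script):

```python
import socket

def service_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if something is accepting TCP connections on host:port.
    A SureBackup-style verification job could run checks like this against
    a VM that was just booted from backup in the isolated virtual lab."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g., verify a restored Exchange VM answers on SMTP (hypothetical name):
# service_is_up("restored-exchange.lab.local", 25)
```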
Next feature: On demand sandbox. Allows you to power them up for QA / patch testing / etc. It’s a lot like SureBackup, but used in a less automated way. There are dependency groups, so if you want to test Exchange, it needs a DC of course, so if you make a dependency group including the DC, you’ll automatically have the DC when you spin up Exchange to test on. Application aware restorations means that the DC will restore in AD restore mode so that you’re not replicating bad data. Application restores work in SQL Server, Exchange and even Oracle. Look into U-AIR for more info on that.
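Dependency groups sound like a plain topological-ordering problem under the hood. A sketch of how I imagine the boot-order resolution works (my own hypothetical code, not Veeam’s):

```python
def boot_order(deps: dict) -> list:
    """Return a power-on order in which every VM starts after its
    dependencies. `deps` maps a VM name to the list of VMs it needs
    running first, e.g. Exchange needs a DC."""
    order, seen = [], set()

    def visit(vm):
        if vm in seen:
            return
        seen.add(vm)
        for dep in deps.get(vm, []):
            visit(dep)       # start dependencies first
        order.append(vm)

    for vm in deps:
        visit(vm)
    return order
```

So spinning up the Exchange group with the DC as a dependency would always boot the DC first, which matches the behavior described.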
Nice thing about Veeam is it’s agentless, so they don’t charge per agent. They don’t care how much data you’re backing up. It’s just feature-based pricing: Standard and Enterprise.
Application backups are done without agents, which is interesting. Not sure how that’s done. There was a demo about a change being made in AD on a production DC, and then the Veeam backup server was looked at for the DC backup. The single LDAP property that was changed was restored, all without rebooting the server or causing disruptions to the DC.
The latest version now separates the roles out, so you have backup servers (basically the schedulers), proxy servers for dedupe CPU, and repository servers as storage points. Apparently before this latest version was released, each backup server had all three roles on it.
Just finished eating. I’ve covered half of the vendor floor. I’ve actually found some interesting products I’ve never heard of before: VirtualWisdom’s SAN monitoring solutions and Quest Software’s VM management products.
I’m ignoring the Lunch Keynote by Arista networks which goes from 12:15 to 1:30. I think I’ll go to the Xsigo breakout at 1:45, but take the opportunity to tinker in the VMWare / EMC lab instead of a breakout session at slot #4. The fourth session was the least pertinent to me anyway.
With how many times my badge has been scanned, I’m expecting a flood of sales calls and emails. I just realized, I didn’t use my obfuscated email address. I have an email address that I use for vendors so my main company account doesn’t get hammered. I fail.
Time to go troll the vendors. Once more into the breach!
After a rejuvenating lunch (fully catered Mexican-style food that was amazing), I walked into a talk by virtual I/O company Xsigo. Pronounced “SEA-go.”
Xsigo provides a large appliance known as an I/O director. It gets all adapters out of servers, puts them in one large box, and aggregates bandwidth into 1.5TB total. You carve up your datacenter into as many vNICs or vHBAs as you need.
You have blade servers consolidating your servers. VMWare consolidates your OSs. Your I/O infrastructure is consolidated by Xsigo. They use InfiniBand, a 40Gb connection between each server and the virtual I/O. InfiniBand cards in your servers connect to the Xsigo director. The director is physically divided into two parts. The top half has what looks like 24 InfiniBand 40Gb ports. The lower half has line cards for 1Gb / 10Gb Ethernet and 4Gb / 8Gb Fibre Channel. You then provision virtual NICs for FC or Ethernet to your servers.
The savings on fabric for servers is substantial. Instead of tons of fabric in each blade housing, you have a few Infiniband fabric cards and then use the Xsigo to provision bandwidth.
Using Xsigo’s management console, you can manage your physical servers and you can set your peak and committed rates for each vNIC so you manage your congestion tolerance. You can place servers in groups and provision network interfaces for them all at once.
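I imagine the committed-rate bookkeeping looks roughly like this toy model (nothing from Xsigo’s actual console; the names and numbers are made up):

```python
class IOPort:
    """Toy model of carving one physical uplink into vNICs with
    committed (guaranteed) and peak (burst) rates, in Gb/s."""

    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.vnics = {}  # vNIC name -> (committed, peak)

    def committed_total(self) -> float:
        return sum(committed for committed, _ in self.vnics.values())

    def provision(self, name: str, committed: float, peak: float):
        # Committed rates must fit in the pipe; peak rates may
        # oversubscribe it, since not everyone bursts at once.
        if self.committed_total() + committed > self.capacity:
            raise ValueError("committed bandwidth oversubscribed")
        self.vnics[name] = (committed, peak)
```

The interesting design point is that only the committed rates are hard-checked; the peak rates are just your congestion tolerance.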
There is a tab that can be installed in vCenter so you can have a view into your Xsigo network from within vCenter.
I’m not 100% sure how much visibility there is to manage the bandwidth and see what and where your bandwidth is going. The management console looks okay, but I wasn’t clear on how I could see multiple Xsigo boxes’ performance metrics. There is a performance monitor built into the product to view quite a few different metrics. The largest deployment they have is four units sold to Disney Interactive. Apparently it saved Disney about 2 million dollars in HBA and cabling costs.
He mentions that they’re working on providing better insight into the I/O paths so you can know better how to provision your network cards.
I was planning on doing a VMWare lab, but when I went by the lab room, most of the workstations were broken down and only four or five were left. I figured that it would end up being a 1:1 sales pitch, so I decided to go to a breakout session instead. I took a surprise course. I usually avoid purely cloud themed sessions, however the private cloud session put on by EMC / VMWare intrigued me. I’m somewhat interested in an “internal cloud” for some ideas I have. I just wanted to see what this was all about.
Get ready for a definition of the term cloud! “It’s your datacenter, virtualized. Not just your servers, but the whole datacenter. Networks. Storage. CPU. Everything.” That’s actually a fair definition. The idea is to pool resources, CPU, Network, Storage, etc. We all know this, but sometimes it’s hard to pull out of the older view of thinking about virtualization as purely “I make one physical server hold many virtual servers.”
An interesting idea: even if the concept of “chargeback” is never used within a company, you can use those numbers as a “costback” to show how much a project is costing the IT infrastructure, but also how IT might be saving the company money. I thought that was interesting, since very few companies actually use chargeback internally. I always wondered if those metrics could be used in a different way, though.
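The arithmetic behind a “costback” report is trivial, which is kind of the point. A sketch with entirely made-up rates, just to illustrate:

```python
# Hypothetical per-month unit rates, purely for illustration.
RATES = {"vcpu": 25.0, "ram_gb": 10.0, "storage_gb": 0.50}

def monthly_cost(vcpus: int, ram_gb: int, storage_gb: int) -> float:
    """Rough 'costback' figure for one project's slice of the virtual
    infrastructure: units consumed times the made-up rates above."""
    return (vcpus * RATES["vcpu"]
            + ram_gb * RATES["ram_gb"]
            + storage_gb * RATES["storage_gb"])
```

You never bill anyone with it; you just show the project team that their app consumes, say, $360 a month of pooled resources instead of a whole physical server.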
The emphasis is that it’s not just about virtual servers, it’s a virtual datacenter. The first quarter of the talk was pretty much as nebulous as one could expect.
The next quarter was some interesting high-level views into how EMC and VMWare integrate and how each uses the other’s APIs so that you can see which VM is using which LUN, etc. I was impressed with that part. It doesn’t seem to just be marketing spiel. It seems legit that EMC and VMWare really interplay with each other in good ways.
It was all over my head though. I’m not a storage admin nor do I use a lot of VMWare in the context of this breakout session. I was impressed with the sub-LUN tiering of storage. Data within a LUN can be spread across multiple tiers of storage based on what parts are used more. That seemed pretty smart.
Mention was made of vBlocks. I’ve liked the idea of vBlocks for a little while now, but like it even more now. I hope to get to use it someday.
HA 5.0 has been rewritten and is changed much from 4.1. There is no longer a primary and secondary server. There’s no dependency on DNS (not sure how that was elaborated on). While HA supports IPv6, some other products do not, like the VC appliance. You can use IPv6 for HA but you won’t be able to manage it through the VC appliance.
Multiple hosts can act as a failover host. The limit on the number of host failures has been raised to 31.
The HA module that is installed on the host (the agent) is referred to as the Fault Domain Manager (FDM). There is now a master and slaves; there is no longer a primary or secondary. That helps in blade environments, where you used to have to keep very close track of primary hosts.
The master is the central point of communication. It monitors hosts and virtual machines and reports all of that up to vCenter. The slaves monitor only the VMs running on them. A slave forwards any type of state change, like a power-on of a VM or a crash, to the master. It will perform actions that the master dictates, for example restarting a particular VM. It also monitors applications within the VMs if that feature is turned on.
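If I followed the session right, the master election favors the host that can see the most datastores, with a tiebreaker on host ID. A sketch of my understanding (hedged: this is my paraphrase, not VMWare’s actual election logic):

```python
def elect_master(hosts: dict) -> str:
    """Pick the FDM master from candidate hosts.

    `hosts` maps a host ID to the set of datastores that host can see.
    The host with access to the most datastores wins; ties go to the
    highest host ID. (My rough reading of the vSphere 5 HA behavior.)
    """
    return max(hosts, key=lambda h: (len(hosts[h]), h))
```

The datastore count matters because the master uses datastores as a backup communication channel (heartbeat datastores), so you want the best-connected host in charge.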
HA also has application level components. You can protect applications within the virtual machines such as Exchange. I’d love to write more, however my battery is drained down to the last few minutes. I’ll have to finish this up when I get out of the session and near a power outlet.
Later that night:
I sat through the HA breakout session and it was by far the most technical of the talks. So much so that I zoned out through most of it, since I am not familiar with VMWare HA. I needed a base of understanding that I did not have. All in all, it looks like it has had quite an overhaul from previous versions, with a lot of special-case intelligence to make sure you don’t end up with a split-brain scenario.
Most of the talk was on the logic behind failover and master election and how bad network design won’t necessarily cause problems in fringe-case failures. Application-aware HA wasn’t gone into as deeply as I would have liked, since we went over time.
It was a good day! The fact that it is a user group and not a corporate event made it very friendly. A ton of iPads were raffled off (none of which I won) as well as a few other goodies.
So how was it? Was the live blog actually useful for you or a little blah? Are there any conferences in the southwest coming up that you know of? Let me know in the comments below.