Wednesday, November 30, 2005

Workstation 5.5 out the door

Well, I can stop directing VT experimenters at the Workstation 5.5 beta, now that Workstation 5.5 has shipped.

Monday, November 21, 2005

ZFS emerges from the vapor

The last few stragglers of Solaris 10 are finally making their way out the door, and look none the worse for their lateness. ZFS seems extremely promising from an administrability point of view; getting rid of LVMs, if it accomplished nothing else, would be a huge usability win, and greatly increase the appeal of pooled storage systems. Clicking around the links Bryan has posted above, I saw some promising things, but a few things struck me as odd.

For instance, Bill Moore shares an anecdote about a large performance win over UFS. The artificial benchmark in question flooded the system's block I/O layer with write requests, so the handful of reads lying around (to, e.g., page in the root shell so the sysadmin can figure out what the hell just happened) took seconds or minutes to complete, and the system was generally ground to dust. How does "ZFS" improve on this? Well, it observes that writes are bufferable, while reads are blocking, and so prioritizes reads over writes. It is therefore much better able to survive storms of write requests. Nice.

So, why am I putting "ZFS" in quotes above? Because it seems to me that any block I/O layer could perform the same optimization. Why is this, as Bryan claims, "a consequence of the end-to-end principle"? Couldn't I do the same thing with extfs in Minix?
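To make the point concrete, here is a minimal C sketch of a block-layer queue that applies exactly that read-over-write policy. All names are invented for illustration; this is not ZFS's (or anyone's) actual code:

```c
#include <assert.h>
#include <string.h>

// Toy block-I/O dispatch queue that services reads ahead of buffered
// writes. Nothing here is file-system-specific -- which is the point:
// any block layer could apply the same policy.

enum io_type { IO_READ, IO_WRITE };

struct io_req {
    enum io_type type;
    int block;
};

#define QMAX 64
struct io_queue {
    struct io_req reads[QMAX], writes[QMAX];
    int nreads, nwrites;
};

static void submit(struct io_queue *q, enum io_type t, int block) {
    struct io_req r = { t, block };
    if (t == IO_READ)
        q->reads[q->nreads++] = r;     // a process is blocked on this
    else
        q->writes[q->nwrites++] = r;   // already buffered; it can wait
}

// Dispatch policy: drain every pending read before any write, so a
// write storm cannot starve the reads that processes are stalled on.
// (LIFO pop, and no empty-queue check, for brevity.)
static struct io_req next_req(struct io_queue *q) {
    if (q->nreads > 0)
        return q->reads[--q->nreads];
    return q->writes[--q->nwrites];
}
```

The entire policy lives in the dispatch function; nothing about it requires end-to-end knowledge of the file system sitting above the queue.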

Another set of entries that my virtualization-oriented bias leads me to see differently is Bart Smaalders' musings on the consequences of ZFS for the future of Solaris. One of the pieces of extremely clever engineering in ZFS surrounds its treatment of snapshots; they're easy to make, lightweight, first-class, not necessarily read-only, etc., so Bart envisions all sorts of consequences for system upgrades, partitioning, etc. What's interesting to me, as a virtualization dork, is that virtualization (v12n?) makes all of this possible for any file system, under any operating system, with a minuscule fraction of the five-year engineering effort required to create a marvel like ZFS. So, if you find yourself fantasizing about using these snapshot features of ZFS under Windows, consider running that Windows machine in a VMware virtual machine.
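For the curious, the trick underneath virtual-machine snapshots is ordinary copy-on-write at the virtual-disk level. Here is a minimal C sketch (sizes and names invented for illustration, not any product's actual implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Minimal copy-on-write virtual-disk sketch: a frozen base image plus a
// sparse overlay of rewritten blocks. This is roughly how a VMM can
// snapshot a guest's disk with no knowledge of the guest's file system.

#define NBLOCKS 8
#define BLKSZ   16

struct cow_disk {
    const uint8_t (*base)[BLKSZ];       // the snapshot; never written
    uint8_t overlay[NBLOCKS][BLKSZ];    // blocks rewritten since then
    uint8_t dirty[NBLOCKS];             // 1 => block lives in overlay
};

static void cow_write(struct cow_disk *d, int blk, const uint8_t *data) {
    memcpy(d->overlay[blk], data, BLKSZ);   // base stays untouched
    d->dirty[blk] = 1;
}

static const uint8_t *cow_read(const struct cow_disk *d, int blk) {
    return d->dirty[blk] ? d->overlay[blk] : d->base[blk];
}
```

Because the redirection happens below the block interface, the guest file system, whether NTFS, ext3, or anything else, needs no modification at all.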

But, while it is perhaps lightly overhyped, I truly think ZFS rocks, and wish I could use it on my host right now. Maybe if Janus can follow ZFS out of the vapory mists, I'll make that threat a reality...

VT Coverage: Predictable and Complete Confusion.

Tragically, this article is typical of the coverage of Intel's VT release. Nearly all of these articles take the reader on the same wild ride of falsehood and sheer conjecture:
  1. A new era of virtualization has dawned! In short, no. Virtualization is exciting and important, but the era dawned in 1998, and it's now some time in the era's early afternoon. Every single one of the features that these articles describe as gauzy science fiction that Intel's boffins are struggling to bring to life is available right now, today, a phone call away, in a fully supported commercial offering from VMware. VT is nothing but an alternative technique for some of the very low-level parts of VMware's software. It makes possible exactly nothing that was not possible before.
  2. Err. Ok. But, like, hardware is fast and software is slow, so a new era of performant virtualization has dawned! Or something! As episodes like the VAX Call instruction illustrate, attempts to solve complex problems entirely with one, super-general purpose chunk of hardware are not always performance wins; software's flexibility, intelligence, and adaptability often mean that it can exploit opportunities that hardware, whose development pipeline is almost an order of magnitude longer than that of software, cannot. Whether VT falls into this category remains to be seen, and we have to cut early implementations some slack, since this is, after all, first generation hardware and software, whereas VMware has been tuning its virtual machine monitor for seven years. But, simply assuming that "hardware is fast" can be ... misleading.
  3. Well. At least a new era of, maybe, more correct virtualization? Or, more simple virtualization? Throw us a bone, here. Maybe, maybe, maybe. But it will take some time to see whether these claims materialize or not.

Flame off. Deep breaths. Now seriously: is it really too much to ask that supposedly technical publications obtain some available hardware, and run some of the available software before breathlessly copying and pasting the Intel press release? Would any other piece of hardware get comparable coverage without the author ever having seen a single physical manifestation of the artifact, let alone run a real application on top of it? Can you imagine a 3D card review like this?
NVidia's new gForce 68802xLxXx has ushered in a shining new era of accelerated gaming. Imagine blowing some stuff up in Doom 7, and it looking really, really, REALLY REAL!!!! And fast. We're hoping to get our hands on one in Q106. Hopefully someone will have written a driver by then.

Monday, November 14, 2005

VT hits the streets

Intel's Virtualization Technology (VT) has hit the streets. For those just tuning in, VT, like AMD's competing initiative SVM, nee Pacifica, provides hardware support for CPU virtualization in the x86. I've spent a good deal of the last year working on supporting VT in VMware's VMM; this work is currently available in the public beta of VMware Workstation 5.5, so, if you have a really, really new Intel CPU and are curious about how this VT stuff works, please give our code (and Intel's new hardware) a try.
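If you want to check whether your machine is one of those really, really new ones: CPUID leaf 1 reports VMX support in ECX bit 5. A small C sketch, hedged with an architecture guard since non-x86 machines obviously lack the instruction:

```c
#include <assert.h>

// Detect Intel VT (VMX) via CPUID. Leaf 1, ECX bit 5 is the VMX
// feature flag. (AMD's SVM is reported separately, under CPUID leaf
// 0x80000001.) Guarded so this still compiles on non-x86 machines.
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
static int cpu_has_vmx(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                 // leaf 1 unavailable
    return (ecx >> 5) & 1;        // CPUID.1:ECX.VMX
}
#else
static int cpu_has_vmx(void) { return 0; }  // not x86 at all
#endif
```

Note that the CPUID bit only says the CPU implements VMX; whether the BIOS has actually left the feature enabled is a separate question.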

What does this all mean for VMware? Opinions vary, of course. When VT and Pacifica were first announced, there was a lot of knee-jerk slashdot triumphalism of the form, "Ha! We don't need VMware anymore because it will all be in hardware!!!" Of course, there's a lot more to VMware's software than just multiplexing CPUs. There's memory, a chipset, peripherals, undoable disks, virtual networks hooked up in complicated topologies with configurable bitrates and lossiness, and all sorts of other stuff that's hard to imagine doing in hardware.

It is true that Pacifica and VT sidestep the classical impossibility result about x86 virtualization. On the one hand, if VMware hadn't figured out how to square that circle in 1998, I don't think we'd be where we are as a company today. But, on the other hand, we're far past the point where VMware lives and dies by a single systems programming parlor trick. Raw technology solutions for multiplexing an x86 CPU are already freely available. See, for example, QEMU. But, as cool as QEMU is, it doesn't really directly compete with VMware Workstation, because, e.g., you can't sync your iPod with a multiprocessor Windows guest running inside of QEMU.
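For those wondering what that impossibility result looks like in practice: the (pre-VT) x86 has instructions that are sensitive but unprivileged, meaning they expose privileged state yet don't trap from user mode, so a classical trap-and-emulate VMM never gets a chance to intervene. A tiny C sketch, guarded so it still compiles on non-x86 machines:

```c
#include <assert.h>
#include <stdint.h>

// smsw ("store machine status word") is one of x86's sensitive-but-
// unprivileged instructions: it leaks privileged state (the low bits
// of CR0) to ring-3 code without faulting, which violates the
// requirements for classical trap-and-emulate virtualization.
static int smsw_low_bits(uint32_t *out) {
#if defined(__x86_64__) || defined(__i386__)
    uint32_t msw = 0;
    __asm__ volatile("smsw %0" : "=r"(msw));
    *out = msw;
    return 1;                 // executed in user mode, no trap
#else
    (void)out;
    return 0;                 // not x86; instruction unavailable
#endif
}
```

A guest kernel running deprivileged under a trap-and-emulate VMM could execute smsw and silently see the host's real CR0 bits rather than its virtual ones, with the VMM none the wiser.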

In 1998, VMware Workstation 1.0 was a singing dog: the miracle wasn't that it sang well, but that it sang at all. However, in the intervening seven years, our software has become more than a curiosity. Stretching the analogy far past where wisdom suggests, we've taught that dog to be a colorful, original interpreter of Western music's canon, and we expect that dog to soon shock the world by crafting original, poignant compositions. Who cares if the landscape is cluttered with other mutts practicing their scales?

Wednesday, November 02, 2005

SGI Prepares to Shuffle Off...



I interned at SGI, then "Silicon Graphics, Inc.", during a wonderful summer of 1998. IRIX 6.5 was shipping. Linux was still a joke.

I met some amazing engineers, whose influence has continued to guide me. I learned a lot about UNIX internals, big NUMA systems, and how software really works in the process of trying to prove myself to Jeff Heller's real-time/pthreads group. I also began dating the lovely woman who is now Mrs. Adams that summer. So, I'll be spilling a bit of this coming beerbash's NorCal yuppie ale du jour in memoriam. Good bye, SGI. Thanks for all the cool cases.

Tuesday, November 01, 2005

Xen and RedHat

These days, when fellow techy sorts find out I work for VMware, they often want to know what I think about Xen. With yesterday's RedHat PR event containing a protracted mash note to Xen, and Slashdot boiling over with the usual speculation, it seems particularly topical today to answer this FAQ. I'd like to especially, super-duper emphasize that this is just me babbling, and has nothing to do with anything officially believed, encouraged, advocated, etc., by my employer.

For those just tuning in, Xen technically differentiates itself from VMware by the somewhat lugubriously named technique of "paravirtualization." This term refers to co-designing a guest OS along with the virtual machine monitor, to optimize the fit between them. Obviously, this isn't always possible. If you intend to run an OS whose source code is inaccessible, or perhaps doesn't even exist in electronically readable form anymore, paravirtualization is not an option. Paravirtualization contrasts with the "black-box" virtualization practiced by classical VMMs, wherein the OS is carefully kept unaware that the VMM exists, and the VMM in turn has little semantic knowledge about the OS.

I had been at VMware for a while when I first heard the term, and paravirtualization initially struck me as a bit of a hack; i.e., it looks like a way to get some of the advantages of virtual machines, without having to solve some of the ludicrous problems that the x86 presents for classical VMMs. However, after talking to the L4 team a couple years ago, I have been won over, with some qualifications, to the view that paravirtualization can be a legitimate result of conscious system design. In some ways, paravirtualization shows the way towards a unifying scheme for CPU architectures, OS interfaces, "artificial" virtual machines like Java, etc. After all, what is a UNIX process if not a "virtual machine" with some convenient extra capabilities for writing performant applications?
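As a sketch of what that co-design looks like in code, consider a guest kernel that routes privileged operations through an ops table installed at boot, rather than executing privileged instructions directly and relying on the VMM to trap and decode them. Every name below is invented for illustration; it is the shape of the interface, not any real kernel's:

```c
#include <assert.h>

// Hypothetical paravirtual ops table: the guest kernel picks a backend
// once at boot, then all "privileged" operations go through the table.

struct pv_ops {
    unsigned long (*read_cr3)(void);      // page-table base register
    void (*write_cr3)(unsigned long v);
};

static unsigned long fake_cr3;   // stands in for the real register
static int hypercalls;           // how often we "called the hypervisor"

// Native backend: on bare metal these would be mov to/from %cr3.
static unsigned long native_read_cr3(void)    { return fake_cr3; }
static void native_write_cr3(unsigned long v) { fake_cr3 = v; }

// Paravirtual backend: an explicit hypercall, nothing to trap or decode.
static unsigned long hv_read_cr3(void)        { return fake_cr3; }
static void hv_write_cr3(unsigned long v)     { hypercalls++; fake_cr3 = v; }

static struct pv_ops ops;

static void pv_init(int on_hypervisor) {
    if (on_hypervisor) {
        ops.read_cr3 = hv_read_cr3;      ops.write_cr3 = hv_write_cr3;
    } else {
        ops.read_cr3 = native_read_cr3;  ops.write_cr3 = native_write_cr3;
    }
}
```

The design trade is explicit: the guest must be modified to call through the table, but in exchange the VMM gets a clean, cheap interface instead of having to intercept and reverse-engineer privileged instruction streams.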

We can classify any software system that wraps underlying hardware along a continuum of levels of abstraction. A purist black-box VMM, which exports a software interface identical to the underlying hardware, à la the golden-age IBM VM/360 systems, sits at the lowest edge of that continuum, while something like a JVM, which exposes no details at all of the underlying hardware, sits at the top.


(Trivia: Note that VMware's ESX Server doesn't quite sit as near the bottom as the VM/360 machines, because some aspects of the underlying hardware are mutated by the VMware virtualization layer. For instance, the vast diversity of hardware on current PCs is normalized to a basis set of hardware we're comfortable emulating. When easy opportunities to "lightly paravirtualize" the guest, e.g., by using an abstraction of a video or SCSI card, rather than a real hardware model, have arisen, VMware has taken those opportunities. Still, the interface exported by ESX Server is about as close as is practical to that exposed by bare metal.)

Populating the middle ground between JVMs and bare metal, we find a bunch of familiar system-construction paradigms: traditional OSes, which provide a virtual machine that has high-level semantics (such as files, processes, threads, etc.), but is still machine-language programmable; and microkernels, which provide a slightly less high-level abstraction out of the box. A paravirtualized VMM is simply a different point on this continuum, somewhere between the bare-metal VMM provided by something like VMware ESX Server, and the higher-level (though still pretty low-level) interface provided by an exokernel.

So, that's Xen in a nutshell. It sits at a different point in the continuum of system design than current VMware products do. It offers some of the advantages of VMware products (unmodified application binaries) without offering others (unmodified system-level binaries). The Xen guys are fond of implying that they'll get around to offering those other advantages, perhaps with some coy references to VT and SVM. For various reasons, I think the technical obstacles to achieving parity with VMware's offerings are greater than they realize. Then again, maybe they've started to realize it in the process of slipping Xen 3.0 from August to December. (Not to be too smug; we've slipped releases, too. So has everybody.)

On a personal note, I've had the pleasure of hanging out with various Xen movers and shakers. They're smart people interested in solving problems, and, fairly enough, would like to make a few pounds sterling doing it. They can also really hold their liquor. So, I wish them luck. I've got no problem at all with a little competition. It's good for our customers, good for virtualization as a whole, and keeps my job more interesting. So, bring it on, Xennies! I dare you to make Xen 3.0 as good as you know how.