Thursday, November 3, 2011

Open Virtualisation (oVirt) Community Launch underway

For those with a deep interest in open virtualisation solutions, you will probably have noticed that the oVirt project is having (or has had, depending on when you read this) its community launch.

I've been waiting for this for quite some time and it's finally here.  It's a pity I couldn't attend as I'm really keen to contribute.

So what is the project and community about?

Paraphrasing from material released during the workshop, the project will deliver a complete and cohesive virtualisation platform including hypervisor, engine, API and GUI. It will allow all users to easily deploy and manage KVM-based virtual machines.

The project is bootstrapped by Red Hat open-sourcing its virtualisation management assets, with the intention of creating a vibrant upstream community.

It builds on existing open source code such as:



  • KVM  - hypervisor
  • oVirt Node - small image designed to run virtual machines
  • virt tools - Management API for KVM (& other hypervisors)


and introduces new community projects:


  • oVirt: Engine - back end virtualisation management server
  • oVirt: VDSM - Node Management API used by oVirt engine or other applications to manage the hypervisor.
  • oVirt: Web Admin - HTML / AJAX based GUI for managing oVirt Platform
  • oVirt: User Portal - HTML based user portal for non-admin use cases
  • oVirt: API - open API for management and integration with other tools
  • oVirt: Reports - Reporting & Dashboards based on Jasper Reports

If you're not excited by that, I'm sorry, but I certainly am :-)

Tuesday, September 6, 2011

KVM on Illumos

Illumos, for those who don't know, is the community fork of OpenSolaris. OpenIndiana is a robust operating system distribution based on Illumos. Now that we've got the introductions out of the way, what's the deal with KVM?

Engineers at Joyent have delivered what appears to be one of the most significant additions to the OpenSolaris platform since the fork: the porting of the Linux KVM capability to the Illumos kernel - a substantial effort.

It's a great achievement, but it's not a complete port at the moment. The focus is currently only on systems with Intel VT-x extensions; AMD SVM isn't on the current roadmap, but no design choices were made during the port that would exclude that possibility. From a performance perspective, the port is on par with Linux KVM in some of the simpler, more abstract benchmarks. No formal testing, such as SPECvirt, has been performed as yet to get a more realistic understanding of the performance.

Notably, and deliberately, there is no guest memory overcommit, no KSM providing memory de-duplication and no nested virtualisation. The latter is a feature I really enjoy in my test lab.

It's not all about missing features though: the Illumos KVM port has added CPU performance counter virtualisation and implements all KVM timers using the cyclic subsystem. Of course, Illumos has ZFS as its underlying file system, and for a hypervisor platform ZFS brings some great features.

From a security perspective, the QEMU guest is run inside a local zone. This gives further isolation of the guest from the underlying system, as well as providing resource management, I/O throttling, billing and instrumentation hooks - a nice approach. Linux tackles this in a different way, with SELinux and cgroups for example. The container capability is exploited further for network virtualisation: a vnic is created for each KVM guest and inherits the container's vnic capabilities, such as anti-spoofing and resource management. Further enhancements were made in the area of kernel stats and, of course, DTrace.

This is a great addition to the KVM community. It's a different approach, and I'm certain the injection of fresh ideas and concepts is only going to enrich KVM's capability.

Sunday, September 4, 2011

Message-based frameworks and 'Cloud Computing'

Several years ago I thought to myself that, for many if not all enterprise environments, a message architecture or framework was the ideal way of performing enterprise-wide administration of heterogeneous server environments in a standardised manner. Now, of course, we have things like the ESB. The problem is that those who generally look after server farms don't really understand the power of an ESB, and maintaining an ESB has its own unique challenges :-)

Consider the enterprise server management problem at the moment. We have thousands of servers and applications, including a variety of monitoring and measuring equipment and applications. They all typically use a client-server style model to report or exchange information with a central server. They all have their own built-in security mechanisms (if you're lucky :-) ). They all have their own ports, requiring firewall holes punched in a variety of different manners. All of this just lends itself to complexity.

The idea many years ago was to stop doing that sort of individual rubbish and instead have everyone basically place messages into a messaging framework that would in turn handle security, priority and delivery.  Imagine how neat and tidy that would make the managed environment, not to mention the flexibility you could have.

This of course doesn't just apply to monitoring and reporting tools. A messaging framework has messages travelling in multiple directions. An authorised conversation could be used to trigger an action on a target server - start processes, create user IDs, reboot a host and so on. The list is limitless.

So what is the link to virtualisation and cloud computing?   Imagine the power of a messaging framework in such an environment.  An application could create a message to spawn new server instances and they could be routed through to the application that can actually clone systems from templates etc (yes, of course there will be business logic wrapped around this for a variety of reasons).  A business aware event management system may see that a VM is unresponsive from an application perspective and choose to reboot it.    You do not want to have each application being able to directly interface with your cloud management infrastructure,  you want an event/action message created, appropriately approved and subsequently delivered to an application that will perform the desired action correctly and reliably.

Sounds good, so what? Well, it's beginning to appear. While not technically what I would consider an open system, VMware now has AMQP plugins for its orchestration tool (vCO). This allows you to achieve the scenario I mentioned above: an application doesn't need to know how to provision new servers, it simply posts a message that triggers the workflow and receives a message back when it is complete. The same could be done as a result of a monitoring event.
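To make that concrete, here is a minimal sketch of what "just post a message" can look like from the application's side. The broker address, exchange, routing key and message format below are purely illustrative assumptions of mine (using the pika AMQP client library), not vCO's or oVirt's actual interface:

    # Illustrative sketch only: broker address, exchange, routing key and
    # message schema are made-up placeholders, not a real vCO/oVirt interface.
    import json
    import pika  # AMQP client library

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="broker.example.com"))
    channel = connection.channel()

    # The application doesn't know how to clone templates; it just describes
    # what it wants and lets the provisioning workflow on the far side act.
    request = {
        "action": "provision_vm",
        "template": "rhel6-web-template",
        "count": 2,
        "requested_by": "billing-app",
    }
    channel.basic_publish(exchange="provisioning",      # assumed exchange
                          routing_key="vm.provision",   # assumed routing key
                          body=json.dumps(request))
    connection.close()

The point is that the publisher needs no knowledge of the provisioning tool, its API or its location - the messaging framework handles delivery, and the workflow engine consuming from that exchange does the real work.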

The oVirt node (upstream of RHEV-H) has Apache Qpid as part of its stack. In this case the goal is to implement the Qpid Management Framework (QMF), QMFv2 to be more precise. The architecture behind QMFv2 is basically what I was describing above.

In the case of oVirt, it is intended that the Matahari upstream project will provide a set of management agents that utilise QMF and the underlying Qpid (AMQP) messaging system - neat stuff!

If you're designing cloud APIs and management tools - think messaging, think messaging framework, and let someone else do the heavy lifting for you.



Tuesday, August 30, 2011

SPECvirt + my hypervisor is better than yours

I've seen a number of presentations recently where companies are putting up SPECvirt data and drawing some interesting conclusions. I am not a SPECvirt expert, but it seems to me that they're not comparing and contrasting the data properly. Here's an example.


Note: There is absolutely nothing wrong with this image. The interpretation by some people is what appears to be wrong. I've heard a number of speakers (who are *not* associated with Red Hat, by the way) implying that the data shows KVM being substantially better than VMware ESX.

Note: the above is just an example of the type of metrics I've seen; it is not the actual chart in question - it was just one I found to hand :-)

So what am I saying here? From what I understand, the above chart is simply a chart of SPECvirt numbers. It cannot be used as a comparison between ESX and KVM because the underlying hardware is different.

If we look at the SPECvirt results themselves, there are only a couple of comparisons between KVM and ESX on the same hardware. Look for the Hewlett Packard Company ProLiant DL580 G7 results:

  • RHEL 6.1 (KVM): 3802 @ 234
  • VMware ESX 4.1: 3723 @ 228

These numbers are pretty close (a difference of roughly 2%) and I haven't seen enough data to understand if the difference between the two is in the noise for the benchmark.
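For what it's worth, the back-of-the-envelope arithmetic on those two published scores looks like this:

    # Relative difference between the two published SPECvirt scores above
    kvm_score = 3802
    esx_score = 3723
    relative_diff = (kvm_score - esx_score) / float(esx_score)
    print("KVM leads ESX by %.1f%%" % (relative_diff * 100))  # roughly 2.1%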

I think we can safely say that the performance is probably the same, with maybe an edge to KVM. I think we're likely to see bigger differences within a platform when, for example, in KVM you use vhost_net drivers, remove non-required hardware from the guests, enable huge pages, use PCI pass-through and so on to optimise platform performance - that could easily lead to a 10% performance change.

SPECvirt is really good and is a fantastic tool for determining the relative performance impact of tuning actions within a hardware platform. Please don't abuse the numbers.

If I'm wrong, please tell me and hopefully tell me why :-)

Sunday, August 28, 2011

Oracle VM 3.0 hits the streets

It seems many of the hypervisor vendors are displaying their wares at the moment. Oracle VM 3.0 is out now with some greatly appreciated additional functionality. Of course, anyone who has used certain Oracle products on Oracle VM will be wondering how you manage pinned CPUs in this new environment - more to come, I'm sure.


New features include:
  • Distributed Resource Scheduling for capacity management, providing real-time monitoring and rebalancing of a server pool
  • Distributed Power Management to reduce the number of powered-on servers
  • Centralized network configuration and management using Oracle VM Manager
  • Server and storage discovery
  • Xen 4.0 hypervisor
  • Updated Dom0 command and control kernel
  • Support for up to 160 CPUs and 2 TB of memory per physical server
  • Support for up to 128 vCPUs per virtual machine
  • OCFS2 1.8 cluster file system
  • Support for the Open Virtualization Format (OVF) virtual machine file format
  • Browser-based Oracle VM Manager GUI
  • Job management framework
  • Extensive event logging
  • Performance statistics for CPU, memory, disk and network for physical servers and VMs

Saturday, August 27, 2011

RHEV 3.0 - it's in BETA and almost here!

Yes, that's right. RHEV 3 is almost here, and it's currently in beta. Unfortunately, I've not yet been able to try it out directly, but I've done quite a bit of research into the underlying architecture and have been working with some of the upstream components for a little while now.

The press release reads as follows:

-------------------------------------


Today’s Beta of Red Hat Enterprise Virtualization 3.0 previews several key enhancements, including:
  • Red Hat Enterprise Virtualization Manager is now a Java application running on JBoss Enterprise Application Platform on Red Hat Enterprise Linux
  • An updated KVM hypervisor based on the latest Red Hat Enterprise Linux 6
  • Industry-leading performance and scalability levels, supporting up to 128 logical CPUs and 2TB memory for hosts, and up to 64 vCPUs and 2TB memory for guests
  • A power user portal that allows end users to provision virtual machines, define templates and administer their own environments
  • A RESTful API that allows all aspects of Red Hat Enterprise Virtualization to be managed and configured programmatically
  • New multi-level administrative capabilities, improving product functionality for very large deployments
  • New local storage capabilities
  • An integrated and embedded reporting engine allowing for analysis of historic usage trends and utilization reports
  • SPICE WAN optimization and enhanced performance including dynamic compression and automatic tuning of desktop effects and color depth.  The new version of SPICE also features enhanced support for Linux desktops.



-------------------------------------

It'll be interesting to see how it stacks up against the market leader, vSphere. Performance-wise the hypervisors are definitely on par, with KVM actually performing a little better according to the SPECvirt benchmarks.

The 'secret sauce' in this market is the management tooling, and it's clear there has been significant effort in this area with the RHEV-M re-platforming. Hopefully that energy can be redirected, post-conversion, towards increasing the functionality.

There are still some operational quirks in this release, though the installation on RHEL 6 now looks quite simple (yum install rhevm). Unfortunately, you still need a Windows client running Internet Explorer to use the web GUI. I'm led to believe that this WPF requirement finally disappears with 3.1, when it is replaced with a GWT functional equivalent.
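The feature I'm most keen to try is the RESTful API mentioned in the press release. As a rough sketch of what driving it programmatically might look like - the manager hostname, credentials, port and certificate handling below are assumptions on my part, so check the API documentation for the exact entry points in your deployment:

    # Rough sketch: list the VMs known to the manager via the REST API.
    # Hostname, credentials and TLS handling are placeholders.
    import requests

    BASE = "https://rhevm.example.com/api"     # assumed manager address
    AUTH = ("admin@internal", "password")      # assumed credentials

    resp = requests.get(BASE + "/vms", auth=AUTH, verify=False)
    resp.raise_for_status()
    print(resp.text)                           # the response body is XML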

Thursday, June 16, 2011

IDC whitepaper on "KVM for Server Virtualization"

IDC has just released a whitepaper on "KVM for Server Virtualization: An Open Source Solution Comes of Age" which you can find over here in PDF format.

KVM has made tremendous improvements in the four years it has been around - yes, it has only been four years, which is quite remarkable really! You can easily argue that the vibrant open source ecosystem is the main reason for the rapid development, and that is certainly part of it, but it doesn't hurt to have companies with a strong commercial interest being part of the community - the IDC report being funded by IBM, and IBM contributing 60+ developers to KVM development, isn't going to hurt either :)

The hypervisor is only part of the equation. I'm a big fan of the KVM philosophy of not re-inventing the wheel and using whatever is already in the Linux kernel - advanced schedulers, memory management and I/O subsystems, all wrapped up in SELinux to provide VM isolation - a fantastic start for a hypervisor. Of course, this is only part of the overall virtualisation solution, and the big-ticket items are the management tools. KVM has libvirt and, to be honest, it's just OK, but if there are aspects of it I don't like (which there are) then I'm free to get involved and submit code, or even suggest features which may capture the imagination of some developers.
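For a small taste of what the libvirt API gives you, here is a minimal sketch using the libvirt Python bindings to list the running guests on a host. It assumes the bindings are installed and that qemu:///system is the right connection URI for your setup:

    # Connect to the local QEMU/KVM driver and list the running guests.
    import libvirt

    conn = libvirt.open("qemu:///system")      # assumed connection URI
    if conn is None:
        raise SystemExit("failed to connect to the hypervisor")

    for dom_id in conn.listDomainsID():        # IDs of running domains
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print("%s: %d vCPUs, %d KiB memory" % (dom.name(), vcpus, mem))

    conn.close()

It's a fairly low-level API, which is exactly why the management layers built above it matter so much.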

Commercial deployments of KVM, such as RHEV from Red Hat, build on top of the basic libvirt and libguestfs APIs and deliver more robust management solutions - hopefully all of which will become upstream projects so that I can contribute in some small way to those.

Is KVM the equivalent of VMware ESXi? Not yet; perhaps when RHEV 3.0 is released the feature gap will diminish. Performance-wise, the SPECvirt benchmarks show that KVM is at least equal to (and arguably better than) ESX.

For many companies, KVM through the commercial product RHEV is right for them - right now. RHEV 3.0 will extend that suitability to many more companies. I'm looking forward to getting a look at it and, importantly to me, contributing.

Friday, June 10, 2011

SMEP and KVM – sounds interesting


Recently a patch was dropped into the KVM community adding support for the Intel SMEP CPU feature (if available on the CPU). I thought to myself, what the hell is SMEP?
According to the Intel Software Developer's Manual it is "Supervisor-Mode Execution Prevention". This sounds like a great thing: the kernel is prevented from executing 'user data' in kernel mode - i.e. if an exploit delivers a page of data and asks the kernel to execute it, then this won't happen and a fault will be triggered instead. It's a neat piece of work, and as it's all hardware based there should be little overhead.
If, like me, you're wondering whether your system has the SMEP CPU feature, then this code will show you. Don't be disappointed if your CPU doesn't have it - it's a very new feature and I can't even find which CPUs implement it.
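If you just want a quick look on a Linux box, a minimal sketch is to check the CPU flags the kernel reports. This assumes your kernel is recent enough to know about the flag at all - an older kernel may simply not list 'smep' even on a CPU that supports it:

    # Check /proc/cpuinfo for the "smep" CPU flag.
    def cpu_has_smep(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "smep" in line.split(":", 1)[1].split()
        return False

    print("SMEP supported" if cpu_has_smep() else "SMEP not reported")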
Anyway, it's a step in the right direction, and that direction will hopefully allow hypervisors to be that little bit more secure from untrusted VMs and provide a VM 'shell' environment that's a little more secure for the VMs. Unfortunately, as things currently stand, the usefulness for KVM is unlikely to be immediately realised, as Intel engineers suggest that enabling SMEP without a guest VM's knowledge is likely to be 'problematic'.