Virtualization is a
widely adopted solution. Around 75 percent of organizations are using or
evaluating virtualization and seeing its advantages for server consolidation,
centralized management, and cost reduction due to reduced hardware, power, and
cooling requirements. As these benefits drive profit, companies want to
virtualize more demanding workloads. They want more powerful and flexible
virtualization solutions that are better integrated with their management
tools. Wide adoption of 64-bit, multi-processor, multi-core servers spurs
demand for virtual machines that are better able to take advantage of more scalable
server hardware.
In light of these
developments, Microsoft created Hyper-V, a next-generation, hypervisor-based virtualization
technology that provides a reliable virtualization platform and integrated
management that enable customers to virtualize their infrastructure and reduce
costs.
Key Benefits
Windows Server 2008
Hyper-V technology simplifies the interaction between hardware, operating
systems, and virtual machines, while simultaneously strengthening the core
virtualization components.
Reliability
Hyper-V provides the reliability and scalability needed to virtualize your infrastructure. Its thin, micro-kernelized hypervisor architecture presents a minimal attack surface and contains no third-party device drivers; instead, it leverages the vast set of device drivers that have already been built for Windows. Hyper-V is also available as a Server Core role.
Strong Isolation
Server
virtualization enables potentially resource- and control-intensive applications
to coexist on the same server. Virtual servers must be able to do their work
with as much flexibility as possible, leveraging as much hardware capacity as
they need, without conflicting with other virtual servers.
Hyper-V
works with virtualization-aware hardware to tightly control the resources
available to each virtual machine. For example, virtual machines are isolated
in a way that gives them very limited exposure to other VMs on the network or
on the same computer.
Security
Security is a central
challenge in every server solution. Virtual servers are at least as exposed as
their stand-alone counterparts and, in many ways, more exposed. For example,
multiple server functions on one computer can mean more administrators have
access to that computer. Third-party software and drivers can present security
risks as well, so it’s important to make sure that, if a virtual machine is
compromised, it has limited exposure to other virtual machines on the same
physical server.
Virtualization
provides an opportunity to increase security for all server platforms. Features
that
Hyper-V uses to enhance security include:
- Enabling VMs to take advantage of hardware-level security features, such as execute disable bit (preventing execution of the most prevalent viruses and worms), available in newer server hardware.
- Providing strong role-based security to prevent exposure of secure VMs through shared servers.
- Integrating network security features that enable automatic Network Address Translation (NAT), firewall, and Network Access Policy protection (quarantine).
- Reducing the attack surface through a streamlined, lightweight architecture.
Performance
Performance
advances and integration with virtualization-aware hardware enable Hyper-V to
virtualize much more demanding workloads than previous virtualization solutions
and to give them more resources for greater scalability.
Performance
advancements include:
- Speed enhancements through lightweight, low-overhead virtualization hypervisor architecture.
- Multi-core support, giving each VM access to as many as four logical processors.
- Enhanced 64-bit support, enabling VMs to run 64-bit operating systems and to access very large amounts of memory (up to 64 GB per VM), enabling more resource-intensive workloads and helping avoid slowdowns due to paging.
- Microkernelized hypervisor architecture, enabling VMs to cut out layers of emulation and drivers, working more closely with virtualization-aware hardware.
- A high-performance, hardware-sharing architecture that optimizes data transfer between physical hardware and virtual machines.
New Microkernelized Hypervisor Architecture
Hyper-V uses 64-bit
hypervisor-based technology to give VMs running Windows Server 2008, Windows
Server 2003, specific Linux distributions, or Xen-enabled Linux the ability to
work as closely with CPUs and memory as possible in a shared environment,
vastly increasing performance.
Hypervisor-based
virtualization is the latest stage in virtualization technology’s evolution,
from emulated environments, which began more than 30 years ago, to today’s
hardware-enhanced, close-to-bare-metal virtualization.
Basic
virtualization (Type 2 virtual
machine) places a thick, relatively slow layer of abstraction between hardware
and guest operating systems. This approach is called hosted virtualization. The virtual machine monitor (VMM) runs as an
application on an operating system, and each VM runs on top of the VMM. As a
simplified example of the overhead involved in this type of virtualization, a
hardware call from a guest operating system’s device drivers:
- Goes first to the emulated virtual hardware managed by the VMM.
- Is routed by the VMM to the host operating system.
- Is routed by the host operating system to the hardware's device driver.
- Is passed by that device driver to the physical hardware.
The process happens
in reverse for any responses from the hardware.
Newer, hybrid virtualization architectures, including the one used in Virtual Server, run side by side with the server operating system.
In Type 1 virtual machine monitors, the
hypervisor sits at the level closest to the hardware, sometimes called the bare-metal level.
There are two kinds
of hypervisor architectures – monolithic hypervisors and micro-kernelized
hypervisors (see graphic below). The monolithic hypervisor model still places
large amounts of code between hardware resources and virtual machines, because
the virtual machine monitor emulates hardware for its VMs. When a guest
operating system makes a hardware call through its device drivers:
- The VMM’s emulated hardware intercepts the call.
- The VMM routes it to the device drivers, necessitating numerous expensive context switches.
- The device drivers route it to the physical hardware.
This approach,
called a monolithic hypervisor, includes hardware drivers in
the hypervisor. Examples of monolithic hypervisors include VMware’s ESX Server.
Windows Server 2008
Hyper-V uses a micro-kernelized hypervisor model. In a micro-kernelized
hypervisor, the only layer between a guest operating system and the hardware is
a streamlined hypervisor with simple partitioning functionality. The hypervisor
has no third-party device drivers. In addition to improved performance, it has
an inherently more secure architecture with a minimal attack surface. The
drivers required for hardware sharing reside in the host operating system,
which provides access to the rich set of drivers already built for Windows.
Figure 1. Approaches to hypervisors: a monolithic hypervisor contains its own driver stack as part of the hypervisor; a microkernelized hypervisor has a minimal hypervisor layer, leverages the parent partition, and provides an inherently more secure architecture with a minimal attack surface.
Leveraging Virtualization-Aware Hardware
The new generation
of 64-bit server hardware includes virtualization-aware processors.
Intel® Virtualization Technology and AMD Virtualization (AMD-V) are able to manage some memory- and hardware-sharing functions that would otherwise be left to the server’s virtualization management software.
Hyper-V requires a processor with
hardware-assisted virtualization functionality, enabling a much more compact
virtualization codebase and associated performance improvements.
With the
availability of these new processors and a new, hypervisor-based virtualization
architecture, Hyper-V is able to put virtualized applications as close to bare
metal as possible. This enables virtualized applications to take advantage of
features like multi-core processing that would be available on a standalone,
physical server but haven’t up to this point been available inside a virtual
machine.
One benefit of the new approach is that the single-processor, single-core VMs of previous solutions are supplanted by support for up to four logical processors per VM in Hyper-V.
Table 1. Virtual Server compared with Hyper-V

|                                     | Virtual Server          | Hyper-V                                                                              |
|-------------------------------------|-------------------------|--------------------------------------------------------------------------------------|
| Processor support                   | 1 processor/core per VM | Up to 4 logical processors per VM; up to 16 processing cores in the physical machine |
| Types of virtual machines supported | 32-bit VMs              | 32-bit and 64-bit VMs, running simultaneously                                        |
| Maximum memory per virtual machine  | 3.6 GB                  | Up to 64 GB                                                                          |
Simplified Management with Familiar Tools
Microsoft
has added functions to Hyper-V that enhance management capabilities:
- Simplifying management by replacing product-specific tools (browser interface) with industry-standard tools (Microsoft Management Console [MMC] interface)
- Automating tasks and event response to minimize human interaction wherever possible
- Performing extensive monitoring to keep administrators aware of issues before the issues become problems
From
a network management standpoint, virtual machines should be easier to manage
than physical computers. To this end, Hyper-V includes many management features
designed to make managing virtual machines simple and familiar, while enabling
easy access to powerful VM-specific management functions.
Hyper-V
can be managed in three ways:
1) MMC interface
2) Microsoft System Center
3) Third-party management tools
MMC Interface
Hyper-V
moves from the browser-based remote management used in Virtual Server to a
standard
MMC 3.0 interface. With Windows Server 2008 Hyper-V, VMs and servers are configured through a familiar and widely used management interface. Benefits of this standardized approach include:
- Broad industry support, reducing the learning curve experienced when moving from managing physical computers to managing VMs.
- Enabling VM management with enhancements from third-party management console plug-ins.
- Ability to enhance the MMC with user-created Windows® PowerShell™ commandlets.
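The MMC snap-in and the scripting interfaces sit on top of the same underlying management interfaces, so administrators can mix the graphical console with simple scripts. The following is a minimal, illustrative sketch (not taken from the text above) that assumes the WMI namespace Hyper-V exposes in Windows Server 2008 and lists the virtual machines on a host from Windows PowerShell:

```powershell
# Minimal sketch: list the VMs on a Windows Server 2008 Hyper-V host through the
# Hyper-V WMI provider (root\virtualization). The filter excludes the host itself,
# which also appears as an Msvm_ComputerSystem instance.
$vms = Get-WmiObject -Namespace "root\virtualization" `
                     -Class Msvm_ComputerSystem `
                     -Filter "Caption = 'Virtual Machine'"

# ElementName is the VM's display name; EnabledState 2 = running, 3 = off.
$vms | Select-Object ElementName, EnabledState | Format-Table -AutoSize
```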
Microsoft System Center
Microsoft
System Center, a suite of system and server management tools, manages all of
the Microsoft virtualization offerings as well as networks’ physical resources.
System Center provides a single set of integrated tools to manage both physical
and virtual environments. System Center is designed to help businesses create
self-managing dynamic systems, where the management and monitoring tools are
able to diagnose and address problems with as little human interaction as
possible.
System
Center includes a virtualization-specific management tool, System Center
Virtual Machine Manager, as well as virtualization functions in its other
tools.
System Center Virtual Machine Manager
System
Center Virtual Machine Manager (SCVMM) provides centralized and powerful
management, monitoring, and self-service provisioning for virtual machines.
SCVMM
host groups are a way to apply policies and to check for problems across
several VMs at once. Groups can be organized by owner, operating system, or by
custom names (such as “Development” or “Production”).
In
the SCVMM interface, selecting a virtualization host server displays a list of
its VMs. Select a specific VM to show its CPU and memory
usage, as well as a live-updating thumbnail. The interface also incorporates
Remote Desktop Protocol (RDP); double-click a VM to bring up the console for
that VM—live and accessible from the management console.
System Center Virtual Machine Manager Main Features
- Host configuration: Host setup and configuration can be automated, including global settings such as storage (Virtual Hard Disk [VHD] paths) and VM Additions.
- Virtual machine creation: A wizard-based user interface enables rapid VM creation, including physical-to-virtual (P2V) conversion and templates. The virtual-to-virtual (V2V) conversion in SCVMM can convert VMware ESX VMs (VMDK format) to Hyper-V VMs (VHD format). SCVMM includes the ability to save VM definitions as templates for rapid deployment.
- Library management: SCVMM can store and manage offline VMs, templates, and ISO images, enabling rapid VM deployment. It can create, update, delete, and store objects in the library without launching the associated VMs.
- Virtual machine placement and deployment: SCVMM can provide recommendations for where to place VMs, based on host capacity and utilization, facilitating movement (including Quick Migration) of VM files over a local area network (LAN) or storage area network (SAN).
- Monitoring and reporting: SCVMM provides a centralized view of all VMs in the environment and their status, customizable by host and VM groupings and scalable to thousands of VMs. Integrated tools provide complete reporting and health monitoring for both VMs and physical machines. Standard reports include consolidation candidates, utilization trending, and optimization opportunities.
- Rapid recovery: VM snapshots and live backup help departments quickly recover from outages.
- Self-service provisioning user interface: Instead of requiring an administrator to create and configure VMs, the SCVMM self-service interface enables users to create and delete VMs themselves. Administrators set the rules, boundaries, and permissions for self-service provisioning.
- Automation: SCVMM contains a completely scriptable user model based on Windows PowerShell and includes the ability to view the Windows PowerShell script for each action, enabling administrators to develop scripts for complex actions (see the sketch following this list).
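As a hedged illustration of the Automation feature above, the sketch below shows what a scripted SCVMM query might look like from Windows PowerShell; the snap-in and cmdlet names are assumed from the SCVMM PowerShell interface, and the server name is hypothetical:

```powershell
# Illustrative sketch only: assumes the SCVMM Windows PowerShell snap-in is installed.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

# Connect to the VMM server (hypothetical name), then list each managed VM
# with its status and owner.
$vmmServer = Get-VMMServer -ComputerName "vmm01.contoso.com"
Get-VM -VMMServer $vmmServer |
    Select-Object Name, Status, Owner |
    Format-Table -AutoSize
```

Because SCVMM can display the Windows PowerShell script behind each console action, scripts like this can also be captured from the interface and extended into more complex, repeatable operations.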
Microsoft System Center Operations Manager
Microsoft
System Center Operations Manager (SCOM) 2007 monitors the health and
performance of physical and virtual workloads. Administrators have powerful
tools, such as at-a-glance status, highly customizable alerts, and integrated
configuration management, to respond to issues immediately and can enable
automated response without administrator involvement. For example, when a
virtual machine shows network saturation, SCOM might respond with a script to
add a network adapter and restart the VM with more available bandwidth. A
virtual machine overloading its processor or paging excessively could get
additional logical processors or memory.
Third-party management solutions
In addition to the options above, Hyper-V exposes management APIs, including its WMI interface, that third-party management solutions can use to manage Hyper-V.
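As a hedged illustration of the kind of call such a solution can make, the sketch below starts a virtual machine through the Hyper-V WMI interface; the VM name is hypothetical:

```powershell
# Illustrative sketch: start a VM through the Hyper-V WMI interface.
$vm = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem `
                    -Filter "ElementName = 'TestVM01'"   # hypothetical VM name

# RequestStateChange(2) requests the running state; 3 would turn the VM off.
$result = $vm.RequestStateChange(2)

# 0 = completed; 4096 = request accepted and a job was started asynchronously.
$result.ReturnValue
```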
Integrated Virtualization
Microsoft offers
customers a complete set of virtualization products, from the data center to
the desktop. As discussed, all assets – both virtual and physical – can be
managed with our System Center management platform.
Hyper-V is a key component
of Microsoft’s complete virtualization solution suite. Virtualization is a key
pillar of the Microsoft Dynamic Systems Initiative (DSI), embedding operational
knowledge in the management tools, and enabling the system to manage and even
heal itself. (See “Microsoft System Center Integration and the
Dynamic Systems Initiative,” below.)
Figure 2. The
Microsoft end-to-end virtualization strategy enables centralized management for
virtual and physical assets through Microsoft System Center.
Presentation
virtualization through Microsoft Terminal Services enables remote users to
access applications and operating systems hosted from remote locations. A
common usage model is accessing the corporate data from home or while
traveling, giving the remote user the ability to manipulate files, log in to
applications that require hardware locks on the desktop PC, and use other
resources that wouldn’t otherwise be available. Presentation virtualization has
the added benefit of enabling resource-intensive applications to be used
through lower-power portable computers or other computers that would otherwise
be incompatible, even those running different operating systems.
Application virtualization with Microsoft
SoftGrid® insulates applications running on the same operating system, helping
to eliminate potential conflicts and enabling rapid provisioning. An
application that would normally update the registry, for example, updates a
virtual registry, so the system is able to meet the application’s requirements
without impinging on other applications. Applications are not “installed” in
the traditional sense, so they can be set up and removed more quickly than
through typical setup and uninstall procedures, including custom options that
would otherwise have to be configured manually.
Desktop virtualization with Microsoft Virtual PC enables users to run guest operating systems on their desktop PCs. It is commonly used for testing and for vertical applications that require a different operating system.
System Requirements
Host Operating Systems
Hyper-V
is an available feature of Windows Server 2008 Standard x64, Windows Server 2008
Enterprise x64, or Windows Server 2008 Datacenter x64 editions. The Server Core
installation option for these editions of Windows Server 2008 can also install
the Hyper-V role.
Clustering
features, including Quick Migration, require Windows Server 2008 Enterprise or
Windows Server 2008 Datacenter x64 editions in the parent partition.
Guest Operating Systems
Hyper-V supports Windows Server 2008, Windows Server 2003, and specific Linux distributions running as guest operating systems. For a complete list of supported guest operating systems and their configurations, please refer to the datasheet.
Processors
Hyper-V requires x64 processors with hardware-assisted virtualization: AMD-V or Intel VT.
Hardware-enforced Data Execution Prevention (DEP) must be available and enabled: specifically, the Intel XD bit (execute disable bit) or the AMD NX bit (no execute bit).
Shared Storage for Quick Migration
Quick Migration requires shared storage in
the form of either a SAN (Internet Small Computer
System Interface [iSCSI] or Fibre Channel) or Serial Attached SCSI.
Windows Server 2008 clustering is no longer supported by means of parallel
SCSI.
There are four key usage scenarios for Hyper-V:
- Server consolidation
- Dev/test environments
- Business continuity
- Dynamic data center
Scenario: Server Consolidation
The biggest driver for adopting virtualization technology is server consolidation. Businesses are under pressure to automate management and reduce costs, while retaining and enhancing competitive advantages such as reliability, scalability, and security.
Hyper-V is ideal for server consolidation in both the data
center and remote sites, enabling organizations to make more efficient use of
their hardware resources. It also enables IT organizations to enhance their
administrative productivity and to rapidly deploy new servers to address
changing business needs.
Key Consolidation Features (Table 2)
- Broad guest operating system support: Guest operating systems supported include Windows, specific Linux distributions, and Xen-enabled Linux. In addition to supporting those operating systems with synthetic hardware, VMs in Hyper-V can run many other operating systems with hardware emulation, including all versions of DOS, Windows, and Windows Server.
- Hardware virtualization and older-version hardware emulation: VMs based on specific virtualization-aware operating systems (Windows Server 2008, Windows Server 2003, and specific Xen-enabled Linux distributions) interact with high-performance synthetic devices that have no physical counterpart (for example, “Windows Display Adapter”). Other operating systems interact with emulated hardware that acts like specific devices (for example, an S3 Trio64 SVGA adapter).
- P2V: physical-to-virtual conversion (SCVMM): P2V enables running physical servers to be converted to virtual machines with minimal downtime.
- V2V: virtual-to-virtual conversion (SCVMM): The virtual-to-virtual conversion in SCVMM can convert VMware ESX VMs (VMDK format) to Hyper-V VMs (VHD format).
- Quick Migration (SCVMM): The Quick Migration feature enables running virtual machines to be moved from one server to another with minimal downtime.
- CPU resource allocation: CPU resource allocation supports both weighting and constraint methods for fine-grained control (see the sketch after this table).
  - Multithreaded for highly scalable performance.
  - Number of cores in a VM: each virtual machine can use up to 100 percent of a single host processor (up to 16 total processing cores per system). On hyper-threaded systems, the single host processor is a logical processor. Multiple virtual machines can execute concurrently to make use of multiple host processors.
  - The number of virtual machines that can be hosted on any server depends on the combined processor, memory, and I/O load the virtual machines put on the host, and on the processor, memory, and I/O capacity available on the host system.
  - Virtual processor resources can be changed using industry-standard tools, the Hyper-V MMC management interface, or WMI scripting (a processor change requires restarting the VM).
  - Hyper-V supports both weight-based and constraint-based CPU resource allocation for balanced workload management.
  - The relative weight given to the resource needs of a virtual machine is based on comparison with the needs of all other virtual machines. A virtual machine with a higher relative weight is dynamically allocated additional resources as needed from virtual machines that have lower relative weights. By default, all virtual machines have a relative weight of 100, so their resource requirements are equal and none is given preference.
  - Capacity and weight algorithms operate concurrently: contention can occur for the maximum system capacities, and relative weights indicate how to allocate resources during contention.
- Memory resource allocation: Hyper-V enables flexible memory configuration on a per-virtual machine basis.
  - Support is included for non-uniform memory access-aware (NUMA-aware) scheduling and memory allocation, reducing memory bus contention on multi-processor systems.
  - On non-NUMA systems, Hyper-V relies on the host operating system scheduler.
- PXE boot: Virtual network cards in Hyper-V support the Pre-Boot Execution Environment (PXE). Network boot allows customers to provision their virtual machines in the same ways that they do their physical servers. Note: To take advantage of this feature, the PXE infrastructure needs to be installed on the host network.
- Active Directory integration: The Active Directory® directory service allows the same directory management features to be used for virtual machines as for physical machines, by providing a centralized repository for hierarchical information about users and computers on the network. Active Directory incorporates significant improvements in management and performance in Windows Server 2008, which can be leveraged through virtual machines hosted by Hyper-V. Integration with Active Directory enables delegated administration and authenticated guest access. Hyper-V enables fine-grained administrative control over virtual machines with per-virtual machine Access Control Lists (ACLs) that can be managed from within the Active Directory Group Policy Management Console. Event logs are integrated with Active Directory and Microsoft Management Consoles.
- Windows Server Core option: Hyper-V is available as a Windows Server 2008 Server Core role, facilitating higher uptime due to fewer mandatory reboots for OS patches. Hyper-V can also achieve higher VM density when consolidating core infrastructure workloads by using Windows Server Core as a guest OS; the reduced disk and memory footprint of Server Core can help achieve higher VM densities on consolidated servers.
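The weight and constraint settings described under CPU resource allocation are also visible to scripts. The following is a minimal sketch, assuming the Windows Server 2008 Hyper-V WMI classes, that reads the virtual processor settings configured for each virtual machine:

```powershell
# Minimal sketch: read per-VM virtual processor settings from the Hyper-V WMI
# provider. Weight defaults to 100; Limit and Reservation express the
# constraint-based settings.
Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ProcessorSettingData |
    Select-Object InstanceID, VirtualQuantity, Weight, Limit, Reservation |
    Format-Table -AutoSize
```

Under contention, a VM with a weight of 200 would receive roughly twice the processor share of a VM left at the default weight of 100, while the constraint settings cap the share a VM can consume regardless of its weight.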
Scenario: Test and Development
Hyper-V enables businesses to consolidate their test and development servers and to automate the provisioning of virtual machines.
Customers across all business segments are looking for ways
to decrease their costs and to accelerate application and infrastructure
installations and upgrades, while delivering comprehensive quality assurance.
To achieve testing coverage goals prior to going into production, multiple
challenges must be overcome:
- Network operations: A test network that is incorrectly configured could endanger production networks.
- Developer productivity: Developer productivity should not be wasted on time-consuming administrative tasks, such as configuring test environments and installing operating systems.
- Server operational and capital costs: High-quality application test coverage requires replicating production computing environments, which in turn need costly hardware and human resources. This extra resource demand can pose risks to budgets and schedules.
Virtual machine technology was developed more than 30 years ago
to address some of the challenges first encountered during the mainframe era,
enabling side-by-side testing and production partitions on the same system. Now,
Hyper-V enables better test coverage, developer productivity, and user
experience. The memory and processor scalability inherent in Hyper-V 64-bit
architecture supports enterprise test scenarios.
Developers can also leverage Hyper-V as an efficient tool to
simulate distributed applications on a single physical server. Deploying and
testing distributed server applications typically requires significant amounts of available hardware and a great deal of time to configure the hardware and software systems in a lab environment to simulate a desired scenario.
Hyper-V is a powerful time- and resource-saving solution
that optimizes hardware and human resource utilization in distributed server
application development scenarios. Hyper-V enables individual developers to
easily deploy and test a distributed server application using multiple virtual
machines on one physical server. Combining the robust features in Hyper-V, such
as disk hierarchy and virtual networking, with the value of machine
consolidation gives developers a powerful and efficient way to simulate complex
network environments. The result is a development environment solution that is very
time and cost effective because less hardware, less real estate, and less time
are required for build-out.
Key Software Testing and Development Features (Table 3)
- Broad guest operating system support: Guest operating systems supported include Windows Server 2008, Windows Server 2003, and specific Xen-enabled Linux distributions. In addition to supporting those operating systems with synthetic hardware, VMs in Hyper-V can run many other operating systems with hardware emulation, including all versions of DOS, Windows, and Windows Server.
- Self-service portals: System Center Virtual Machine Manager enables developers and testers to create and destroy VMs from a configuration library instead of requiring administrator intervention.
- Flexible resource control: VMs can also take advantage of flexible resource control, enabling testers to assign the memory and processor resources that best fit the test or development scenario.
- VM snapshots: With the Snapshot feature of Hyper-V, a VM can be reset to a previous state (see the sketch after this table).
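Snapshots can also be taken programmatically. The following is a hedged sketch, assuming the Windows Server 2008 Hyper-V WMI API and a hypothetical VM name, of how a test harness might capture a VM’s state before a destructive test:

```powershell
# Illustrative sketch: create a snapshot of a VM through the Hyper-V WMI API.
$vm   = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem `
                      -Filter "ElementName = 'TestVM01'"     # hypothetical VM name
$vsms = Get-WmiObject -Namespace "root\virtualization" `
                      -Class Msvm_VirtualSystemManagementService

# CreateVirtualSystemSnapshot takes a reference to the VM; a return value of
# 4096 means the snapshot is being created by an asynchronous job.
$result = $vsms.CreateVirtualSystemSnapshot($vm.__PATH)
$result.ReturnValue
```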
Scenario: Business Continuity and Disaster Recovery
Hyper-V can be part of a disaster recovery plan that requires application portability and flexibility across hardware platforms. Consolidating physical servers onto fewer physical computers running virtual machines decreases the number of physical assets that can be damaged or compromised in the event of a disaster. During recovery, virtual machines can be hosted anywhere, on host machines other than those affected by a disaster, speeding up recovery and maximizing organizational flexibility.
Key Business Continuity and Disaster Recovery Features (Table 4)
- High availability through host and guest clustering: Hyper-V enables clustering of guest operating systems and host computers, enabling a variety of high-availability scenarios. Clustering host computers offers a cost-effective means of increasing server availability, enabling failover of virtual machines among the Hyper-V hosts in the cluster. Using Hyper-V, organizations can create a high-availability virtual machine environment that can effectively accommodate both planned and unplanned downtime scenarios, without requiring the purchase of additional software tools. For example, IT administrators can anticipate host server restarts required by system updates: with a properly configured Hyper-V host cluster, running virtual machines can be migrated to another host in the cluster with minimal downtime. In unplanned downtime scenarios, such as hardware failure, the virtual machines running on the host can be automatically migrated to the next available Hyper-V host. Guest clustering allows cluster-aware applications to be clustered within virtual machines across Hyper-V host computers.
- Live backup: Hyper-V virtual machines and their data can be automatically backed up without experiencing downtime (if the guest OS supports Volume Shadow Copy Service). If a server stops responding, its VMs can be restored and started on any other host server, minimizing service interruptions. Tape backup processes take advantage of virtual tape drive functionality in Hyper-V; for example, if a server incorporates a script to automatically back up its data to a tape drive, that process can still be used when the server is converted to a virtual machine.
- Health monitoring: Hyper-V leverages comprehensive integration with monitoring tools, like Microsoft System Center Operations Manager (SCOM), to spot and respond to issues before they become larger problems.
- Quick Migration: Quick Migration enables VMs to be moved to other servers, automatically or manually, with minimal downtime. (Note: Quick Migration is available only in the Enterprise and Datacenter editions of Windows Server 2008.) When monitoring tools like SCOM identify important but non-urgent problems with servers (a system reaching its maximum capacity, for example), integrated management tools can automatically move that server to another physical computer, even at another location.
- Windows Server Core option: Hyper-V is available as a Windows Server 2008 Server Core role. Windows Server Core as a guest OS helps facilitate high availability for core infrastructure roles. The reduced disk and memory footprint of Windows Server Core facilitates faster Quick Migrations and faster cluster failovers of VMs based on Windows Server Core.
Scenario: Enabling the Dynamic Data Center
Data centers face increased pressure to optimize hardware
and facilities utilization, while increasing performance and leveraging
business intelligence. Hyper-V gives data centers the agility to respond to
changing needs, and the power and flexibility to design for the future.
Core features, such as dynamic hardware management, Quick
Migration of running VMs with minimal downtime, and 64-bit, multi-processor
support, enable data centers to rely on virtual machines for even the most
resource-intensive workloads.
Hyper-V helps realize the dynamic data center vision of providing
self-managing dynamic systems and operational agility. Combining business processes with System Center Virtual Machine Manager enables a data center to rapidly provision new applications, dynamically load balance virtual workloads across the physical machines in its infrastructure, and progress toward self-managing dynamic systems.
Microsoft System Center Integration and the Dynamic Systems Initiative
Hyper-V integrates with Microsoft System Center (MSC), a new
generation of dynamic management tools designed to support the Dynamic Systems
Initiative (DSI). MSC provides IT Professionals with the tools and knowledge to
help manage their IT infrastructure, embedding operational knowledge in the
management tools, and enabling the system to manage and even heal itself.
The essence of Microsoft DSI strategy is to develop and
deliver technologies that enable businesses and people to be more productive, and
to better adapt to dynamic business demands. There are three architectural
elements of the dynamic systems technology strategy:
- Design for Operations captures the diverse knowledge of people, such as business architects, application developers, IT professionals, and industry partners, by embedding it within the IT infrastructure itself, using system models.
- Knowledge-Driven Management enables systems to capture desired states of configuration and health in models, and to use this inherent knowledge to provide a level of self-management to systems.
- Virtualized Infrastructure helps achieve greater agility and leverage existing infrastructure by consolidating system resources into a virtual service pool. Virtualized infrastructure makes it easier for a system to quickly add, subtract, move, or change the resources it draws upon to do its work, based on business priorities and demands.
These three elements are the foundation for building dynamic
systems. Virtualized Infrastructure mobilizes the resources of the
infrastructure, Knowledge-Driven Management is the mechanism for putting those
resources to work to meet dynamic business demands, and Design for Operations
ensures that systems are built with operational excellence in mind.
For more information about DSI, see: www.microsoft.com/dsi.
Key Dynamic Data Center Features (Table 5)
- Broad guest operating system support: Guest operating systems supported include Windows Server 2008, Windows Server 2003, and specific Xen-enabled Linux distributions. In addition to supporting those operating systems with synthetic hardware, VMs in Hyper-V can run many other operating systems with hardware emulation, including all versions of DOS, Windows, and Windows Server.
- Automated VM reconfiguration: The VM configuration capabilities in Hyper-V enable advanced management tools to reconfigure VMs with additional storage, memory, processor cores, and networking (minimal downtime is required to restart the VM). A dynamic data center uses this technology not only to respond to problems, but also to anticipate increased demands. The dynamic data center can give a Web server additional processing power in anticipation of a Web-based promotion, for example. If the payroll system always slows down during the last few days of the month, the system can automatically add capacity for that period and free up those resources for other VMs after payroll processing is done.
- Quick Migration: The Hyper-V Quick Migration feature enables running VMs to be moved to other servers with minimal downtime. Dynamic data centers leverage Quick Migration to move workloads to servers with capabilities applicable to their current needs. A server providing application updates, for example, could migrate to a more powerful server in anticipation of a company-wide software update.
- Utilization counters: Hyper-V utilization counters provide server administrators with detailed server load and performance information to facilitate planning and analysis, as well as charge-back metrics (see the sketch after this table).
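As a hedged sketch of how those counters can be consumed, the example below samples one of the standard Hyper-V hypervisor counters from Windows PowerShell (it assumes PowerShell 2.0 for the Get-Counter cmdlet; the same counter can also be read with Performance Monitor):

```powershell
# Illustrative sketch: sample total hypervisor logical-processor utilization
# every 5 seconds, 12 times (about one minute of data).
Get-Counter -Counter "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" `
            -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples } |
    Select-Object Timestamp, CookedValue
```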
Hyper-V is a reliable and cost-effective server
virtualization technology for the Windows Server 2008 platform.
The move by Microsoft to hypervisor-based, hardware-assisted
virtualization vastly improves reliability and scalability for virtual servers,
enabling even the most demanding workloads to be run in dynamic virtual
machines.
The industry-standard management tools in Hyper-V enable
system administrators to manage virtual servers and physical servers in the
same familiar, widely supported interface.
IT departments use Hyper-V to:
- Consolidate infrastructure, application, and remote site server workloads. Hyper-V is ideal for server consolidation in both the data center and remote sites, allowing organizations to make more efficient use of their hardware resources. It also helps IT organizations enhance their administrative productivity and rapidly deploy new servers to address changing business needs.
- Automate and consolidate software test and development environments. Hyper-V enables businesses to consolidate their test and development server farms and to automate the provisioning of virtual machines.
- Provide for business continuity and disaster recovery. Hyper-V can be used as part of a disaster recovery plan that requires application portability and flexibility across hardware platforms.
- Support the drive to create dynamic, self-managing systems. Hyper-V gives data centers the agility to respond to changing needs and the power and flexibility to design for the future.