OpenVZ (Open Virtualization) is operating-system-level virtualization software for Linux.
OpenVZ creates isolated containers for operating systems. All processes of these operating systems run in a single kernel, yet the operating systems in the containers remain largely independent of one another: for example, they can be shut down independently of each other, and each has its own root account.
Compared to virtual machines such as VMware or para-virtualization technologies such as Xen, OpenVZ offers less flexibility in the choice of guest operating system: both guest and host must be Linux (although different Linux distributions can be used in different virtual environments, VEs). In return, the technology behind OpenVZ offers better performance and scalability, dynamic resource management, and simpler administration. According to the OpenVZ website, the virtualization overhead is only 1–3% of total system performance.
OpenVZ is the basis of Virtuozzo, the commercial product from Parallels (formerly SWsoft Inc.). OpenVZ is licensed under the GPL version 2.
OpenVZ consists of kernel and user-level tools.
The OpenVZ kernel is a modified Linux kernel that introduces the concept of the Virtual Environment (VE). The kernel provides the functionality for virtualization, isolation, resource management, and checkpointing.
Virtualization and isolation
Each VE is a separate unit that looks like a physical server from the standpoint of its owner. Among other things, each VE has its own files, users and groups (including root), process tree, network configuration, devices, and IPC objects.
Because all VEs share the same kernel, resource management plays a crucial role: each VE must stay within its allocated resource limits and must not affect the other VEs. This is precisely the task of resource management.
The OpenVZ resource management consists of three subsystems: a two-level disk quota, the CPU scheduler, and user beancounters. All of these resources can be changed at runtime; no reboot is necessary. If, for example, a VE is to be allocated more memory, the corresponding parameters can be changed on the fly. With VM-based virtualization solutions this is not readily possible.
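As a sketch of changing a resource on the fly (the container ID 101 and the parameter values here are illustrative), vzctl can raise a VE's memory allowance at runtime:

```shell
# Raise the memory (privvmpages) barrier and limit of running VE 101
# on the fly; --save also writes the change to the VE's config file.
vzctl set 101 --privvmpages 262144:287744 --save
```

The change takes effect immediately, without restarting the VE.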
Two-level disk quota
The first level is the per-VE disk quota; the second level is the standard per-user and per-group UNIX disk quota within a VE.
To allocate more disk space to a VE, one only needs to increase its disk quota.
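Both quota levels can be configured from the host with vzctl; a minimal sketch (CTID 101 and the sizes are illustrative):

```shell
# First level: per-VE disk quota, given as soft:hard limits
# in 1 KB blocks (here roughly 1 GB soft, 1.1 GB hard).
vzctl set 101 --diskspace 1000000:1100000 --save

# Second level: enable standard per-user/per-group UNIX quotas
# inside the VE (up to 100 user/group IDs tracked).
vzctl set 101 --quotaugidlimit 100 --save
```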
CPU scheduler
The CPU scheduler in OpenVZ is a two-level implementation of a fair-share scheduling strategy.
At the first level, the scheduler decides which VE receives the CPU time slice, based on the per-VE cpuunits value. At the second level, the standard Linux scheduler decides which process within the selected VE gets the CPU; the standard Linux process priorities apply as usual.
The OpenVZ administrator can define different cpuunits values for different VEs; CPU time is then distributed among the VEs in proportion to these values.
In addition, CPU time can be capped: a VE can, for example, be limited to 10% of the CPU time.
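As a sketch of both mechanisms (the container IDs and values are illustrative): cpuunits sets the relative share under contention, while cpulimit sets an absolute cap.

```shell
# Give VE 101 twice the CPU share of VE 102 when both are busy.
vzctl set 101 --cpuunits 2000 --save
vzctl set 102 --cpuunits 1000 --save

# Additionally cap VE 102 at 10% of the CPU time.
vzctl set 102 --cpulimit 10 --save
```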
User beancounters
The user beancounters are a set of per-VE resource counters, limits, and guarantees. There are around 20 parameters, carefully chosen to cover all aspects of VE operation. Each VE may only use its allocated resources and cannot affect the host system or other VEs.
The controlled resources include RAM and various in-kernel objects such as IPC shared memory segments, network buffers, and so on. The state of each resource can be viewed in /proc/user_beancounters, which displays five values per parameter: current usage, maximum usage, soft limit, hard limit, and fail counter.
The exact meaning of the soft and hard limits depends on the parameter. Generally speaking, when a resource exceeds its limit, the corresponding fail counter is incremented. If a problem occurs in a VE, its owner can therefore analyze the output of /proc/user_beancounters and identify possible causes.
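A sketch of such an analysis (the sample rows below are illustrative, not real output): a nonzero fail counter in the last column points at the resource that hit its limit.

```shell
# Inspect the beancounters (inside a VE, or per VE on the host);
# columns: held, maxheld, barrier (soft limit), limit (hard limit), failcnt.
cat /proc/user_beancounters
# uid  resource      held   maxheld   barrier     limit  failcnt
# 101: kmemsize   1836035   1916337   2752512   2936012        0
#      numproc         21        32       240       240        0
```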
Checkpointing and live migration
Live migration and checkpointing are features that OpenVZ released in mid-April 2006. They make it possible to migrate a VE from one physical server to another without stopping or restarting the VE. The process is known as checkpointing: the main idea is to freeze a VE and store the complete state of its processes in a file. This file can then be transferred to another machine, where all processes are restored. The whole transfer of the VE takes only a few seconds and thus causes no downtime, only a slight delay.
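As a sketch (CTID 101, the dump path, and the destination host name are illustrative), the two steps can be done by hand with vzctl, or in one step with vzmigrate:

```shell
# Checkpoint: freeze VE 101 and dump its state to a file,
# then restore it (here on the same host, for illustration).
vzctl chkpnt 101 --dumpfile /tmp/ve101.dump
vzctl restore 101 --dumpfile /tmp/ve101.dump

# Or migrate the VE live to another OpenVZ host in one step.
vzmigrate --online destination.example.com 101
```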
Because every part of the VE's state, including open network connections, is saved, the migration is completely transparent to the user. For example, a database in the VE can be in the middle of a long-running transaction during the migration; the user does not notice that the database is already running on another server.
This feature makes scenarios such as upgrading a server without a reboot possible. If a database or another application in a VE needs more RAM or CPU resources, one can simply buy a better machine, live-migrate the VE to it, and then raise the corresponding limits. If it becomes necessary, for example, to add RAM to a host, one can migrate all its VEs to another server, upgrade the machine, and then migrate the VEs back.
OpenVZ provides command-line tools both for managing VEs (vzctl) and for managing applications within VEs (vzpkg).
vzctl
Vzctl is a simple high-level command-line tool for managing VEs.
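A sketch of a typical VE life cycle (the CTID 101, template name, host name, and IP address are illustrative):

```shell
# Create a VE from a template cache and give it basic settings.
vzctl create 101 --ostemplate fedora-core-5 --config vps.basic
vzctl set 101 --hostname ve101.example.com --ipadd 10.0.0.101 --save

# Start the VE, run a command inside it, open a shell, stop it.
vzctl start 101
vzctl exec 101 ps ax
vzctl enter 101
vzctl stop 101
```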
vzlist
Vzlist displays the list of VEs.
Templates and vzpkg
Templates are prebuilt images used to create VEs. A template is a set of packages, and a template cache is a tarball of a chroot environment with all of these packages installed. During vzctl create, the tarball is unpacked. This technique makes it possible to create a VE in a few seconds.
The developers provide template caches for the most common Linux distributions for download on the project website.
Vzpkg is a set of tools that greatly simplifies the creation of a template cache. It supports rpm- and yum-based repositories. To create a template for, say, Fedora Core 5, one needs a set of (yum) repositories containing the FC5 packages and a list of packages to install. In addition, pre- and post-install scripts are available for adapting a template where necessary. All of the above parameters (repositories, package lists, scripts, GPG keys, etc.) are represented as template metadata. With the help of the template metadata, a template cache can be created automatically: one only needs to run the vzpkgcache command. All specified packages are then downloaded to the server and installed into a temporary VE, and the corresponding tarball is created.
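A minimal sketch, assuming the template metadata for the distribution is already installed on the host:

```shell
# Build (or refresh) the template cache described by the
# fedora-core-5 template metadata.
vzpkgcache fedora-core-5

# List the templates known to vzpkg.
vzpkgls
```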
It is also possible to create template caches for non-RPM-based distributions.
The main features of OpenVZ
Scalability
OpenVZ uses the single-kernel model and therefore scales as well as the Linux 2.6 kernel itself; it supports up to 64 CPUs and 64 GB of RAM. A single VE can be scaled up to the entire host system, i.e. it can use all CPUs and all RAM of the host. OpenVZ also virtualizes the VE's hardware: the operating system running in the VE no longer accesses the physical hardware of the host directly but uses the interfaces of OpenVZ. In this way a VE can be migrated at runtime, in order to increase the resources available to it or to compensate for a hardware failure of the host system.
Density
Hundreds of Virtual Environments can run on a single OpenVZ server; their number is limited mainly by the available RAM and CPU performance.
Mass management
The administrator of an OpenVZ server has access to the processes and files of all VEs. This simplifies managing many servers in bulk: security updates in the VEs, for example, can be applied with a simple script. This is an advantage over virtualization solutions such as VMware or Xen, which require each virtual machine to be updated individually.
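Such an update script can be sketched as follows, assuming yum-based distributions inside the VEs:

```shell
# Run a security update in every running VE from the host.
# vzlist -H -o veid prints bare numeric container IDs, one per line.
for ve in $(vzlist -H -o veid); do
    vzctl exec "$ve" yum -y update
done
```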
The following usage scenarios are common to all virtualization technologies. The main difference with virtualization at the operating-system level is that the virtualization overhead is very low, which makes these scenarios particularly attractive.
Other implementations of operating-system-level virtualization include LXC (Linux Containers), Linux-VServer, FreeBSD jails, and Solaris Containers. However, the VServer technology is expected to be completely replaced by OpenVZ.