Distributed Replicated Block Device

DRBD® (Distributed Replicated Block Device) is free network storage software. Consisting of a kernel module, a user-space management application, and a script, it mirrors a block device from a productive (primary) server in real time to a second (secondary) server. This is used to realize high availability (HA) in the Linux environment and thus to achieve good availability of various services. Alternatives are distributed file systems such as GlusterFS.

Operation

Every write access is also transmitted over the network to the second server. Only when the second server has reported the write operation as successful does the first server signal completion of the write to the application (this technique is comparable to RAID 1 over TCP/IP). If the first server fails, it is reported as inactive. Through server monitoring software such as Heartbeat, the second server can take over the function of the first server and continue working with the same data.
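The write path described above can be sketched as follows. This is a simplified model with illustrative class names, not DRBD's actual kernel implementation; the "network" here is a direct method call, and the key point is that the primary reports a write as complete only after the peer has acknowledged it:

```python
# Sketch of synchronous replication in the style of DRBD's fully
# synchronous mode: a write is acknowledged to the application only
# after the secondary has confirmed it. All names are illustrative.

class BlockDevice:
    def __init__(self, size_blocks):
        self.blocks = [b""] * size_blocks

    def write(self, block_no, data):
        self.blocks[block_no] = data


class Secondary:
    def __init__(self, size_blocks):
        self.disk = BlockDevice(size_blocks)

    def receive_write(self, block_no, data):
        self.disk.write(block_no, data)  # write to local low-level device
        return True                      # ACK back to the primary


class Primary:
    def __init__(self, size_blocks, peer):
        self.disk = BlockDevice(size_blocks)
        self.peer = peer

    def write(self, block_no, data):
        self.disk.write(block_no, data)                # local write
        ack = self.peer.receive_write(block_no, data)  # "over the network"
        if not ack:
            raise IOError("peer did not acknowledge write")
        return True  # only now is the write reported as complete

    def read(self, block_no):
        return self.disk.blocks[block_no]  # reads are always local


secondary = Secondary(16)
primary = Primary(16, secondary)
primary.write(3, b"hello")
assert primary.read(3) == b"hello"
assert secondary.disk.blocks[3] == b"hello"  # both copies identical
```

A real failure of the first server would leave the secondary with an identical copy of every acknowledged write, which is exactly what allows the takeover described above.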

Each DRBD device (locally usually a partition) has a state, which is either primary or secondary. On all participating systems, DRBD maps the local partition to a virtual device /dev/drbdX; the underlying partition must not be accessed directly. Write accesses on the primary system go to the local low-level block device (partition) and are simultaneously propagated to the secondary system, which then writes the data to its own local low-level block device. All read accesses are always performed locally.
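This mapping from a virtual /dev/drbdX device to a local low-level device on each node is typically described in a resource configuration file. The following is a minimal sketch in DRBD 8 syntax; the hostnames (alpha, bravo), addresses, and backing partitions are placeholders:

```
resource r0 {
  protocol C;                 # fully synchronous replication
  on alpha {
    device    /dev/drbd0;     # virtual device used by applications
    disk      /dev/sda7;      # local low-level block device (placeholder)
    address   10.0.0.1:7788;  # placeholder address and port
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Applications, file systems, or volume managers are then pointed at /dev/drbd0, never at /dev/sda7 directly.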

If the primary system fails, a cluster management process promotes the secondary system to the primary state. If the former primary system becomes available again, it usually continues to run as a secondary system after a resynchronization of the device data, in order to avoid unnecessary downtime, but it can also be placed in the primary state again. DRBD's synchronization algorithm is efficient in that only the blocks changed during the failure need to be resynchronized, not the whole device.
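The efficiency claim above rests on tracking changed blocks while the peer is unreachable. A minimal sketch of this idea, with illustrative names that do not mirror DRBD's internals:

```python
# Sketch of bitmap-based resynchronization: while the peer is down,
# only mark which blocks changed; on reconnect, copy just those blocks
# instead of the whole device. Names are illustrative only.

class TrackingPrimary:
    def __init__(self, size_blocks):
        self.blocks = [b""] * size_blocks
        self.dirty = set()       # set of block numbers changed while offline
        self.connected = True

    def write(self, block_no, data):
        self.blocks[block_no] = data
        if not self.connected:
            self.dirty.add(block_no)  # remember for the later resync

    def resync(self, peer_blocks):
        # copy only the blocks changed during the outage
        for block_no in sorted(self.dirty):
            peer_blocks[block_no] = self.blocks[block_no]
        transferred = len(self.dirty)
        self.dirty.clear()
        self.connected = True
        return transferred


node = TrackingPrimary(1024)
peer = [b""] * 1024
node.connected = False        # peer failed or link is down
node.write(7, b"a")
node.write(9, b"b")
assert node.resync(peer) == 2  # only 2 blocks transferred, not 1024
assert peer[7] == b"a" and peer[9] == b"b"
```

The payoff is proportional to the amount of change, not the device size: two modified blocks cost two block transfers even on a multi-terabyte volume.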

Version 8, published in January 2007, introduced support for load-balancing configurations, which allows two systems to use individual DRBDs in read/write mode as shared storage. This type of use requires a locking mechanism, the Distributed Lock Manager.

Advantages over shared cluster storage

Conventional computer cluster systems usually use some kind of shared storage for the cluster resources. However, this approach has a number of disadvantages that DRBD avoids.

  • Shared storage typically introduces a single point of failure, since both cluster systems depend on the same storage. With DRBD there is no such danger, since the required cluster resources are replicated locally rather than residing on possibly disconnected shared storage. However, modern SAN solutions with mirroring capabilities can also eliminate this previously unavoidable flaw.
  • Shared storage is usually accessed over a SAN or a NAS, which incurs a certain overhead on read access. With DRBD this overhead is significantly reduced, because read accesses are always performed locally.

Applications

DRBD works at the block level within the Linux kernel and is therefore transparent to the layers above it. DRBD can thus be used as a basis for:

  • Conventional file systems
  • Shared cluster file systems such as GFS or OCFS2
  • Other logical block device layers such as LVM
  • Any application that supports direct access to a block device.

DRBD-based clusters are used, for example, to extend file servers, relational databases (such as MySQL), and hypervisors/server virtualization (such as OpenVZ) with synchronous replication and high availability.

Inclusion in the Linux kernel

In July 2007 the DRBD authors submitted the software to the Linux developer community for possible future inclusion in the official Linux kernel. After two and a half years, DRBD was added to kernel 2.6.33, which was released on 24 February 2010.

DRBD becomes open source

In the first half of December 2008, the commercially licensed version of DRBD was merged with the open-source version and released under the GNU General Public License. Since the resulting version 8.3, it has been possible to mirror the data to a third node. The previous maximum of 4 TiB per device was increased to 16 TiB.

DRBD today

  • Today there is no longer a size limitation per DRBD device.
  • A list of the DRBD versions in use and the size of the largest DRBD volumes can be found on the usage page of the project.