What is DRBD?

DRBD refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1. In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual components of a Linux kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface card (NIC) driver. The black arrows illustrate the flow of data between these components. The orange arrows show the flow of data as DRBD mirrors the data of a highly available service from the active node of the HA cluster to its standby node.

What is HA?

The upper part of this picture shows a cluster where the left node is currently active, i.e., the service's IP address that the client machines are talking to is currently on the left node. The service, including its IP address, can be migrated to the other node at any time, either due to a failure of the active node or as an administrative action. The lower part of the illustration shows a degraded cluster. In HA speak, the migration of a service is called failover, the reverse process is called failback, and a migration triggered by an administrator is called switchover.

What Does DRBD Do?

Mirroring of important data

DRBD works on top of block devices, i.e., hard disk partitions or LVM's logical volumes. It mirrors each data block that is written to disk to the peer node.
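To make this concrete, here is a minimal sketch of what a DRBD resource definition can look like. The resource name, hostnames, backing devices, and addresses are hypothetical placeholders, and the exact syntax varies between DRBD versions; consult the drbd.conf manual page for your release.

```
resource r0 {
  device    /dev/drbd0;      # the DRBD block device the service uses
  disk      /dev/sdb1;       # local backing device (partition or LV)
  meta-disk internal;        # store DRBD metadata on the backing device

  on alice {                 # hostname of the first node (placeholder)
    address 10.1.1.31:7789;  # replication link endpoint (placeholder)
  }
  on bob {                   # hostname of the second node (placeholder)
    address 10.1.1.32:7789;
  }
}
```

Each node runs with the same resource definition; the `on <hostname>` sections tell DRBD which settings apply locally and where to reach the peer.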

From fully synchronous

Mirroring can be done tightly coupled (synchronous). That means that the file system on the active node is notified that the write of a block has finished only when the block has made it to both disks of the cluster. Synchronous mirroring (called protocol C in DRBD speak) is the right choice for HA clusters where you dare not lose a single transaction in case of the complete crash of the active (primary in DRBD speak) node.
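In the resource configuration, the replication mode is selected via the protocol setting. A hedged sketch, assuming DRBD 8.4-style syntax where the protocol lives in the net section (older releases accept a top-level `protocol` statement instead):

```
resource r0 {
  net {
    protocol C;  # synchronous: a write completes only after it has
                 # reached the disks of both nodes
  }
}
```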

To asynchronous

The other option is asynchronous mirroring. That means that the entity that issued the write requests is informed about completion as soon as the data is written to the local disk. Asynchronous mirroring is necessary to build mirrors over long distances, i.e., when the interconnecting network's round-trip time is higher than the write latency your application can tolerate. (Note: the amount of data by which the peer node may fall behind is limited by the bandwidth-delay product and the TCP send buffer.)
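Asynchronous replication corresponds to protocol A. A sketch under the same assumptions as above (8.4-style syntax; the buffer value is an illustrative placeholder, not a recommendation):

```
resource r0 {
  net {
    protocol A;       # asynchronous: local write completion is enough
    sndbuf-size 10M;  # TCP send buffer; together with the bandwidth-delay
                      # product, this bounds how far the peer may lag
  }
}
```

Protocol A trades durability for latency: on a crash of the primary, writes that were still buffered for the peer are lost, which is why it is typically paired with long-distance disaster-recovery setups rather than local HA clusters.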