| drbd-utils | Distributed Replicated Block Device |
| drbdmanage | DRBD distributed resource management utility |
| yast2-drbd | YaST2 - DRBD Configuration |
pluto:~ #
2.2 Requirements
"The system have to run!“
Admittedly, this sentence sounds rather trivial as a statement of minimum installation requirements - especially in a technical book. The fact is, however, that the DRBD software places no special minimum demands on the hardware of the cluster nodes, because the DRBD functionality is integrated into the Linux kernel.
Equip your cluster nodes to meet your own requirements and make sure that the highly available application runs properly on the chosen platform. Regarding synchronization, a few additional notes follow.
To create this book, I installed two virtual machines on a laptop that had the "fabulous" memory size of 4 GB and a quad processor running at 2.16 GHz.
This might be enough for a workstation, laptop or desktop, but for a server or virtualization host this machine is rather tight.
The two “VMs” on this laptop each have 1 CPU and 1 GB RAM.
In our case, the LAN connection consists of a single LAN adapter with a speed of 10 Mb/s - sufficient for home use; for a server, well …
As I said, for a server environment that has a little more work to do than serving an "It works" page via Apache, this hardware configuration would be considered lean. But to show that it basically works, this configuration is sufficient.
Depending on the size of the disk partitions you want to include in this RAID 1, you should consider setting up a separate LAN for disk synchronization (backend LAN). However, you should keep an eye on the synchronization rate, otherwise your computers will be busy with nothing but disk synchronization. But more about that later.
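As a first taste of limiting the synchronization rate: the resynchronization bandwidth can be capped in the DRBD resource configuration. The fragment below is only a sketch - the resource name r0 and the value 5M are example assumptions, and the option shown uses the DRBD 8.4 syntax (resync-rate); older 8.3 releases use a syncer { rate ...; } section instead.

```
resource r0 {
  disk {
    # Cap background resynchronization so regular traffic
    # on the backend LAN is not starved (DRBD 8.4 syntax).
    resync-rate 5M;
  }
}
```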
I also don’t want to write a novel about the minimum configuration of hosts here, others have done that before me. I also realize that some users consider 10 Mb/s to be clearly too slow for their home network.
I want to show that DRBD works even with absolutely minimal equipment. Which brings us back to cost savings, especially for small businesses.
3 Preliminary considerations
Before we take a closer look at the basic configuration of a two-node cluster, there are some basic considerations. If you have already made your selection or have special requirements, you can safely skip this chapter - but you do so at your own risk.
I myself am not one for reading through endless introductions, and I know colleagues who read the introductions very carefully and then didn’t know what to do when it came time to implement them.
The important thing for me in the preliminary considerations is to avoid unnecessary work, so that you don’t have any downtime at the end of a test run or even in a running, productive cluster.
And nothing is more deadly to a cluster than not being available.
That’s why the old do-it-yourself motto applies here, too:
Measure first, then cut!
3.1 Disk drive – physical vs. LVM
So, let’s first take a look at how the disk device should be "designed".
Consider first a "physical disk device", i.e. an additional disk partition alongside the "classical" partitions such as swap, root (/) and /home.
This solution has the advantage that no additional "virtualization layer" sits in the I/O path, which could otherwise cause performance degradation.
The disadvantage is that subsequently increasing or decreasing the size requires considerable effort if hardware actually has to be replaced.
Using the Logical Volume Manager, or LVM for short, gives you more headroom, but adds a virtualization layer, which on very tight systems can lead to the aforementioned performance degradation.
Both types of disks work with DRBD!
In the systems I set up, I generally use the Logical Volume Manager, because the advantage of being able to add disks later outweighs the disadvantage of the performance penalty.
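To sketch the LVM variant: the following console session prepares a logical volume that could later serve as the backing device for a DRBD resource. The disk /dev/sdb and the names vgdata and lv_drbd are purely example assumptions; adapt them to your own system.

```
pluto:~ # pvcreate /dev/sdb
pluto:~ # vgcreate vgdata /dev/sdb
pluto:~ # lvcreate -L 10G -n lv_drbd vgdata
pluto:~ # lvextend -L +5G /dev/vgdata/lv_drbd
```

The last command illustrates the headroom mentioned above: as long as the volume group still has free physical extents, the backing device can be grown later without replacing any hardware.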
3.2 Filesystem on the disk device
In principle, a DRBD device can also be used as a raw device. Whether and which file system runs on top of the DRBD device does not really matter to DRBD itself. Nevertheless, I would like to take a closer look at how the various file systems work, to help you decide. All file systems have specific advantages and disadvantages that stem from the way they work. For understandable reasons, I won’t go into detail about tree structures and the like at this point. If you are interested in these specifics, consult the relevant technical literature or www.wikipedia.com.
3.2.1 UFS / ext2
The good old ’UNIX File System’ - because that’s what UFS stands for - was developed in the early 1980s and was the standard file system for all UNIX derivatives until the early 1990s. Today, however, it is only used in isolated cases.
However, the basic concept was passed on to the following file system generations:
all data is stored in blocks on the hard drive and
to get to a data block, its address is stored in the file’s metadata structure, the "inode"; the "superblock", which the operating system reads first, describes the overall layout of the file system.
In this way a tree structure is obtained, because each stored file is assigned a specific "inode number".
If you search for a specific file within the file system, the entire file tree must be traversed, which can take a comparatively long time in larger file trees with many substructures.
The "second extended file system" (ext2) essentially adopts this structure, but so-called "plugins" - i.e. extensions - can be added to handle fragmentation, compression and recovery of deleted data.
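The inode numbers mentioned above can be observed on any ext-style file system. The following one-liner is only an illustration; /etc/hosts is an arbitrary example file, and the printed number will differ from system to system.

```shell
# Print the inode number and the name of a file.
# /etc/hosts is just an example path; the number varies per system.
stat -c '%i %n' /etc/hosts
```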
3.2.2 ext3 / ext4
The ext3 and ext4 file systems evolved from ext2 by adding a so-called journal and the ability to resize the file system while it is in use.
In a journaling file system, all changes are first recorded in a special storage area called the journal before the actual write to the selected block takes place. This makes it easier to reconstruct the writes if, for example, the system crashes or the power goes out during the write operation.
Another improvement of ext3 and ext4 over ext2 was the increase in maximum file system size: from 16 TB to 32 TB for ext3, and to 1 EB (exabyte) for ext4. Such device sizes were unimaginable when UFS was developed.
In addition, there are extensions regarding the number of files and directories as well as the maximum size of an individual file, which was still limited to 2 TB for ext2, could be between 16 GB and 2 TB for ext3 depending on the block size, and reaches 16 TB for ext4.
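To see the journal described above in practice, the feature list of an ext4 file system can be inspected with the tools from e2fsprogs. The sketch below creates a small ext4 file system inside an ordinary file, so no root privileges or spare partition are needed; the path /tmp/ext4-demo.img is an example assumption.

```shell
# Create a small ext4 file system inside a regular file and
# list its features; "has_journal" marks the journal described above.
truncate -s 64M /tmp/ext4-demo.img   # example path, 64 MB sparse file
mkfs.ext4 -q -F /tmp/ext4-demo.img   # -F: allow operating on a regular file
tune2fs -l /tmp/ext4-demo.img | grep -i features
```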
3.2.3 xfs
The xfs file system, originally developed by Silicon Graphics (SGI) exclusively for its in-house UNIX system "IRIX", is one of the oldest file systems. But just because something is getting on in years doesn’t mean it has to be "bad". It sets standards with maximum values of 16 EB per file system, a maximum number of