1 Introduction
If you want to build a cluster, sooner or later you face the problem that the data must be usable on all participating servers. This problem can be solved by transporting the data once per minute from the active cluster node to the passive cluster node.
But what if this "copy job" takes longer than one minute?
In this case, either the copy jobs overtake each other and never finish, because the cluster node in question does nothing else but ’copy’ - and ’nothing else’ really means ’nothing else’ - or the data is already outdated every time it arrives.
Neither outcome makes sense, and neither is desirable.
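To make the problem concrete, here is a minimal sketch of such a naive copy job, assuming a hypothetical data directory /data that the active node pluto pushes to the passive node charon (both hosts are introduced in 1.3) once per minute via cron and rsync:

pluto:~ # crontab -l
* * * * * rsync -a --delete /data/ charon:/data/

As soon as one run takes longer than a minute, the next run starts on top of it; even serializing the runs, for example with flock, only trades overlapping jobs for data on charon that is ever more out of date.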
If, in addition, all cluster nodes must not only read the data but also write it, ’practical copying’ no longer makes any sense at all. Usually, this problem is solved by using a SAN or NAS.
For a data center, where there are usually more than two machines running at 24 * 7 uptime, it may not be a problem to run one more machine per cluster group - this can be a ’disk pot’, known as a true SAN (storage area network), or it can be a network file server, known as a NAS (network-attached storage).
However, small businesses and home users face the problem of having to pay for a SAN or NAS.
That’s where the DRBD - Distributed Replicated Block Device - product from LinBit (www.linbit.com) comes in. DRBD gives you the ability to connect two or more cluster nodes together without using a SAN or NAS as the data device. DRBD works like a local RAID controller creating a mirror device (RAID 1) - but with the "local disks" connected over a LAN. You can also use this variant in a large data center if your cluster needs to be independent of a SAN or NAS. Think, for example, of a monitoring server that monitors the SAN or NAS and has to run highly available, especially when the SAN or NAS is not running.
This cookbook teaches the basics of a DRBD active-passive cluster, extended by further possibilities (three-node cluster, backbone LAN, deployment of DRBD on a Veritas cluster, creation of your own cluster via PERL, cluster configuration via hardware systems and many more) and demonstrates the procedures in the form of ’listings’. All examples are based on a test configuration with OpenSuSE Leap 15.1 (except 6.1.1) and can - with the necessary background knowledge - also be implemented on other Linux distributions. In section 6.1.1 the listing is done with SLES 11 SP4 to show the commands and screen outputs of DRBD version 8 compared to DRBD version 9, because there are some differences. For using DRBD on Windows servers, see chapter 12.
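As a first impression of what such a mirror looks like in practice, here is a minimal sketch of a DRBD resource file for the two nodes pluto and charon; the backing partition /dev/sdb1 and the addresses 192.168.1.10 and 192.168.1.11 are assumptions for illustration, and the real configuration is developed step by step in the following chapters.

pluto:~ # cat /etc/drbd.d/r0.res
resource r0 {
   device    /dev/drbd0;        # the mirror device the cluster works with
   disk      /dev/sdb1;         # local backing disk on each node (assumed)
   meta-disk internal;          # DRBD metadata lives on the backing disk
   on pluto {
      address 192.168.1.10:7789;
   }
   on charon {
      address 192.168.1.11:7789;
   }
}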
1.1 Syntax of this book
To distinguish keyboard input and screen output from the explanations, commands and screen output are displayed as follows:
Listing 1.1: | example of a session |
hostname:~ # echo "This is an example!"
This is an example!
In the scripts, the individual lines are numbered consecutively, and each line is briefly explained in tabular form in the text following the respective listing.
This means that the commands of the "recipes" can be entered on the shell exactly as shown in the examples. The screen output should also appear as shown. The disclaimer (16.2) is explicitly pointed out here, because your systems do not have to match my systems.
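As a sketch of that convention, a hypothetical two-line script would be printed with numbered lines and explained like this:

1 #!/bin/bash
2 echo "This is an example!"

Line 1 | the interpreter line of the script
Line 2 | prints the example text to the screen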
1.2 Built-in bugs
In the course of creating this book, I made various mistakes while working out the recipes, which, after careful consideration, I simply kept in the recipes.
The reason for this is that these mistakes can also happen to you during operation.
In the context of the respective recipe, I then corrected these mistakes again - also to show you how to recover the situation, and which factors - not clearly visible at first - had an influence on the respective error situation.
In this way, you can learn from my mistakes and avoid or solve similar problems in your own systems.
1.3 Hostnames
In an old Siemens Nixdorf UNIX manual, the configuration was explained using hostnames like Jupiter and Saturn.
Because the dwarf planet Pluto and its largest moon Charon circle a common center of gravity that lies outside of both bodies, these names seemed to me suitable to represent a cluster. Consequently, Nix, the second largest moon of Pluto, provides the name for the third host in the three-node setup.
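Throughout the recipes these names are used as plain hostnames. A minimal sketch of matching /etc/hosts entries, assuming the example addresses 192.168.1.10 to 192.168.1.12, could look like this:

pluto:~ # cat /etc/hosts
127.0.0.1      localhost
192.168.1.10   pluto
192.168.1.11   charon
192.168.1.12   nix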
2 Installation
2.1 Software
The DRBD software is shipped with the Server or Enterprise editions of the following Linux distributions, starting with the versions listed, and is updated accordingly (as of summer 2020):
Red Hat Enterprise Linux (RHEL), versions 6, 7 and 8
SUSE Linux Enterprise Server (SLES), versions 11 SP4, 12 and 15
Debian GNU/Linux 8 (jessie) and 9 (stretch)
Ubuntu Server Edition LTS 14.04 (Trusty Tahr), LTS 16.04 (Xenial Xerus), and LTS 18.04 (Bionic Beaver)
In addition, OpenSuSE provides the DRBD packages starting with Leap 42.1.
When using the command zypper, it looks like this (the output lines have been shortened because the type is package in all cases):
Listing 2.1: | zypper search drbd |
pluto:~ # zypper search drbd
Loading repository data...
Reading installed packages...
S | Name             | Summary
--+------------------+------------------------------------------------------------
  | drbd             | Linux driver for the "Distributed Replicated Block Device"
  | drbd-formula     | DRBD deployment salt formula
  | drbd-kmp-default | Kernel driver
  | drbd-kmp-preempt | Kernel driver