Failover Cluster - Linux

The following refers to ESET PROTECT installation and configuration on a Red Hat high-availability cluster.

Linux Cluster Support

ESET PROTECT Server components can be installed on Red Hat Enterprise Linux 6 and later clusters. Failover Cluster is only supported in active/passive mode with the rgmanager cluster manager.


An active/passive cluster must be installed and configured. Only one node can be active at a time; the other nodes must be on standby. Load balancing is not supported.

Shared storage - iSCSI SAN, NFS and other solutions are supported (any technology or protocol that provides block-based or file-based access to shared storage and makes the shared devices appear as locally attached devices to the operating system). The shared storage must be accessible from each active node in the cluster, and the shared file system must be properly initialized (for example, with the EXT3 or EXT4 file system).
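As a sketch of the initialization described above (the device name /dev/sdb and the mount point are illustrative assumptions; your LUN path will differ), the shared file system could be prepared once and then test-mounted from each node:

```shell
# Run the mkfs step on ONE node only -- it destroys any existing data
# on the device. /dev/sdb is a placeholder for your shared iSCSI LUN.
mkfs.ext4 /dev/sdb

# On each node, verify the shared device can be mounted and unmounted:
mkdir -p /usr/share/erag2cluster
mount /dev/sdb /usr/share/erag2cluster
umount /usr/share/erag2cluster
```

In normal operation the file system is mounted by the cluster manager on the active node only, not by fstab on every node.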

The following HA add-ons are required for system management:

rgmanager is the traditional Red Hat HA cluster stack and is a mandatory component.

The Conga GUI is optional. The Failover Cluster can be managed without it; however, we recommend that you install it. This guide assumes it is installed.

Fencing must be properly configured to prevent data corruption. The cluster administrator must configure fencing if it is not already configured.

If you do not already have a cluster running, you can use the following guide to set up a high-availability Failover Cluster (active/passive) on Red Hat: Red Hat Enterprise Linux 6 Cluster Administration.


ESET PROTECT components that can be installed on a Red Hat Linux HA cluster:

ESET PROTECT Server with ESET Management Agent

ESET Management Agent must be installed; otherwise, the ESET PROTECT cluster service will not run.

Installation of the ESET PROTECT database on a cluster is supported only when the clustering is provided by the SQL service itself and ESET PROTECT connects to a single database host address.

The following installation example is for a 2-node cluster. However, you can use this example as a reference when installing ESET PROTECT on a multi-node cluster. The cluster nodes in this example are named node1 and node2.

Installation steps

1.Install ESET PROTECT Server on node1.

oThe hostname in the Server certificate must contain the external IP address (or hostname) of the cluster's interface (not the local IP address or hostname of the node).


2.Stop and disable the ESET PROTECT Server Linux services using the following commands:

service eraserver stop
chkconfig eraserver off

3.Mount shared storage to node1. In this example, the shared storage is mounted to /usr/share/erag2cluster.

4.In /usr/share/erag2cluster, create the following directories:

/usr/share/erag2cluster/etc/opt/eset
/usr/share/erag2cluster/opt/eset
/usr/share/erag2cluster/var/log/eset
/usr/share/erag2cluster/var/opt/eset

5.Move the following directories recursively to the destinations shown below (source > destination):

/etc/opt/eset/RemoteAdministrator > /usr/share/erag2cluster/etc/opt/eset
/opt/eset/RemoteAdministrator > /usr/share/erag2cluster/opt/eset
/var/log/eset/RemoteAdministrator > /usr/share/erag2cluster/var/log/eset
/var/opt/eset/RemoteAdministrator > /usr/share/erag2cluster/var/opt/eset


6.Create symbolic links (you may need to create some parent folders manually):

ln -s /usr/share/erag2cluster/etc/opt/eset/RemoteAdministrator/Server /etc/opt/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/opt/eset/RemoteAdministrator/Server /opt/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/var/log/eset/RemoteAdministrator/Server /var/log/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/var/opt/eset/RemoteAdministrator/Server /var/opt/eset/RemoteAdministrator/Server
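The directory and symlink steps above can be rehearsed in a scratch root before touching the real node. The sketch below assumes the /usr/share/erag2cluster mount point from this example; on the actual node the same loop runs with ROOT set to an empty string so the links land under /.

```shell
# Dry-run sketch of steps 4-6: ROOT points at a scratch directory so the
# loop can be tested safely. On the actual node, use ROOT="" instead.
ROOT=$(mktemp -d)
BASE="$ROOT/usr/share/erag2cluster"    # shared-storage mount point
for d in etc/opt/eset/RemoteAdministrator \
         opt/eset/RemoteAdministrator \
         var/log/eset/RemoteAdministrator \
         var/opt/eset/RemoteAdministrator; do
    mkdir -p "$BASE/$d/Server"         # directory tree on the shared storage
    mkdir -p "$ROOT/$d"                # local parent directory of the symlink
    ln -sfn "$BASE/$d/Server" "$ROOT/$d/Server"
done
```

The -n flag keeps ln from descending into an existing link target, which makes the loop safe to re-run.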


7.Copy the eracluster_server script found in the setup directory of ESET PROTECT Server to /usr/share/cluster. The script does not use the .sh extension in the setup directory.

cp /opt/eset/RemoteAdministrator/Server/setup/eracluster_server /usr/share/cluster/

8.Unmount the shared storage from node1.

9.Mount the shared storage to the same directory on node2 as you mounted to on node1 (/usr/share/erag2cluster).

10. On node2, create the following symbolic links:

ln -s /usr/share/erag2cluster/etc/opt/eset/RemoteAdministrator/Server /etc/opt/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/opt/eset/RemoteAdministrator/Server /opt/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/var/log/eset/RemoteAdministrator/Server /var/log/eset/RemoteAdministrator/Server

ln -s /usr/share/erag2cluster/var/opt/eset/RemoteAdministrator/Server /var/opt/eset/RemoteAdministrator/Server


11. Copy the eracluster_server script found in the setup directory of ESET PROTECT Server to /usr/share/cluster. The script does not use the .sh extension in the setup directory.

cp /opt/eset/RemoteAdministrator/Server/setup/eracluster_server /usr/share/cluster/

The next steps are performed in the Conga Cluster Administration GUI:

12. Create a Service Group, for example ESMCService.

The ESET PROTECT cluster service requires three resources: IP address, file system and script.

13. Create the necessary service resources.

Add an IP address resource (the external cluster address that Agents will connect to), a file system resource and a script resource.

The file system resource should point to the shared storage.

The mount point of the file system resource should be set to /usr/share/erag2cluster.

The "Full Path to Script File" parameter of the Script resource should be set to /usr/share/cluster/eracluster_server.

14. Add the above resources to the ESMCService group.
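Equivalently, the service and its three resources can be defined directly in /etc/cluster/cluster.conf. The fragment below is an illustrative sketch only: the IP address, device path and resource names are placeholders, not values from this guide.

```xml
<rm>
  <service autostart="1" name="ESMCService" recovery="relocate">
    <!-- external cluster address that Agents connect to (placeholder) -->
    <ip address="192.168.1.100" monitor_link="1"/>
    <!-- shared storage mounted at the path used throughout this guide -->
    <fs device="/dev/sdb1" fstype="ext4"
        mountpoint="/usr/share/erag2cluster" name="erag2fs"/>
    <!-- the script copied to /usr/share/cluster in steps 7 and 11 -->
    <script file="/usr/share/cluster/eracluster_server"
            name="eracluster_server"/>
  </service>
</rm>
```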

After the Server cluster is successfully set up, install the ESET Management Agent on both nodes on the local disk (not on the shared cluster disk). When using the --hostname= parameter, you must specify the external IP address or hostname of the cluster's interface (not localhost).
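For example, a scripted Agent installation on each node might look like the following. The installer filename and the --port value are illustrative assumptions; only the --hostname behavior is described in this guide.

```shell
# Run locally on node1 and node2 (installer filename is a placeholder).
# --hostname must be the external cluster IP/hostname, never localhost.
sh ./agent-linux-x86_64.sh \
  --hostname=cluster.example.com \
  --port=2222
```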