Install the Siemplify server on each node.

  1. Locate the Siemplify server installer file. It is normally in the /root/ folder; if it is not there, locate the folder that contains it.
    Copy the Siemplify installer file to both nodes. You can use WinSCP, SCP, Wget, cURL, or any other secure file transfer program that works on Linux. For example, use SCP to copy the file with the following command:
    scp {file_directory} username@IP:/root/

For example:
scp D:/siemplify_installer.sh devs@172.30.203.92:/root/ (copied from Windows)
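
The installer file must also be copied to the second node. Assuming 172.30.203.93 is the second (slave) node, as in the configuration examples later in this section, the command would look like this:
scp D:/siemplify_installer.sh devs@172.30.203.93:/root/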

  2. Navigate to the folder that contains the installer and set execute permission for the installer file on both nodes using the command:
    sudo chmod +x siemplify_installer.sh
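    To confirm the permission was applied, you can list the file and check that the execute bit (x) is set. This is a generic Linux check, not a Siemplify-specific step:
    ls -l siemplify_installer.sh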
  3. Configure the master node for the cluster, database, and file sharing using the following command:

sudo bash siemplify_installer.sh --run_mode ha --db_ip {db_vip} --db_port {db_port} --db_username {db_username} --db_password {db_password} --hostname {master_machine_hostname} --ha_host {master_machine_ip},{master_machine_hostname} --ha_host {slave_machine_ip},{slave_machine_hostname} --ha_cluster_vip {app_server_vip} -sf //{shared_folder_ip}/i -su {shared_folder_username} -sp {shared_folder_password}

Configuration                 Description
{db_vip}                      IP of the primary database.
{db_port}                     Port of the primary database (default: 5432).
{db_username}                 Database username of the primary database.
{db_password}                 Database password of the primary database.
{master_machine_hostname}     Master node hostname.
{master_machine_ip}           Master node IP.
{slave_machine_hostname}      Slave node hostname.
{slave_machine_ip}            Slave node IP.
{app_server_vip}              Virtual IP of the application server cluster.
i                             Shared folder name.
{shared_folder_ip}            IP of the Samba server that hosts the shared folder.
{shared_folder_username}      Username of the user that accesses the shared folder.
{shared_folder_password}      Password of the user that accesses the shared folder.

For example:
sudo bash siemplify_installer.sh --run_mode ha --db_ip 172.30.203.90 --db_port 5432 --db_username postgres --db_password Password1 --hostname app1 --ha_host 172.30.203.92,app1 --ha_host 172.30.203.93,app2 --ha_cluster_vip 172.30.203.94 -sf //172.30.203.208/siemplifyshare -su siemplifyuser -sp Password1
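
Once the installer finishes on the master node, you can check that the Siemplify systemd services were registered. This uses a generic systemd listing; the unit names match the ones referenced in the pcs resource commands later in this section:
systemctl list-units --all 'Siemplify*'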

  4. Configure the slave node for the cluster, database, and file sharing using the following command:
    sudo bash siemplify_installer.sh --run_mode ha --db_ip {db_vip} --db_port {db_port} --db_username {db_username} --db_password {db_password} --hostname {slave_machine_hostname} --ha_host {master_machine_ip},{master_machine_hostname} --ha_host {slave_machine_ip},{slave_machine_hostname} --ha_cluster_vip {app_server_vip} -sf //{shared_folder_ip}/i -su {shared_folder_username} -sp {shared_folder_password}

Configuration                 Description
{db_vip}                      IP of the replica database.
{db_port}                     Port of the replica database (default: 5432).
{db_username}                 Database username of the replica database.
{db_password}                 Database password of the replica database.
{master_machine_hostname}     Master node hostname.
{master_machine_ip}           Master node IP.
{slave_machine_hostname}      Slave node hostname.
{slave_machine_ip}            Slave node IP.
{app_server_vip}              Virtual IP of the application server cluster.
i                             Shared folder name.
{shared_folder_ip}            IP of the Samba server that hosts the shared folder.
{shared_folder_username}      Username of the user that accesses the shared folder.
{shared_folder_password}      Password of the user that accesses the shared folder.

For example:
sudo bash siemplify_installer.sh --run_mode ha --db_ip 172.30.203.90 --db_port 5432 --db_username postgres --db_password Password1 --hostname app2 --ha_host 172.30.203.92,app1 --ha_host 172.30.203.93,app2 --ha_cluster_vip 172.30.203.94 -sf //172.30.203.208/siemplifyshare -su siemplifyuser -sp Password1

  5. Check the status of the cluster after installation using the command:
    pcs status
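    If you want to see only the resources rather than the full cluster report, pcs can also list them on their own (a standard pcs subcommand, not specific to Siemplify):
    pcs status resources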

If the services listed in the Notes below are NOT displayed in the cluster status output, add them manually as follows:

pcs resource create webserver ocf:heartbeat:nginx configfile=/etc/nginx/nginx.conf op monitor timeout="5s" interval="5s"
pcs resource create Server_service systemd:Siemplify.Server op monitor interval="10s" timeout="15s"
pcs resource create Connectors_service systemd:Siemplify.Connectors op monitor interval="10s" timeout="15s"
pcs resource create ETL_service systemd:Siemplify.Server.ETL.DataProcessingEngine op monitor interval="10s" timeout="15s"
pcs resource create Indexer_service systemd:Siemplify.Server.Indexer op monitor interval="10s" timeout="15s"
pcs resource create PlaybookActions_service systemd:Siemplify.Server.PlaybookActions op monitor interval="10s" timeout="15s"
pcs resource create PythonExecution_service systemd:Siemplify.Server.PythonExecution op monitor interval="10s" timeout="15s"
pcs constraint order webserver then Server_service --force
pcs constraint colocation add webserver Server_service  INFINITY --force
pcs constraint colocation add Server_service Connectors_service INFINITY --force
pcs constraint colocation add Connectors_service ETL_service INFINITY --force
pcs constraint colocation add ETL_service Indexer_service INFINITY --force
pcs constraint colocation add Indexer_service PlaybookActions_service INFINITY --force
pcs constraint colocation add PlaybookActions_service PythonExecution_service INFINITY --force
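
After creating the resources and constraints, you can confirm that they were registered by re-running the status and constraint listings (standard pcs commands):
pcs status
pcs constraint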

Notes:

  • After you successfully install the Siemplify server, you have two nodes and the following eight resources:
    • Cluster_VIP
    • webserver = nginx
    • Server_service
    • Connectors_service
    • ETL_service
    • Indexer_service
    • PlaybookActions_service
    • PythonExecution_service
  • It does not matter which node is currently the active one.
  • When you connect to the VIP (virtual IP), you are connected to the active node.
  • When you connect to the DB VIP, you are connected to the active database.
  • If a resource is not working, you can reset its fail count with the following command:
    pcs resource failcount reset {RESOURCE}

For example:
pcs resource failcount reset webserver
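
To see the current fail count for a resource before resetting it, you can use the corresponding show command (a standard pcs command, shown here with the webserver resource):
pcs resource failcount show webserver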

How to switch nodes manually:

  1. Make sure both nodes are online by checking the cluster status with pcs status (see step 5 above). A node that is on standby is reported as standby in the node status section of the output.
  2. If a node is on standby, bring it back online with the following command:
    pcs cluster unstandby ha2.siemplify.com
  3. Once both nodes are online, put the active node on standby with the following command:
    pcs cluster standby ha1.siemplify.com

This flow causes the active node to switch over.

If you want to revert to the previous node, make sure to set that node to unstandby first.
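
For example, to move the resources back to the first node after the switch above (using the same hostnames as in the commands above):
pcs cluster unstandby ha1.siemplify.com
pcs cluster standby ha2.siemplify.com
pcs cluster unstandby ha2.siemplify.com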
