Installing on a Cluster

Before You Begin

Before you set up the nodes in a cluster, you must have already configured a cache server, as described in Setting Up a Cache Server. The cluster requires a cache server to hold data that must be available to all nodes. If the cache server isn't configured and running, you won't be able to set up the cluster.
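One quick way to confirm the cache server is up is a TCP probe from a prospective cluster node. This is a minimal sketch; the host and port are placeholders (the actual port depends on your cache server configuration), and it uses bash's built-in /dev/tcp redirection.

```shell
#!/usr/bin/env bash
# Sketch: verify the cache server is reachable before cluster setup.
# The host and port arguments are placeholders -- substitute your cache
# server's address and its configured port.
check_cache_reachable() {
  local host="$1" port="$2"
  # Attempt a TCP connection via bash's /dev/tcp, with a short timeout.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "cache server ${host}:${port} is reachable"
    return 0
  else
    echo "cache server ${host}:${port} is NOT reachable" >&2
    return 1
  fi
}
```

For example, `check_cache_reachable cache.example.com "$CACHE_PORT"` (hostname and port variable are illustrative) exits nonzero if the node cannot reach the cache server, so you can gate the rest of your setup on it.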

Note: Your license determines whether clustering is enabled and how many nodes are supported. To check the number of clustered servers your license allows, log into the Admin Console and review the license information.

Topology

The nodes in a cluster need to be installed on the same subnet, and preferably on the same switch. You cannot install nodes in a cluster across a WAN.
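A quick sanity check that two nodes share a subnet can be done by comparing address prefixes. This sketch assumes a /24 network; the addresses are illustrative, and you should adjust the comparison to your actual prefix length.

```shell
#!/usr/bin/env bash
# Sketch: check that two IPv4 addresses fall in the same /24 subnet
# by comparing their first three octets. Addresses are placeholders.
same_subnet_24() {
  local a="$1" b="$2"
  # ${a%.*} strips the last octet, leaving the /24 prefix.
  [ "${a%.*}" = "${b%.*}" ]
}
```

For example, `same_subnet_24 10.0.1.11 10.0.1.12` succeeds, while `same_subnet_24 10.0.1.11 10.0.2.12` fails, flagging a node that would sit outside the cluster's subnet.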

Starting a New Cluster

Always wait for the first node in the cluster to be up and running with clustering enabled before you start the other cluster nodes. Waiting a minute or more between starting each node ensures the nodes don't compete to become the senior member. As the senior member, the first node you start has a unique role in the cluster. See the clustering overview for more information.
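The staggered startup described above can be sketched as a small script. The hostnames and START_CMD are placeholders (in practice START_CMD might be something like `ssh "$node" service jive-application start`); only the one-at-a-time ordering and delay reflect the guidance here.

```shell
#!/usr/bin/env bash
# Sketch: start cluster nodes one at a time with a pause between them,
# so the first node is fully up (and becomes the senior member) before
# any other node joins. START_CMD is an illustrative placeholder.
START_CMD=${START_CMD:-"echo starting"}

start_nodes_staggered() {
  local delay="$1"; shift
  local first=1
  for node in "$@"; do
    # Pause before every node after the first, so nodes never race
    # to join the cluster at the same time.
    [ "$first" -eq 1 ] || sleep "$delay"
    first=0
    $START_CMD "$node"
  done
}

# Example with placeholder hostnames, waiting 60 seconds between nodes:
# start_nodes_staggered 60 node1.example.com node2.example.com node3.example.com
```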

Overview of a Cluster Installation

  1. Be sure to read the System Requirements for important information about software, hardware, and network requirements and recommendations.
  2. Provision a database server. Be sure to read the Database Prerequisites.
  3. If you're going to use a separate server for binary storage, Configure a Binary Storage Provider.
  4. If your community will use the document conversion feature, see Setting Up a Document Conversion.
  5. Install a cache server on a separate server.
  6. Install and configure the application on the first node in your cluster.
  7. Install and configure the application on the subsequent nodes in your cluster.

Installing on a Cluster

Important: If, as part of your new installation, you're setting up one node as a template, then copying the home directory (such as /usr/local/jive/applications/your_instance_name/home) to other nodes in the cluster, you must remove the node.id file and the crypto directory from the home directory before starting the server. The application will correctly populate them.
  1. Use the Jive application package to set up a cache server on a separate machine. See Setting Up a Cache Server for more information. Note the cache server address for use in setting up the application servers.
  2. Before proceeding, make sure the cache server you set up is running. It must be running while you set up the application server nodes.
  3. On each node in the cluster, install the application using the package (RPM on Linux), but don't run the Admin Console's setup wizard.

    See the Linux installation instructions for more information on installing the application.

  4. Start the primary node and navigate to its instance with a web browser. In the setup screen provided, enter the address of the cache server you installed, then complete the Admin Console setup wizard.
  5. After you've finished with the setup wizard, restart the node.
  6. Copy the jive.license file, the jive_startup.xml file, and the search folder from the home directory on the primary node to the home directory in each of the other nodes in the cluster. The home directory is typically found here: /usr/local/jive/applications/your_instance_name/home.
  7. On each of the secondary nodes, remove the node.id file and the crypto directory from the home directory. (The application will correctly populate these on each node when they are started for the first time.)
  8. Start the application on each of the secondary nodes (service jive-application start followed by service jive-httpd start). Because they are connecting to the same database used by the primary server, each secondary node will detect that clustering is enabled and will pick up the configuration you set on the primary node.
  9. Restart all the servers in the cluster to ensure that the address of each node in the cluster is known to all the other nodes.
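Steps 6 and 7 above can be sketched for a single secondary node as follows. This assumes the primary's home directory is accessible as a local path (for remote nodes you would copy with scp or rsync instead); the function name is illustrative, and the paths follow the defaults from this guide.

```shell
#!/usr/bin/env bash
# Sketch: prepare one secondary node's home directory from the primary's.
# Example paths (adjust your_instance_name):
#   primary_home   = /usr/local/jive/applications/your_instance_name/home
#   secondary_home = the same path on (or mounted from) the secondary node
prepare_secondary_home() {
  local primary_home="$1" secondary_home="$2"
  # Step 6: copy the license, startup configuration, and search folder.
  cp    "$primary_home/jive.license"     "$secondary_home/"
  cp    "$primary_home/jive_startup.xml" "$secondary_home/"
  cp -r "$primary_home/search"           "$secondary_home/"
  # Step 7: remove node-specific state so this node regenerates its own
  # node.id and crypto directory on first startup.
  rm -f  "$secondary_home/node.id"
  rm -rf "$secondary_home/crypto"
}
```

After running this for each secondary node, start the application on each one as in step 8.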