Adding Cassandra nodes

When adding Cassandra nodes to a cluster, it is essential to consider the following two key points:

  • The existing positions of nodes in the Cassandra ring should not change to minimize streaming and maintain a balanced ring.
  • The number of nodes in all data centers must remain consistent.

To ensure the first objective, it is crucial to double the number of nodes in the Cassandra cluster each time you add new nodes.

For example, if you start with a standard 12-node cluster installation topology distributed across two data centers, you will have a total of six Cassandra nodes—three in each data center. To expand this cluster, you should add three nodes to each data center, increasing the total node count to 12 (six nodes in each data center). If further expansion is required, you should add six additional nodes to each data center, resulting in a total node count of 24 (12 nodes in each data center).

This document provides instructions for adding three new Cassandra nodes to an existing Edge for Private Cloud installation. The same steps can be followed to add additional nodes. Always ensure that when expanding your cluster, you double the number of nodes.
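The doubling rule can be checked with a quick calculation. The sketch below simply prints the progression from the standard three-nodes-per-data-center starting point; the numbers match the example above:

```shell
# Print the doubling progression for a two-data-center cluster,
# starting from the standard 3 Cassandra nodes per data center.
n=3
for step in 0 1 2; do
  echo "expansion $step: $n nodes per DC, $((2 * n)) nodes total"
  n=$((2 * n))
done
```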

For a list of the system requirements for a Cassandra node, refer to the Installation Requirements section.

Existing Edge configuration

All of the supported Edge topologies for a production system use three Cassandra nodes. The three nodes are passed to the CASS_HOSTS property in the config file, as shown below:

HOSTIP=$(hostname -i)
REGION=dc-1
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1"

Note that the REGION property specifies the region name as "dc-1". You need that information when adding the new Cassandra nodes.

Modifying the config file to add the three new Cassandra nodes

In this example, the three new Cassandra nodes are represented by the variables $IP14, $IP15, and $IP16, where each variable holds the IP address of one of the new nodes.

You must first update the Edge configuration file to add the new nodes:

# Add the new node IP addresses.
HOSTIP=$(hostname -i)
# Update CASS_HOSTS to add each new node after an existing node.
# Must use IP addresses for CASS_HOSTS, not DNS names.
CASS_HOSTS="$IP1:1,1 $IP14:1,1 $IP2:1,1 $IP15:1,1 $IP3:1,1 $IP16:1,1" 

This ordering ensures that the existing nodes retain their initial token settings and that the initial token of each new node falls between the token values of the existing nodes.
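To see why interleaving preserves the existing token assignments, consider a toy example. The real token range for Cassandra's RandomPartitioner is 0 to 2^127; the sketch below uses a miniature range of 120 so the arithmetic is readable. Evenly spaced tokens for three nodes reappear unchanged among the tokens for six:

```shell
# Toy illustration only: a miniature token range of 120 stands in for the
# real 0..2^127 range. Evenly spaced initial tokens for 3 nodes are a
# subset of the evenly spaced tokens for 6 nodes, so doubling leaves the
# existing nodes' tokens untouched.
RANGE=120
for N in 3 6; do
  tokens=""
  i=0
  while [ "$i" -lt "$N" ]; do
    tokens="$tokens $((i * RANGE / N))"
    i=$((i + 1))
  done
  echo "$N nodes:$tokens"
done
```

The three-node tokens (0, 40, 80) survive unchanged in the six-node ring, and the new nodes slot in between, which is exactly what the interleaved CASS_HOSTS ordering achieves.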

Configure Edge

After editing the config file, you must:

  • Reconfigure the existing Cassandra nodes
  • Install Cassandra on the new nodes
  • Reconfigure the Management Server

Reconfigure the existing Cassandra nodes

On the existing Cassandra nodes:

  1. Rerun setup.sh with the "-p c" profile and the new config file:
    /opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile
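If you administer the existing nodes over SSH, the rerun can be scripted. The following is a hypothetical dry-run sketch, not part of the Apigee tooling: the addresses are placeholders, and echo prints each command instead of executing it (replace echo with ssh, or run the command directly on each node):

```shell
# Hypothetical dry-run: print the setup command for each existing
# Cassandra node. Placeholder addresses; replace echo with ssh (or run
# the command locally on each node) to execute for real.
IP1=10.0.0.1 IP2=10.0.0.2 IP3=10.0.0.3
for host in "$IP1" "$IP2" "$IP3"; do
  echo "$host: /opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile"
done
```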

Install Cassandra on the new nodes

Use the procedure below to install Cassandra on the new nodes.

On each new Cassandra node:

  1. Install Cassandra on the three nodes:
    1. Install apigee-setup on the first node as described in Install the Edge apigee-setup utility.
    2. Install Cassandra on the first node by using the updated config file:
      /opt/apigee/apigee-setup/bin/setup.sh -p c -f updatedConfigFile
    3. Repeat these two steps for the remaining new Cassandra nodes.
  2. Rebuild the three new Cassandra nodes, specifying the region name to be the data center in which you are adding the node (dc-1, dc-2, and so on). In this example, it is dc-1:
    1. On the first node, run:
      /opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h nodeIP rebuild dc-1

      Where nodeIP is the IP address of the Cassandra node.

      You only need to pass your username and password if you enabled JMX authentication for Cassandra.

    2. Repeat this step on the remaining new Cassandra nodes.
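Because nodetool takes the target host with -h, the rebuilds can be driven from a single machine without SSH. The following is a hypothetical dry-run sketch with placeholder addresses for the new nodes (drop echo to execute; add -u/-pw only if JMX authentication is enabled):

```shell
# Hypothetical dry-run: print the rebuild command for each new node.
# Placeholder addresses; drop echo to execute for real.
IP14=10.0.0.14 IP15=10.0.0.15 IP16=10.0.0.16
for host in "$IP14" "$IP15" "$IP16"; do
  echo "/opt/apigee/apigee-cassandra/bin/nodetool -h $host rebuild dc-1"
done
```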

Reconfigure the Management Server

On the Management Server node:

  1. Rerun setup.sh to update the Management Server for the newly added Cassandra nodes:
    /opt/apigee/apigee-setup/bin/setup.sh -p ms -f updatedConfigFile

Restart all Routers and Message Processors

  1. On all Routers:
    /opt/apigee/apigee-service/bin/apigee-service edge-router restart
  2. On all Message Processors:
    /opt/apigee/apigee-service/bin/apigee-service edge-message-processor restart

Free disk space on existing Cassandra nodes

After you add a new node, you can use the nodetool cleanup command on the pre-existing nodes to free up disk space. This command removes data for token ranges that are no longer owned by the pre-existing Cassandra node.

To free up disk space on pre-existing Cassandra nodes after adding a new node, execute the following command:

/opt/apigee/apigee-cassandra/bin/nodetool [-u username -pw password] -h cassandraIP cleanup

You only need to pass your username and password if you enabled JMX authentication for Cassandra.
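cleanup triggers compaction and is I/O-intensive, so it is safer to run it on one pre-existing node at a time rather than on all nodes in parallel. The following is a hypothetical dry-run sketch (placeholder addresses; drop echo to execute):

```shell
# Hypothetical dry-run: print the cleanup command for each pre-existing
# node. A serial loop like this avoids running the I/O-heavy cleanup on
# every node at once. Placeholder addresses; drop echo to execute.
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do
  echo "/opt/apigee/apigee-cassandra/bin/nodetool -h $host cleanup"
done
```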

Verify rebuild

Use the following commands to verify that the rebuild was successful:

nodetool [-u username -pw password] -h nodeIP netstats

This command should indicate MODE: Normal when the node is up and the indexes are built.

nodetool [-u username -pw password] -h nodeIP statusthrift

This command should indicate that the Thrift server is running, which allows Cassandra to accept client requests over the Thrift interface.

nodetool [-u username -pw password] -h nodeIP statusbinary

This command should indicate that the native transport (or binary protocol), used by CQL clients, is running.

nodetool [-u username -pw password] -h nodeIP describecluster

This command should show that the new nodes are using the same schema version as the older nodes.
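All four checks can be looped over every new node from one machine. The following is a hypothetical dry-run sketch (placeholder addresses; drop echo to execute, and add -u/-pw if JMX authentication is enabled):

```shell
# Hypothetical dry-run: print all four verification commands for each
# new node. Placeholder addresses; drop echo to execute for real.
for host in 10.0.0.14 10.0.0.15 10.0.0.16; do
  for cmd in netstats statusthrift statusbinary describecluster; do
    echo "/opt/apigee/apigee-cassandra/bin/nodetool -h $host $cmd"
  done
done
```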

For more information on using nodetool, see the nodetool usage documentation.