At times, you might need to decommission a data center. For example, if you are upgrading your operating system, you need to install the new operating system in a new data center and then decommission the old data center. The following sections present an example of decommissioning a data center, in which there are two data centers, dc-1 and dc-2, on a 12-node clustered installation:
- dc-1 is the data center to be decommissioned.
- dc-2 is a second data center, which is used in the decommissioning procedure.
If you are upgrading your operating system, dc-2 could be the data center in which you have installed the new version of the operating system (OS). However, installing a new OS isn't required to decommission a data center.
Considerations before decommissioning a data center
Keep the following considerations in mind when you decommission a data center:
- Block all runtime and management traffic to the data center being decommissioned and redirect them to other data centers.
- After decommissioning the data center, you will have reduced capacity in your Apigee cluster. To make up for it, consider increasing capacity in the remaining data centers or adding data centers after decommissioning.
- During the decommission process, there is a potential for analytics data loss, depending on which analytics components are installed in the data center being decommissioned. You can find more details in Add or remove Qpid nodes.
- Before you decommission a data center, you should understand how all the components are configured across all data centers, especially the OpenLDAP, ZooKeeper, Cassandra, and Postgres servers. You should also take backups of all components and their configurations.
Before you start
- Management Server: All the decommission steps are highly dependent on the Management Server. If you have only one Management Server available, we recommend that you install a new Management Server component on a data center other than dc-1 before decommissioning the Management Server on dc-1, and make sure that one of the Management Servers is always available.
- Router: Before decommissioning a Router, disable reachability of Routers by blocking the port 15999. Ensure no runtime traffic is being directed at the Routers being decommissioned.
- Cassandra and ZooKeeper: The sections below describe how to decommission dc-1 in a two data center setup. If you have more than two data centers, make sure to remove all references to the node being decommissioned (dc-1 in this case) from all silent configuration files across all the remaining data centers. For Cassandra nodes that are to be decommissioned, drop those hosts from CASS_HOSTS. The remaining Cassandra nodes should remain in the original ordering of CASS_HOSTS.
- Postgres: If you decommission the Postgres master, make sure to promote one of the available standby nodes as the new Postgres master. While the Qpid server keeps a buffer in the queue, if the Postgres master is unavailable for a long time, you risk losing analytics data.
Prerequisites
Before decommissioning any component, we recommend that you perform a complete backup of all nodes. Use the procedure for your current version of Edge to perform the backup. For more information on backup, see Backup and restore.
Note: If you have multiple Cassandra or ZooKeeper nodes, back them up one at a time, as the backup process temporarily shuts down ZooKeeper.
- Ensure that Edge is up and running before decommissioning, using the command:
/opt/apigee/apigee-service/bin/apigee-all status
- Make sure that no runtime traffic is currently arriving at the data center you are decommissioning.
Order of decommissioning components
If you install Edge for Private Cloud on multiple nodes, you should decommission Edge components on those nodes in the following order:
- Edge UI (edge-ui)
- Management Server (edge-management-server)
- OpenLDAP (apigee-openldap)
- Router (edge-router)
- Message Processor (edge-message-processor)
- Qpid Server and Qpidd (edge-qpid-server and apigee-qpidd)
- Postgres and PostgreSQL database (edge-postgres-server and apigee-postgresql)
- ZooKeeper (apigee-zookeeper)
- Cassandra (apigee-cassandra)
The following sections explain how to decommission each component.
Edge UI
To stop and uninstall the Edge UI component of dc-1, enter the following commands:
/opt/apigee/apigee-service/bin/apigee-service edge-ui stop
/opt/apigee/apigee-service/bin/apigee-service edge-ui uninstall
Management Server
To decommission the Management Server on dc-1, do the following steps:
- Stop the Management Server on dc-1:
apigee-service edge-management-server stop
- Find the UUID of Management Server registered in dc-1:
curl -u <AdminEmailID>:'<AdminPassword>' \
 -X GET "http://{MS_IP}:8080/v1/servers?pod=central&region=dc-1&type=management-server"
- Deregister server’s type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
 -d "type=management-server&region=dc-1&pod=central&uuid=UUID&action=remove"
- Delete the server. Note: If other components are also installed on this server, deregister all of them first before deleting the UUID.
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/{UUID}
- Uninstall Management Server component on dc-1:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server uninstall
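In step 2 above, the Management Server's UUID must be pulled out of the JSON returned by the `/v1/servers` call. The following is a minimal sketch of such a filter; the response payload below is a hypothetical sample (field name `uUID` and values are assumptions), and in practice you would pipe the curl output into the same filter.

```shell
# Hypothetical sample of a /v1/servers response; in practice, pipe the
# output of the curl command in step 2 into this filter instead.
response='[ { "internalIP" : "10.126.0.10", "type" : [ "management-server" ], "uUID" : "b2345678-90ab-cdef-1234-567890abcdef" } ]'
# Pull out the uUID field, then strip the JSON key and quotes.
uuid=$(printf '%s' "$response" | grep -o '"uUID" : "[^"]*"' | sed 's/.*: "//; s/"$//')
echo "$uuid"
```

A JSON-aware tool such as jq, if available on the node, is a more robust alternative to grep/sed.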
OpenLDAP
This section explains how to decommission OpenLDAP on dc-1.
Note: If you have more than two data centers, see Setups with more than two data centers below.
To decommission OpenLDAP on dc-1, do the following steps:
- Back up the dc-1 OpenLDAP node by following the steps in How to back up.
- Break the data replication between the two data centers, dc-1 and dc-2, by executing the following steps in both data centers.
- Check the present state:
ldapsearch -H ldap://{HOST}:{PORT} -LLL -x -b "cn=config" -D "cn=admin,cn=config" -w {credentials} -o ldif-wrap=no 'olcSyncRepl' | grep olcSyncrepl
The output should be similar to the following:
olcSyncrepl: {0}rid=001 provider=ldap://{HOST}:{PORT}/ binddn="cn=manager,dc=apigee,dc=com" bindmethod=simple credentials={credentials} searchbase="dc=apigee,dc=com" attrs="*,+" type=refreshAndPersist retry="60 1 300 12 7200 +" timeout=1
- Create a file break_repl.ldif containing the following commands:
dn: olcDatabase={2}bdb,cn=config
changetype: modify
delete: olcSyncRepl

dn: olcDatabase={2}bdb,cn=config
changetype: modify
delete: olcMirrorMode
- Run the ldapmodify command:
ldapmodify -x -w {credentials} -D "cn=admin,cn=config" -H "ldap://{HOST}:{PORT}/" -f path/to/file/break_repl.ldif
The output should be similar to the following:
modifying entry "olcDatabase={2}bdb,cn=config"
modifying entry "olcDatabase={2}bdb,cn=config"
- Check the present state:
You can verify that dc-2 is no longer replicating to dc-1 by creating an entry in dc-2 LDAP and ensuring it doesn't show up in LDAP of dc-1.
Optionally, you can follow the steps below, which create a read-only user in the dc-2 OpenLDAP node and then check whether the user is replicated or not. The user is subsequently deleted.
- Create a file readonly-user.ldif in dc-2 with the following contents:
dn: uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com
objectClass: organizationalPerson
objectClass: person
objectClass: inetOrgPerson
objectClass: top
cn: readonly-user
sn: readonly-user
userPassword: {testPassword}
- Add the user with the `ldapadd` command in dc-2:
ldapadd -H ldap://{HOST}:{PORT} -w {credentials} -D "cn=manager,dc=apigee,dc=com" -f path/to/file/readonly-user.ldif
The output will be similar to:
adding new entry "uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com"
- Search for the user in dc-1 to make sure the user is not replicated. If the user is not present in dc-1, you can be sure that the two LDAPs are no longer replicating:
ldapsearch -H ldap://{HOST}:{PORT} -x -w {credentials} -D "cn=manager,dc=apigee,dc=com" -b uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com -LLL
The output should be similar to the following:
No such object (32)
Matched DN: ou=users,ou=global,dc=apigee,dc=com
- Remove the read-only user you added previously:
ldapdelete -v -H ldap://{HOST}:{PORT} -w {credentials} -D "cn=manager,dc=apigee,dc=com" "uid=readonly-user,ou=users,ou=global,dc=apigee,dc=com"
- Stop OpenLDAP in dc-1:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap stop
- Uninstall the OpenLDAP component on dc-1:
/opt/apigee/apigee-service/bin/apigee-service apigee-openldap uninstall
Router
This section explains how to decommission a Router. See Remove a server for more details about removing the Router.
The following steps decommission the Router from dc-1. If there are multiple Router nodes configured in dc-1, perform the steps on all Router nodes one at a time.
Note: These steps assume that the Router's health-check port 15999 is configured in your load balancer, and that blocking port 15999 will make the Router unreachable. You may need root access to block the port.
To decommission a Router, do the following steps:
- Disable the reachability of the Router by blocking port 15999, the health check port. Ensure that runtime traffic is blocked on this data center:
iptables -A INPUT -i eth0 -p tcp --dport 15999 -j REJECT
- Verify that the Router is no longer reachable:
curl -vvv -X GET http://{ROUTER_IP}:15999/v1/servers/self/reachable
The output should be similar to the following:
About to connect() to 10.126.0.160 port 15999 (#0)
Trying 10.126.0.160...
Connection refused
Failed connect to 10.126.0.160:15999; Connection refused
Closing connection 0
curl: (7) Failed connect to 10.126.0.160:15999; Connection refused
- Get the UUID of the Router, as described in Get UUIDs.
- Stop the router:
/opt/apigee/apigee-service/bin/apigee-service edge-router stop
- List the available gateway pods in the organization using the following command:
curl -u <AdminEmailID>:<AdminPassword> -X GET "http://{MS_IP}:8080/v1/organizations/{ORG}/pods"
See About Pods.
- Deregister the server’s type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \ -d "type=router&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"
- Delete the server:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID
- Uninstall edge-router. See Remove a server.
/opt/apigee/apigee-service/bin/apigee-service edge-router uninstall
- Flush the iptables rules to unblock port 15999:
iptables -F
Message Processor
This section describes how to decommission the Message Processor from dc-1. See Remove a server for more details about removing the Message Processor.
Since we are assuming that dc-1 has a 12-node clustered installation, there are two Message Processor nodes configured in dc-1. Perform the following commands in both the nodes.
- Get the UUIDs of the Message Processors, as described in Get UUIDs.
- Stop the Message Processor:
apigee-service edge-message-processor stop
- Deregister the server’s type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
 -d "type=message-processor&region=dc-1&pod=gateway-1&uuid=UUID&action=remove"
- Disassociate an environment from the Message Processor.
Note: You need to remove the bindings on each org/env that associates the Message Processor UUID.
curl -H "Content-Type:application/x-www-form-urlencoded" -u <AdminEmailID>:'<AdminPassword>' \
 -X POST http://{MS_IP}:8080/v1/organizations/{ORG}/environments/{ENV}/servers \
 -d "action=remove&uuid=UUID"
- Uninstall the Message Processor:
/opt/apigee/apigee-service/bin/apigee-service edge-message-processor uninstall
- Delete the server:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID
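The environment disassociation above must be repeated for every org/env pair that binds the Message Processor's UUID. The following dry-run sketch prints the call for each pair; the org/env pairs, MS_IP, and UUID are hypothetical placeholders, and you would replace the echo with the actual curl call to execute.

```shell
# Dry-run sketch: print the disassociation call for every org/env pair
# that binds this Message Processor. All values here are hypothetical
# placeholders; replace echo with curl to execute.
MS_IP="192.0.2.10"
UUID="mp-uuid-0001"
PAIRS="org1:prod org1:test org2:prod"
for pair in $PAIRS; do
  org=${pair%%:*}   # text before the colon
  env=${pair##*:}   # text after the colon
  echo "POST http://${MS_IP}:8080/v1/organizations/${org}/environments/${env}/servers -d action=remove&uuid=${UUID}"
done
```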
Qpid Server and Qpidd
This section explains how to decommission Qpid Server (edge-qpid-server) and Qpidd (apigee-qpidd).
There are two Qpid nodes configured in dc-1, so you must do the following steps for both nodes:
- Get the UUID for Qpidd, as described in Get UUIDs.
- Stop edge-qpid-server and apigee-qpidd:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server stop
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd stop
- Get a list of Analytics and consumer groups:
curl -u <AdminEmailID>:'<AdminPassword>' -X GET http://{MS_IP}:8080/v1/analytics/groups/ax
- Remove Qpid from the consumer group:
curl -u <AdminEmailID>:'<AdminPassword>' -H "Content-Type: application/json" -X DELETE \ "http://{MS_IP}:8080/v1/analytics/groups/ax/{ax_group}/consumer-groups/{consumer_group}/consumers/{QPID_UUID}"
- Remove Qpid from the analytics group:
curl -v -u <AdminEmailID>:'<AdminPassword>' \ -X DELETE "http://{MS_IP}:8080/v1/analytics/groups/ax/{ax_group}/servers?uuid={QPID_UUID}&type=qpid-server"
- Deregister the Qpid server from the Edge installation:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
 -d "type=qpid-server&region=dc-1&pod=central&uuid={QPID_UUID}&action=remove"
- Remove the Qpid server from the Edge installation:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/{QPID_UUID}
- Restart all edge-qpid-server components on all nodes to make sure the change is picked up by those components:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server restart
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server wait_for_ready
- Uninstall edge-qpid-server and apigee-qpidd:
/opt/apigee/apigee-service/bin/apigee-service edge-qpid-server uninstall
/opt/apigee/apigee-service/bin/apigee-service apigee-qpidd uninstall
Postgres and PostgreSQL
The data center you are decommissioning could have a Postgres master or a Postgres standby. The following sections explain how to decommission them:
Decommissioning Postgres master
Note: If you decommission the Postgres master, make sure to promote one of the available standby nodes as the new Postgres master. While the Qpid queues buffer data, if the Postgres master is unavailable for a long time, you risk losing analytics data.
To decommission Postgres master:
- Back up the dc-1 Postgres master node by following the instructions in the following links:
- Get the UUIDs of the Postgres servers, as described in Get UUIDs.
- On dc-1, stop edge-postgres-server and apigee-postgresql on the current master:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
- On the standby node on dc-2, enter the following command to make it the master node:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql promote-standby-to-master <IP of old Postgres master>
Note: If you have more than one standby Postgres node, you must add host entries on the new master and update the replication settings for all available Postgres standby nodes.
To add host entries to the new Postgres master, follow the steps in the appropriate section below:
If there is only one standby node remaining
For example, suppose that before decommissioning, there were three Postgres nodes configured. You decommissioned the existing master and promoted one of the remaining postgres standby nodes to master. Configure the remaining standby node with the following steps:
- On the new master, edit the configuration file to set:
PG_MASTER=IP_or_DNS_of_new_PG_MASTER
PG_STANDBY=IP_or_DNS_of_PG_STANDBY
- Enable replication on the new master:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-master -f configFile
If there is more than one standby node remaining
- Add the following configuration in /opt/apigee/customer/application/postgresql.properties:
conf_pg_hba_replication.connection=host replication apigee standby_1_ip/32 trust \n host replication apigee standby_2_ip/32 trust
- Ensure that the file /opt/apigee/customer/application/postgresql.properties is owned by the apigee user:
chown apigee:apigee /opt/apigee/customer/application/postgresql.properties
- Restart apigee-postgresql:
apigee-service apigee-postgresql restart
- Modify the configuration file /opt/silent.conf and update the PG_MASTER field with the IP address of the new Postgres master.
- Remove any old Postgres data with the following command:
rm -rf /opt/apigee/data/apigee-postgresql/
- Set up replication on the standby node:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql setup-replication-on-standby -f configFile
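The conf_pg_hba_replication.connection property shown above grows by one host entry per standby, with the entries separated by a literal "\n". The following sketch builds the property value for an arbitrary set of standby IPs; the addresses are hypothetical.

```shell
# Build the conf_pg_hba_replication.connection property for an arbitrary
# set of standby nodes. The IPs are hypothetical; the literal "\n" token
# separates the host entries, as in the example property above.
STANDBY_IPS="10.0.0.21 10.0.0.22"
line="conf_pg_hba_replication.connection="
first=1
for ip in $STANDBY_IPS; do
  entry="host replication apigee ${ip}/32 trust"
  if [ "$first" -eq 1 ]; then
    line="${line}${entry}"
    first=0
  else
    line="${line} \\n ${entry}"   # append a literal backslash-n separator
  fi
done
printf '%s\n' "$line"
```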
To update the replication settings on a standby node, update the PG_MASTER entry in the standby's configuration file with the IP address of the new Postgres master, remove any old Postgres data, and rerun the setup-replication-on-standby command shown above.
- Verify that Postgres master is set up correctly by entering the following command in dc-2:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
- Remove and add Postgres servers from the analytics group and the consumer group:
- Remove the old Postgres server from the analytics group following the instructions in Remove a Postgres server from an analytics group.
- Add a new Postgres server to the analytics group following the instructions in Add an existing Postgres server to an analytics group.
- Deregister the old Postgres server from dc-1:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
 -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"
- Delete the old Postgres server from dc-1:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID
- The old Postgres master is now safe to decommission. Uninstall edge-postgres-server and apigee-postgresql:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server uninstall
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql uninstall
Decommissioning Postgres standby
Note: The documentation for a 12-node clustered installation shows the dc-1 Postgres node as the master, but for convenience, this section assumes that the dc-1 Postgres node is the standby and the dc-2 Postgres node is the master.
To decommission the Postgres standby, do the following steps:
- Get the UUIDs of the Postgres servers, following the instructions in Get UUIDs.
- Stop edge-postgres-server and apigee-postgresql on the current standby node in dc-1:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server stop
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql stop
- Remove and add Postgres servers from the analytics group and the consumer group:
- Remove the old Postgres server from the analytics group following the instructions in Remove a Postgres server from an analytics group.
- Add a new Postgres server to the analytics group following the instructions in Add an existing Postgres server to an analytics group.
- Deregister the old Postgres server from dc-1:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://{MS_IP}:8080/v1/servers \
 -d "type=postgres-server&region=dc-1&pod=analytics&uuid=UUID&action=remove"
- Delete the old Postgres server from dc-1:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://{MS_IP}:8080/v1/servers/UUID
- The old Postgres standby is now safe to decommission. Uninstall edge-postgres-server and apigee-postgresql:
/opt/apigee/apigee-service/bin/apigee-service edge-postgres-server uninstall
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql uninstall
ZooKeeper and Cassandra
This section explains how to decommission ZooKeeper and Cassandra servers in a two data center setup.
If you have more than two data centers, make sure to remove all references to the node being decommissioned (dc-1 in this case) from all silent configuration files across all the remaining data centers. For Cassandra nodes that are to be decommissioned, drop those hosts from CASS_HOSTS. The remaining Cassandra nodes should remain in the original ordering of CASS_HOSTS.
Note on ZooKeeper: You must maintain a quorum of voter nodes while modifying the ZK_HOSTS property in the configuration file, to ensure that the ZooKeeper ensemble stays functional. You must have an odd number of voter nodes in your configuration. For more information, see Apache ZooKeeper maintenance tasks.
To decommission ZooKeeper and Cassandra servers:
- Back up the dc-1 Cassandra and ZooKeeper nodes by following the instructions in the following links:
- List the UUIDs of the ZooKeeper and Cassandra servers in the data center where the Cassandra nodes are about to be decommissioned:
apigee-adminapi.sh servers list -r dc-1 -p central -t application-datastore --admin <AdminEmailID> --pwd '<AdminPassword>' --host localhost
- Deregister the server’s type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://MS_IP:8080/v1/servers -d "type=cache-datastore&type=user-settings-datastore&type=scheduler-datastore&type=audit-datastore&type=apimodel-datastore&type=application-datastore&type=edgenotification-datastore&type=identityzone-datastore&type=auth-datastore&region=dc-1&pod=central&uuid=UUID&action=remove"
- Delete the server:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MS_IP:8080/v1/servers/UUID
- Update the configuration file with the IPs of the decommissioned nodes removed from ZK_HOSTS and CASS_HOSTS.
Example: Suppose you have the IPs $IP1 $IP2 $IP3 in dc-1 and $IP4 $IP5 $IP6 in dc-2, and you are decommissioning dc-1. Then you should remove the IPs $IP1 $IP2 $IP3 from the configuration files.
Existing configuration file entries:
ZK_HOSTS="$IP1 $IP2 $IP3 $IP4 $IP5 $IP6"
CASS_HOSTS="$IP1:1,1 $IP2:1,1 $IP3:1,1 $IP4:2,1 $IP5:2,1 $IP6:2,1"
New configuration file entries:
ZK_HOSTS="$IP4 $IP5 $IP6"
CASS_HOSTS="$IP4:2,1 $IP5:2,1 $IP6:2,1"
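The edit in the example above can be sketched as a small filter that drops the dc-1 hosts from the host list while preserving the order of the remaining nodes; the same approach works for CASS_HOSTS. The concrete IPs below are hypothetical stand-ins for $IP1..$IP6.

```shell
# Drop the decommissioned dc-1 hosts from ZK_HOSTS while preserving the
# order of the remaining nodes. IPs are hypothetical stand-ins.
DC1_IPS="10.0.1.1 10.0.1.2 10.0.1.3"
ZK_HOSTS="10.0.1.1 10.0.1.2 10.0.1.3 10.0.2.4 10.0.2.5 10.0.2.6"
NEW_ZK=""
for host in $ZK_HOSTS; do
  case " $DC1_IPS " in
    *" $host "*) ;;                          # skip decommissioned hosts
    *) NEW_ZK="${NEW_ZK:+$NEW_ZK }$host" ;;  # keep, preserving order
  esac
done
echo "ZK_HOSTS=\"$NEW_ZK\""
```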
- Update the silent configuration file (modified in the previous step) with the IPs of the decommissioned nodes removed, and run the Management Server profile on all nodes hosting Management Servers:
/opt/apigee/apigee-setup/bin/setup.sh -p ms -f updated_config_file
- Update the configuration file with the IPs of the decommissioned nodes removed, and run the MP/RMP profile on all Router and Message Processor nodes:
- If Edge Router and Message Processor are configured on the same node, enter:
/opt/apigee/apigee-setup/bin/setup.sh -p rmp -f updated_config_file
If Edge Router and Message Processor are configured on separate nodes, enter the following:
For the Router:
/opt/apigee/apigee-setup/bin/setup.sh -p r -f updated_config_file
For the Message Processor:
/opt/apigee/apigee-setup/bin/setup.sh -p mp -f updated_config_file
- Reconfigure all Qpid nodes, with the IPs of decommissioned nodes removed from Response File:
/opt/apigee/apigee-setup/bin/setup.sh -p qs -f updated_config_file
- Reconfigure all Postgres nodes, with the IPs of decommissioned nodes removed from Response File:
/opt/apigee/apigee-setup/bin/setup.sh -p ps -f updated_config_file
- Alter the system_auth keyspace. If you have Cassandra auth enabled, update the replication factor of the system_auth keyspace by running the following command on an existing Cassandra node:
ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'dc-2': '3'};
This command sets the replication factor to '3', indicating three replicas in dc-2. Modify this value as necessary.
After completing this step, the Cassandra topology should not have dc-1 in any of the keyspaces.
- Decommission the Cassandra nodes on dc-1, one by one.
To decommission the Cassandra nodes, enter the following command:
/opt/apigee/apigee-cassandra/bin/nodetool -h cassIP -u cassandra -pw '<AdminPassword>' decommission
- Check the connection to the Cassandra nodes in dc-1 using one of the following commands:
/opt/apigee/apigee-cassandra/bin/cqlsh cassIP 9042 -u cassandra -p '<AdminPassword>'
Or run the following secondary verification command on the decommissioned node:
/opt/apigee/apigee-cassandra/bin/nodetool netstats
The above command should return:
Mode: DECOMMISSIONED
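Because the dc-1 Cassandra nodes must be decommissioned one at a time, the decommission step above can be sketched as a loop. This dry run only prints each command (the IPs are hypothetical); remove the echo to execute, and confirm Mode: DECOMMISSIONED on each node before moving to the next.

```shell
# Dry-run sketch: print the nodetool decommission command for each dc-1
# Cassandra node, one at a time. IPs are hypothetical placeholders;
# remove echo to execute for real.
DC1_CASS_IPS="10.0.1.1 10.0.1.2 10.0.1.3"
for ip in $DC1_CASS_IPS; do
  echo "/opt/apigee/apigee-cassandra/bin/nodetool -h $ip -u cassandra -pw '<AdminPassword>' decommission"
done
```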
- Run the DS profile for all Cassandra and ZooKeeper nodes in dc-2:
/opt/apigee/apigee-setup/bin/setup.sh -p ds -f updated_config_file
- Stop apigee-cassandra and apigee-zookeeper in dc-1:
apigee-service apigee-cassandra stop
apigee-service apigee-zookeeper stop
- Uninstall apigee-cassandra and apigee-zookeeper in dc-1:
apigee-service apigee-cassandra uninstall
apigee-service apigee-zookeeper uninstall
Delete the bindings from dc-1
To delete the bindings from dc-1, do the following steps:
- Delete the bindings from dc-1.
- List all the available pods under organization:
curl -v -u <AdminEmailID>:<AdminPassword> -X GET "http://MS_IP:8080/v1/o/ORG/pods"
- To check whether all the bindings have been removed, get the
UUIDs of the servers associated with the pods:
curl -v -u <AdminEmailID>:<AdminPassword> \ -X GET "http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1/servers"
If this command doesn't return any UUIDs, the previous steps have removed all the bindings, and you can skip the next step. Otherwise, perform the next step.
- Remove all the server bindings for the UUIDs obtained in the previous step:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MS_IP:8080/v1/servers/UUID
- Disassociate Org from the pod:
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/o/ORG/pods" -d "action=remove&region=dc-1&pod=gateway-1" -H "Content-Type: application/x-www-form-urlencoded" -X POST
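The binding removal above can be scripted over the list of UUIDs returned by the pod listing. The following dry-run sketch prints a DELETE call per UUID; the UUIDs and MS_IP are hypothetical, and you would replace the echo with the curl call shown earlier to execute.

```shell
# Dry-run sketch: print a DELETE call for each server UUID returned by
# the gateway-1 pod listing. Values are hypothetical placeholders.
MS_IP="192.0.2.10"
UUIDS="11111111-aaaa-bbbb-cccc-000000000001 11111111-aaaa-bbbb-cccc-000000000002"
for uuid in $UUIDS; do
  echo "DELETE http://${MS_IP}:8080/v1/servers/${uuid}"
done
```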
- Delete the pods:
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/regions/dc-1/pods/gateway-1" -X DELETE
- Delete the region.
curl -v -u <AdminEmailID>:<AdminPassword> "http://MS_IP:8080/v1/regions/dc-1" -X DELETE
Note: If you missed one of the steps deleting the servers, the above step will return an error message that a particular server in the pod still exists. In that case, delete those servers by following the troubleshooting steps below, customizing the types in the curl command.
At this point you have completed the decommissioning of dc-1.
Appendix
Troubleshooting
If, after performing the previous steps, there are still servers in some pods, do the following steps to deregister and delete the servers. Note: Change the types and pod as necessary.
- Get the UUIDs using the following command:
apigee-adminapi.sh servers list -r dc-1 -p POD -t --admin <AdminEmailID> --pwd '<AdminPassword>' --host localhost
- Deregister the server's type:
curl -u <AdminEmailID>:'<AdminPassword>' -X POST http://MS_IP:8080/v1/servers -d "type=TYPE&region=dc-1&pod=POD&uuid=UUID&action=remove"
- Delete the servers one by one:
curl -u <AdminEmailID>:'<AdminPassword>' -X DELETE http://MS_IP:8080/v1/servers/UUID
Validation
You can validate the decommissioning using the following commands.
Management Server
Run the following commands from the Management Servers on all the regions.
curl -v -u <AdminEmailID>:'<AdminPassword>' http://MS_IP:8080/v1/servers?pod=central&region=dc-1
curl -v -u <AdminEmailID>:'<AdminPassword>' http://MS_IP:8080/v1/servers?pod=gateway&region=dc-1
curl -v -u <AdminEmailID>:'<AdminPassword>' http://MS_IP:8080/v1/servers?pod=analytics&region=dc-1
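After decommissioning, each of the listings above should come back empty. The following hypothetical helper treats an empty JSON list (ignoring whitespace) as success; in practice you would pipe the curl output into it.

```shell
# Hypothetical helper: treat "[ ]" (an empty server list, ignoring
# whitespace) as success. Pipe the curl output into it in practice.
check_empty() {
  if [ "$(printf '%s' "$1" | tr -d '[:space:]')" = "[]" ]; then
    echo "OK: no servers remain in dc-1"
  else
    echo "WARNING: servers are still registered in dc-1"
  fi
}
check_empty '[ ]'
```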
Run the following command on all components to check port requirements for all the management ports.
curl -v http://MS_IP:8080/v1/servers/self
Check the analytics group.
curl -v -u <AdminEmailID>:'<AdminPassword>' "http://MS_IP:8080/v1/o/ORG/e/ENV/provisioning/axstatus"
curl -v -u <AdminEmailID>:'<AdminPassword>' http://MS_IP:8080/v1/analytics/groups/ax
Cassandra/ZooKeeper nodes
On all Cassandra nodes enter:
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> statusthrift
This will return a running or not running status for that particular node.
On one node enter:
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> ring
/opt/apigee/apigee-cassandra/bin/nodetool -h <host> status
The above commands will return active data center information.
On ZooKeeper nodes, first enter:
echo ruok | nc <host> 2181
This command will return imok.
Then enter:
echo stat | nc <host> 2181 | grep Mode
The value of Mode returned by the above command will be one of the following: observer, leader, or follower.
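The Mode value can be pulled out of the stat output with a simple filter. This sketch runs against a hypothetical sample of the output; in practice you would pipe `echo stat | nc <host> 2181` into the same filter.

```shell
# Parse the Mode line out of sample "stat" output. The sample text is
# hypothetical; pipe real nc output into the same grep/awk filter.
STAT_OUTPUT="Zookeeper version: 3.4.5
Latency min/avg/max: 0/1/10
Mode: leader
Node count: 120"
MODE=$(printf '%s\n' "$STAT_OUTPUT" | grep '^Mode:' | awk '{print $2}')
echo "$MODE"
```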
On one ZooKeeper node, enter:
/opt/apigee/apigee-zookeeper/contrib/zk-tree.sh >> /tmp/zk-tree.out.txt
On the Postgres master node, run:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-master
Validate that the response says the node is the master.
On the standby node:
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-standby
Validate that the response says the node is the standby.
Log in to the PostgreSQL database using the command:
psql -h localhost -d apigee -U postgres
When prompted, enter the postgres user password.
Then run the following query:
select max(client_received_start_timestamp) from analytics."$org.$env.fact" limit 1;
Logs
Check the logs on the components to make sure there are no errors.