Apigee supports upgrading Edge for Private Cloud directly from version 4.52.02 to version 4.53.00. This page describes how to perform such upgrades.
For an overview of compatible upgrade paths, see the upgrade compatibility matrix for Edge for Private Cloud releases.
Who can perform the update
The person running the update should be the same as the person who originally installed Edge, or a person running as root.
After you install the Edge RPMs, anyone can configure them.
Which components must you update
You must update all Edge components. Edge does not support a setup that contains components from multiple versions.
Update prerequisites
Ensure that the following prerequisites are met before upgrading Apigee Edge:
- Backup all nodes
Before you update, we recommend that you perform a complete backup of all nodes. Use the procedure for your current version of Edge to perform the backup. This gives you a fallback plan in case the update to the new version does not function properly. For more information on backup, see Backup and Restore.
- Ensure Edge is running
Ensure that Edge is up and running during the update process by using the following command:
/opt/apigee/apigee-service/bin/apigee-all status
- Verify Cassandra prerequisites
If you previously upgraded from an older version of Edge for Private Cloud to version 4.52.02 and are now planning to upgrade to version 4.53.00, make sure you have completed the required post-upgrade steps for Cassandra. These steps are outlined in the version 4.52.02 upgrade documentation under Post upgrade steps. If you are unsure whether these steps were completed during the previous upgrade, complete them again before proceeding with the upgrade to version 4.53.00.
- Configure IDP keys and certificates in Edge for Private Cloud 4.53.00
In Edge for Private Cloud 4.53.00, IDP keys and certificates used in the apigee-sso component are now configured via a keystore. You must export the key and certificate you previously used into a keystore. Before updating the SSO component, follow the detailed steps in the Steps for updating Apigee SSO from older versions section.
- Python requirements
Ensure that all nodes, including Cassandra nodes, have Python 3 installed before attempting the upgrade.
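A quick way to verify this prerequisite on each node is a short shell check (a minimal sketch; it assumes `python3` is expected on the PATH):

```shell
# Verify that Python 3 is available before starting the upgrade.
if command -v python3 >/dev/null 2>&1; then
  echo "python3 found: $(python3 --version 2>&1)"
else
  echo "python3 missing - install it before upgrading" >&2
  exit 1
fi
```

Run this on every node, including Cassandra nodes, before starting the upgrade.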
Automatic propagation of property settings
If you have set any properties by editing .properties files in /opt/apigee/customer/application, then these values are retained by the update.
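For example, an override file might look like the following (the property token below is illustrative; check the documented tokens for your component before using it):

```
# /opt/apigee/customer/application/router.properties
# Custom settings follow the conf_<file>_<property>=<value> token pattern.
conf_load_balancing_load.balancing.driver.proxy.read.timeout=120000
```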
Required upgrade to Cassandra 4.0.13
Apigee Edge for Private Cloud 4.53.00 includes an upgrade of Cassandra to version 4.0.13.
Upgrades and rollback
- Upgrading from Cassandra 3.11.X to Cassandra 4.0.X is a smooth process. Cassandra 4.0.X, released with Edge for Private Cloud 4.53.00, is compatible with the runtime and management components of Private Cloud 4.52.02.
- Direct in-place rollback from Cassandra 4.0.X to 3.11.X is not possible. Rolling back using replicas or backups is a complex procedure and may involve downtime and/or data loss. Troubleshooting issues and upgrading to Cassandra 4.0.X is preferable to rolling back.
- It is important to familiarize yourself with rollback procedures before attempting the upgrade. Considering the nuances of rollback during the upgrade is critical to ensure appropriate rollback paths are available.
Single data center
Upgrading Cassandra from 3.11.X to 4.0.X within a single data center is seamless, but rollback is complex and may result in downtime and data loss. For production workloads, it is strongly advised to add a new data center, with Cassandra nodes available in the new data center, before initiating the upgrade. This enables rollback of Cassandra without incurring data loss or disruption to your API traffic. The additional data center can be decommissioned once the upgrade is finished or Checkpoint 2 is reached.
If adding a new data center isn't feasible but rollback capability is still desired, backups will be necessary for restoring Cassandra 3.11.X. However, this method is likely to involve both downtime and data loss.
Multiple data centers
Operating multiple data centers with Edge for Private Cloud 4.52.02 offers more flexibility for rollbacks during the upgrade to Edge for Private Cloud 4.53.00.
- Rollbacks depend on having at least one data center running the older Cassandra version (3.11.X).
- If your entire Cassandra cluster is upgraded to 4.0.X, you must not roll back to Cassandra 3.11.X. You must continue using the newer Cassandra version with the other components of Private Cloud 4.53.00 or 4.52.02.
Recommended upgrade methodology
- Upgrade one Cassandra data center at a time: Start by upgrading Cassandra nodes individually within a single data center. Complete upgrades of all Cassandra nodes in one data center before proceeding to the next.
- Pause and validate: After upgrading one data center, pause to ensure your Private Cloud cluster, especially the upgraded data center, is functioning correctly.
- Remember: You can only roll back to the previous Cassandra version if you have at least one data center still running the older version.
- Time-sensitive: While you can pause for a short period (a few hours is recommended) to validate functionality, you cannot remain in a mixed-version state indefinitely. This is because a non-uniform Cassandra cluster (with nodes on different versions) has operational limitations.
- Thorough testing: Apigee strongly recommends comprehensive testing of performance and functionality before upgrading the next data center. Once all data centers are upgraded, rollback to the earlier version is impossible.
Rollback as a two-checkpoint process
- Checkpoint 1: The initial state, with all components on version 4.52.02. Full rollback is possible as long as at least one Cassandra data center remains on the older version.
- Checkpoint 2: After all Cassandra nodes in all data centers are updated. You can roll back to this state, but you cannot revert to Checkpoint 1.
Example
Consider a two-data-center (DC) cluster:
- Start state: Cassandra nodes in both DCs are on version 3.11.X. All other nodes are on Edge for Private Cloud version 4.52.02. Assume three Cassandra nodes per DC.
- Upgrade DC-1: Upgrade the three Cassandra nodes in DC-1 one by one.
- Pause and validate: Pause to ensure the cluster, particularly DC-1, is working correctly (check performance, functionality). You can roll back to the initial state using the Cassandra nodes in DC-2. Remember, this pause must be temporary due to the limitations of a mixed-version Cassandra cluster.
- Upgrade DC-2: Upgrade the remaining three Cassandra nodes in DC-2. This becomes your new rollback checkpoint.
- Upgrade other components: Upgrade management, runtime, and analytics nodes as usual across all data centers, one node and one data center at a time. If issues arise, you can roll back to the state of step 4.
Prerequisites for Cassandra upgrade
You should be running Cassandra 3.11.16 with Edge for Private Cloud 4.52.02. Ensure the following:
- The entire cluster is operational and fully functional with Cassandra 3.11.16.
- The compaction strategy is set to LeveledCompactionStrategy (a prerequisite for the upgrade to version 4.52.02).
- All post-upgrade steps from the initial upgrade to Cassandra 3.11.16 as part of the 4.52.02 upgrade have been completed. If not, rerun those steps. This applies only if you upgraded to Private Cloud version 4.52.02 from an older version.
Step 1: Prepare for upgrade
The steps below are in addition to standard files that you typically create, such as Apigee’s standard configuration file for enabling component upgrades.
- Backup Cassandra using Apigee.
- Take VM snapshots of Cassandra nodes (if feasible).
- Ensure that port 9042 on the Cassandra nodes is accessible from all Edge for Private Cloud components, including Management Server, Message Processor, Router, Qpid, and Postgres, if not already configured. Refer to the Port requirements for more information.
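Connectivity to port 9042 can be spot-checked from each component node with a short loop (a sketch; the hostnames are placeholders for your own Cassandra nodes, and it relies on bash's /dev/tcp redirection):

```shell
# Check TCP reachability of the Cassandra native-protocol port (9042)
# from an Edge component node. Replace the hostnames with your topology.
CASS_NODES="cass1.example.com cass2.example.com"
UNREACHABLE=""
for host in $CASS_NODES; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/9042" 2>/dev/null; then
    echo "OK:   ${host}:9042 reachable"
  else
    echo "WARN: ${host}:9042 not reachable"
    UNREACHABLE="$UNREACHABLE $host"
  fi
done
[ -z "$UNREACHABLE" ] || echo "Fix connectivity to:${UNREACHABLE}"
```

Run it from the Management Server, Message Processor, Router, Qpid, and Postgres nodes before starting the upgrade.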
Step 2: Upgrade all Cassandra nodes
All Cassandra nodes should be updated one by one in each data center, one data center at a time. Between upgrades of nodes within a data center, wait a few minutes to ensure that an updated node has fully started and joined the cluster before proceeding with upgrading another node in the same data center.
After upgrading all Cassandra nodes within a data center, wait for some time (30 minutes to a few hours) before proceeding with the nodes in the next data center. During this time, thoroughly review the data center that was updated and ensure that the functional and performance metrics of your Apigee cluster are intact. This step is crucial to ensure the stability of the data center where Cassandra has been upgraded to version 4.0.X, while the rest of the Apigee components remain on version 4.52.02.
- To upgrade a Cassandra node, run the following command:
/opt/apigee/apigee-setup/bin/update.sh -c cs -f configFile
- Once a node is updated, run the following command on the node to perform some validations before proceeding:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra validate_upgrade -f configFile
- The command above outputs something along the lines of:
Cassandra version is verified - [cqlsh 6.0.0 | Cassandra 4.0.13 | CQL spec 3.4.5 | Native protocol v5] Metadata is verified
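If you are scripting the rollout, you can grep the validator's output for the expected version string before moving to the next node (a sketch based on the sample output above; in a real run, capture the output of the validate_upgrade command instead of the hard-coded string):

```shell
# Sample validator output taken from the documentation above.
OUT='Cassandra version is verified - [cqlsh 6.0.0 | Cassandra 4.0.13 | CQL spec 3.4.5 | Native protocol v5] Metadata is verified'
if echo "$OUT" | grep -q 'Cassandra 4.0.13' && echo "$OUT" | grep -q 'Metadata is verified'; then
  echo "node validation PASSED"
else
  echo "node validation FAILED" >&2
fi
```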
Step 3: Upgrade all Management nodes
Upgrade all Management nodes in all regions one by one:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
Step 4: Upgrade all Runtime nodes
Upgrade all Routers and Message Processor nodes in all regions one by one:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
Step 5: Upgrade all remaining Edge for Private Cloud 4.53.00 components
Upgrade all remaining edge-qpid-server and edge-postgres-server nodes in all regions one by one.
Step 6: Post upgrade steps
Run the following command on each Cassandra node one by one after the upgrade is complete:
/opt/apigee/apigee-service/bin/apigee-service apigee-cassandra post_upgrade
Steps for updating Apigee SSO from older versions
In Edge for Private Cloud 4.53.00, the IDP keys and certificates used in the apigee-sso component are now configured through a keystore. You must export the key and certificate used earlier into a keystore, configure it, and then proceed with the SSO update as usual.
1. Identify the existing key and certificate used for configuring IDP:
   a. Retrieve the certificate by looking up the value of SSO_SAML_SERVICE_PROVIDER_CERTIFICATE in the SSO installation configuration file, or by querying the apigee-sso component for conf_login_service_provider_certificate. Use the following command on the SSO node to query apigee-sso for the IDP certificate path; in the output, look for the value on the last line:
apigee-service apigee-sso configure -search conf_login_service_provider_certificate
   b. Retrieve the key by looking up the value of SSO_SAML_SERVICE_PROVIDER_KEY in the SSO installation configuration file, or by querying the apigee-sso component for conf_login_service_provider_key. Use the following command on the SSO node to query apigee-sso for the IDP key path; in the output, look for the value on the last line:
apigee-service apigee-sso configure -search conf_login_service_provider_key
2. Export the key and certificate to a keystore:
   a. Export the key and certificate to a PKCS12 keystore:
sudo openssl pkcs12 -export -clcerts -in <certificate_path> -inkey <key_path> -out <keystore_path> -name <alias>
   Parameters:
   - certificate_path: Path to the certificate file retrieved in Step 1.a.
   - key_path: Path to the private key file retrieved in Step 1.b.
   - keystore_path: Path to the newly created keystore containing the certificate and private key.
   - alias: Alias used for the key and certificate pair within the keystore.
   Refer to the OpenSSL documentation for more details.
   b. (Optional) Export the key and certificate from PKCS12 to a JKS keystore:
sudo keytool -importkeystore -srckeystore <PKCS12_keystore_path> -srcstoretype PKCS12 -destkeystore <destination_keystore_path> -deststoretype JKS -alias <alias>
   Parameters:
   - PKCS12_keystore_path: Path to the PKCS12 keystore created in Step 2.a, containing the certificate and key.
   - destination_keystore_path: Path to the new JKS keystore where the certificate and key will be exported.
   - alias: Alias used for the key and certificate pair within the JKS keystore.
   Refer to the keytool documentation for more details.
3. Change the owner of the output keystore file to the "apigee" user:
sudo chown apigee:apigee <keystore_file>
4. Add the following properties to the Apigee SSO configuration file, and update them with your keystore file path, password, keystore type, and alias:
# Path to the keystore file
SSO_SAML_SERVICE_PROVIDER_KEYSTORE_PATH=${APIGEE_ROOT}/apigee-sso/source/conf/keystore.jks
# Password for accessing the keystore
SSO_SAML_SERVICE_PROVIDER_KEYSTORE_PASSWORD=Secret123
# Type of keystore, e.g., JKS, PKCS12
SSO_SAML_SERVICE_PROVIDER_KEYSTORE_TYPE=JKS
# Alias within the keystore that stores the key and certificate
SSO_SAML_SERVICE_PROVIDER_KEYSTORE_ALIAS=service-provider-cert
5. Update the Apigee SSO software on the SSO node as usual, using the following command:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f /opt/silent.conf
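The keystore-creation steps above can be rehearsed end to end with a throwaway self-signed pair before touching your real IDP material (all paths, the alias, and the password below are examples, not values from your installation):

```shell
# Generate a throwaway key and self-signed certificate (stand-ins for the
# real files located in Step 1).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=sso-rehearsal" \
  -keyout /tmp/idp.key -out /tmp/idp.crt 2>/dev/null

# Step 2.a: export them into a PKCS12 keystore under an alias.
openssl pkcs12 -export -clcerts -in /tmp/idp.crt -inkey /tmp/idp.key \
  -out /tmp/idp-keystore.p12 -name service-provider-cert -passout pass:Secret123

# Sanity check: the keystore opens with the password we set.
openssl pkcs12 -info -in /tmp/idp-keystore.p12 -passin pass:Secret123 \
  -noout 2>/dev/null && echo "keystore OK"
```

Once the rehearsal works, repeat the same pkcs12 export against the certificate and key paths found in Step 1.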
New Edge UI
This section lists considerations regarding the Edge UI. For more information, see The new Edge UI for Private Cloud.
Install the Edge UI
After you complete the initial installation, Apigee recommends that you install the Edge UI, which is an enhanced user interface for developers and administrators of Apigee Edge for Private Cloud.
Note that the Edge UI requires that you disable Basic authentication and use an IDP such as SAML or LDAP.
For more information, see Install the new Edge UI.
Update with Apigee mTLS
To update Apigee mTLS, follow the steps in the Apigee mTLS documentation.
Rolling back an update
In the case of an update failure, you can try to correct the issue and then execute update.sh again. You can run the update multiple times; it continues the update from where it last left off.
If the failure requires that you roll back the update to your previous version, see Roll back 4.53.00 for detailed instructions.
Logging update information
By default, the update.sh utility writes log information to:
/opt/apigee/var/log/apigee-setup/update.log
If the person running the update.sh utility does not have access to that directory, the utility writes the log to the /tmp directory as a file named update_username.log.
If the person does not have access to /tmp, the update.sh utility fails.
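The fallback behavior described above can be checked up front with a few lines of shell (a sketch that only mirrors the documented logic; it does not change where update.sh writes):

```shell
# Determine where update.sh's log will land for the current user,
# per the documented fallback order.
LOGDIR=/opt/apigee/var/log/apigee-setup
if [ -w "$LOGDIR" ]; then
  LOGFILE="$LOGDIR/update.log"
elif [ -w /tmp ]; then
  LOGFILE="/tmp/update_$(id -un).log"
else
  echo "no writable log location - update.sh would fail" >&2
  exit 1
fi
echo "update log will be written to: $LOGFILE"
```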
Zero-downtime update
A zero-downtime update, or rolling update, lets you update your Edge installation without bringing down Edge.
Zero-downtime update is only possible with a configuration of 5 nodes or larger.
The key to zero-downtime upgrading is to remove each Router, one at a time, from the load balancer. You then update the Router and any other components on the same machine as the Router, and then add the Router back to the load balancer.
- Update the machines in the correct order for your installation as described in Order of machine update.
- When it is time to update the Routers, select any one Router and make it unreachable, as described in Enabling/Disabling server (Message Processor/Router) reachability.
- Update the selected Router and all other Edge components on the same machine as the Router. All Edge configurations show a Router and Message Processor on the same node.
- Make the Router reachable again.
- Repeat steps 2 through 4 for the remaining Routers.
- Continue the update for any remaining machines in your installation.
Take care of the following before and after the update:
- On combined Router and Message Processor node:
- Before update – perform the following:
- Make the Router unreachable.
- Make the Message Processor unreachable.
- After update – perform the following:
- Make the Message Processor reachable.
- Make the Router reachable.
- Before update – perform the following:
- On single Router nodes:
- Before update, make the Router unreachable.
- After update, make the Router reachable.
- On single Message Processor nodes:
- Before update, make the Message Processor unreachable.
- After update, make the Message Processor reachable.
Use a silent configuration file
You must pass a silent configuration file to the update command. The silent configuration file should be the same one that you used to install Edge for Private Cloud 4.52.02.
Update to 4.53.00 on a node with an external internet connection
Use the following procedure to update the Edge components on a node:
- If present, disable any cron jobs configured to perform a repair operation on Cassandra until after the update completes.
- Log in to your node as root to install the Edge RPMs.
- Disable SELinux as described in Install the Edge apigee-setup utility.
- If you are installing on AWS, execute the following yum-config-manager commands:
yum update rh-amazon-rhui-client.noarch
sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
If you later decide to roll back the update, use the procedure described in Roll back 4.53.00.
Update to 4.53.00 from a local repo
If your Edge nodes are behind a firewall, or in some other way are prohibited from accessing the Apigee repository over the Internet, then you can perform the update from a local repository, or mirror, of the Apigee repo.
After you create a local Edge repository, you have two options for updating Edge from the local repo:
- Create a .tar file of the repo, copy the .tar file to a node, and then update Edge from the .tar file.
- Install a webserver on the node with the local repo so that other nodes can access it. Apigee provides the Nginx webserver for you to use, or you can use your own webserver.
To update from a local 4.53.00 repo:
- Create a local 4.53.00 repo as described in "Create a local Apigee repository" at Install the Edge apigee-setup utility.
- To install apigee-service from a .tar file:
- On the node with the local repo, use the following command to package the local repo into a single .tar file named /opt/apigee/data/apigee-mirror/apigee-4.53.00.tar.gz:
/opt/apigee/apigee-service/bin/apigee-service apigee-mirror package
- Copy the .tar file to the node where you want to update Edge. For example, copy it to the /tmp directory on the new node.
- On the new node, untar the file to the /tmp directory:
tar -xzf apigee-4.53.00.tar.gz
This command creates a new directory, named repos, in the directory containing the .tar file. For example, /tmp/repos.
- Install the Edge apigee-service utility and dependencies from /tmp/repos:
sudo bash /tmp/repos/bootstrap_4.53.00.sh apigeeprotocol="file://" apigeerepobasepath=/tmp/repos
Notice that you include the path to the repos directory in this command.
- To install apigee-service using the Nginx webserver:
- Configure the Nginx web server as described in "Install from the repo using the Nginx webserver" at Install the Edge apigee-setup utility.
- On the remote node, download the Edge bootstrap_4.53.00.sh file to /tmp/bootstrap_4.53.00.sh:
/usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.53.00.sh -o /tmp/bootstrap_4.53.00.sh
Where uName:pWord are the username and password you set previously for the repo, and remoteRepo is the IP address or DNS name of the repo node.
- On the remote node, install the Edge apigee-setup utility and dependencies:
sudo bash /tmp/bootstrap_4.53.00.sh apigeerepohost=remoteRepo:3939 apigeeuser=uName apigeepassword=pWord apigeeprotocol=http://
Where uName and pWord are the repo username and password.
- Use apigee-service to update the apigee-setup utility, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service apigee-setup update
- Update the apigee-validate utility on the Management Server, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service apigee-validate update
- Update the apigee-provision utility on the Management Server, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service apigee-provision update
- Run the update utility on your nodes in the order described in Order of machine update:
/opt/apigee/apigee-setup/bin/update.sh -c component -f configFile
Where:
- component is the Edge component to update. You typically update the following components:
  - cs: Cassandra
  - edge: All Edge components except the Edge UI: Management Server, Message Processor, Router, QPID Server, Postgres Server
  - ldap: OpenLDAP
  - ps: postgresql
  - qpid: qpidd
  - sso: Apigee SSO (if you installed SSO)
  - ue: New Edge UI
  - ui: Classic Edge UI
  - zk: Zookeeper
- configFile is the same configuration file that you used to define your Edge components during the 4.52.02 installation.
You can run update.sh against all components by setting component to "all", but only if you have an Edge all-in-one (AIO) installation profile. For example:
/opt/apigee/apigee-setup/bin/update.sh -c all -f /tmp/sa_silent_config
- Restart the UI components on all nodes running it, if you haven't done so already:
/opt/apigee/apigee-service/bin/apigee-service [edge-management-ui|edge-ui] restart
- Test the update by running the
apigee-validate
utility on the Management Server, as described in Test the install.
If you later decide to roll back the update, use the procedure described in Roll back 4.53.00.
Order of machine update
The order that you update the machines in an Edge installation is important:
- You must update all Cassandra and ZooKeeper nodes before you update any other nodes.
- For any machine with multiple Edge components (Management Server, Message Processor, Router, QPID Server, but not Postgres Server), use the -c edge option to update them all at the same time.
- If a step specifies that it should be performed on multiple machines, perform it in the specified machine order.
- There is no separate step to update Monetization. It is updated when you specify the -c edge option.
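For a larger install, one way to keep the ordering straight is a dry-run wrapper that just prints the commands in the required sequence without running anything (the host names below are hypothetical placeholders for your own topology):

```shell
# Dry-run sketch of the machine update order. Echoes, never executes.
CONFIG=/opt/silent.conf
run() { echo "WOULD RUN on $1: /opt/apigee/apigee-setup/bin/update.sh -c $2 -f $CONFIG"; }

# Cassandra and ZooKeeper nodes always go first.
for h in cs1 cs2 cs3; do run "$h" cs,zk; done
# Then Postgres and LDAP.
for h in pg1 pg2; do run "$h" ps; done
run ldap1 ldap
# Then the combined edge profile: Qpid nodes, Postgres nodes,
# Management Server, Message Processors, Routers.
for h in qp1 qp2 pg1 pg2 ms1 mp1 rt1; do run "$h" edge; done
# Finally qpidd and the UI.
for h in qp1 qp2; do run "$h" qpid; done
run ms1 ui
```

Reviewing the echoed plan before running the real update.sh commands helps catch ordering mistakes cheaply.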
1-node standalone upgrade
To upgrade a 1-node standalone configuration to 4.53.00:
- Update all components:
/opt/apigee/apigee-setup/bin/update.sh -c all -f configFile
- (If you installed apigee-adminapi) Update the apigee-adminapi utility:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
2-node standalone upgrade
Update the following components for a 2-node standalone installation:
See Installation topologies for the list of Edge topologies and node numbers.
- Update Cassandra and ZooKeeper on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- Update Postgres on machine 2:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update LDAP on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Update Edge components on machine 2 and 1:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Update Qpid on Machine 2:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update the UI on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c ui -f configFile
- (If you installed apigee-adminapi) Update the apigee-adminapi utility on machine 1:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
- (If you installed Apigee SSO) Update Apigee SSO on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
Where sso_config_file is the configuration file you created when you installed SSO.
- Restart the Edge UI component on machine 1:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
5-node upgrade
Update the following components for a 5-node installation:
See Installation topologies for the list of Edge topologies and node numbers.
- Update Cassandra and ZooKeeper on machine 1, 2, and 3:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- Update Postgres on machine 4:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update Postgres on machine 5:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update LDAP on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Update Edge components on machine 4, 5, 1, 2, 3:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Update Qpid on machine 4:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update Qpid on machine 5:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update the Edge UI:
- Classic UI: If you are using the classic UI, then update the ui component on machine 1, as the following example shows:
/opt/apigee/apigee-setup/bin/update.sh -c ui -f configFile
- New Edge UI: If you installed the new Edge UI, then update the ue component on the appropriate machine (may not be machine 1):
/opt/apigee/apigee-setup/bin/update.sh -c ue -f /opt/silent.conf
- (If you installed apigee-adminapi) Update the apigee-adminapi utility on machine 1:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
- (If you installed Apigee SSO) Update Apigee SSO on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
Where sso_config_file is the configuration file you created when you installed SSO.
- Restart the UI component:
- Classic UI: If you are using the classic UI, then restart the edge-ui component on machine 1, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
- New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on the appropriate machine (may not be machine 1):
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart
9-node clustered upgrade
Update the following components for a 9-node clustered installation:
See Installation topologies for the list of Edge topologies and node numbers.
- Update Cassandra and ZooKeeper on machine 1, 2, and 3:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- Update Postgres on machine 8:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update Postgres on machine 9:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update LDAP on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Update Edge components on machine 6, 7, 8, 9, 1, 4, and 5 in that order:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Update Qpid on machines 6 and 7:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update either the new UI (ue) or classic UI (ui) on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
- (If you installed
apigee-adminapi
) Update theapigee-adminapi
utility on machine 1:/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
- (If you installed Apigee SSO) Update Apigee SSO on machine 1:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
Where sso_config_file is the configuration file you created when you installed SSO.
- Restart the UI component:
- Classic UI: If you are using the classic UI, then restart the edge-ui component on machine 1, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
- New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on the appropriate machine (may not be machine 1):
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart
13-node clustered upgrade
Update the following components for a 13-node clustered installation:
See Installation topologies for the list of Edge topologies and node numbers.
- Update Cassandra and ZooKeeper on machines 1, 2, and 3:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- Update Postgres on machine 8:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update Postgres on machine 9:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update LDAP on machine 4 and 5:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Update Edge components on machines 12, 13, 8, 9, 6, 7, 10, and 11 in that order:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Update Qpid on machines 12 and 13:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update either the new UI (ue) or classic UI (ui) on machines 6 and 7:
/opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
- (If you installed apigee-adminapi) Update the apigee-adminapi utility on machines 6 and 7:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
- (If you installed Apigee SSO) Update Apigee SSO on machines 6 and 7:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
Where sso_config_file is the configuration file you created when you installed SSO.
- Restart the UI component:
- Classic UI: If you are using the classic UI, then restart the edge-ui component on machines 6 and 7, as the following example shows:
/opt/apigee/apigee-service/bin/apigee-service edge-ui restart
- New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on machines 6 and 7:
/opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart
12-node clustered upgrade
Update the following components for a 12-node clustered installation:
See Installation topologies for the list of Edge topologies and node numbers.
- Update Cassandra and ZooKeeper:
- On machines 1, 2, and 3 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- On machines 7, 8, and 9 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
- Update Postgres:
- Machine 6 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Machine 12 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
- Update LDAP:
- Machine 1 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Machine 7 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
- Update Edge components:
- Machines 4, 5, 6, 1, 2, 3 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Machines 10, 11, 12, 7, 8, 9 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
- Update qpidd:
- Machines 4 and 5 in Data Center 1:
  - Update qpidd on machine 4:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  - Update qpidd on machine 5:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Machines 10 and 11 in Data Center 2:
  - Update qpidd on machine 10:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  - Update qpidd on machine 11:
/opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
- Update either the new UI (ue) or classic UI (ui):
  - Machine 1 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
  - Machine 7 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
- (If you installed apigee-adminapi) Update the apigee-adminapi utility:
  - Machine 1 in Data Center 1:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  - Machine 7 in Data Center 2:
/opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
- (If you installed Apigee SSO) Update Apigee SSO:
  - Machine 1 in Data Center 1:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
  - Machine 7 in Data Center 2:
/opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
Where sso_config_file is the configuration file you created when you installed SSO.
- Restart the new Edge UI (edge-management-ui) or classic Edge UI (edge-ui) component on machines 1 and 7:
/opt/apigee/apigee-service/bin/apigee-service [edge-ui|edge-management-ui] restart
For a non-standard configuration
If you have a non-standard configuration, then update Edge components in the following order:
- ZooKeeper
- Cassandra
- ps
- LDAP
- Edge, meaning the "-c edge" profile on all nodes in the order: nodes with Qpid server, Edge Postgres Server, Management Server, Message Processor, and Router.
- qpidd
- Edge UI (either classic or new)
- apigee-adminapi
- Apigee SSO
After you finish updating, be sure to restart the Edge UI component on all machines running it.