Update Apigee Edge 4.52.02 or 4.53.00 to 4.53.01

Apigee supports upgrading Edge for Private Cloud directly from version 4.52.02 or 4.53.00 to version 4.53.01. This page describes how to perform such upgrades.

For an overview of compatible upgrade paths, see the upgrade compatibility matrix for Edge for Private Cloud releases.

Who can perform the update

The update should be run by the same user who originally installed Edge, or by a user running as root.

After you install the Edge RPMs, anyone can configure them.

Which components must you update

You must update all Edge components. Edge does not support a setup that contains components from multiple versions.

Update prerequisites

Review changes in Edge for Private Cloud 4.53.01

A number of security issues were addressed in this version. While these security enhancements are essential, some of the changes they introduce are not backward compatible. As a result, the upgrade requires extra steps to avoid disruption during or after the update. Review this topic thoroughly before upgrading to version 4.53.01 from an older Private Cloud version.

Ensure the following prerequisites are met before upgrading Apigee Edge:

  • Back up all nodes
    Before you update, we recommend performing a complete backup of all nodes. Use the procedure for your current version of Edge to perform the backup.

    This gives you a fallback in case the update to the new version doesn't function properly. For more information on backups, see Backup and Restore.

  • Ensure Edge is running
    Ensure that Edge is up and running during the update process by using the command:
    /opt/apigee/apigee-service/bin/apigee-all status
  • Verify Cassandra prerequisites
    If you previously upgraded from an older version of Edge for Private Cloud to version 4.52.02 or 4.53.00 and are now planning to upgrade to version 4.53.01, make sure you have completed the required post-upgrade steps for Cassandra. These steps are outlined in the version 4.52.02 upgrade documentation under Post upgrade steps. If you are unsure whether these steps were completed during the previous upgrade, complete them again before proceeding with the upgrade to version 4.53.01.
  • Configure IDP keys and certificates in Edge for Private Cloud 4.53.01

    In Edge for Private Cloud 4.53.01, the IDP keys and certificates used by the apigee-sso component are configured via a keystore. You will need to export the key and certificate you previously used into a keystore. Before updating the SSO component, follow the detailed steps in the Steps for updating Apigee SSO from older versions section.

  • Python requirements
    Ensure that all nodes, including Cassandra nodes, have Python 3 installed before attempting the upgrade.
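The Python check can be scripted as a quick pre-flight test on each node. The sketch below is illustrative, not part of the official procedure; on Edge nodes you would pair it with the apigee-all status command shown above.

```shell
# Pre-flight sketch: verify Python 3 is present on this node before upgrading.
# (On Edge nodes, also run /opt/apigee/apigee-service/bin/apigee-all status.)
check_python3() {
  if command -v python3 >/dev/null 2>&1; then
    echo "python3 present: $(python3 --version 2>&1)"
  else
    echo "python3 MISSING -- install it before upgrading" >&2
    return 1
  fi
}
check_python3 || echo "aborting upgrade preparation" >&2
```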

What special steps to consider for upgrade

To upgrade to Edge for Private Cloud 4.53.01, you may need to run special steps for certain software; which steps are necessary depends on your current version. Refer to the table below for the software requiring supplementary steps, and follow the detailed instructions for each.

Current version    Software that requires special steps for upgrade to 4.53.01
4.52.02            LDAP, Cassandra, Zookeeper, Postgres
4.53.00            LDAP, Zookeeper, Postgres

After performing the necessary steps based on your version, return to the main upgrade procedure to continue.

Automatic propagation of property settings

If you have set any properties by editing .properties files in /opt/apigee/customer/application, then these values are retained by the update.
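For illustration, an override file is just a list of property tokens, one per line. The sketch below writes to /tmp; the token name is hypothetical, and on a real node the file would live under /opt/apigee/customer/application, owned by apigee:apigee, with a component restart for the change to take effect.

```shell
# Sketch: property overrides live in <component>.properties files.
# The token below is hypothetical; on a real node the path would be e.g.
# /opt/apigee/customer/application/message-processor.properties.
cat <<'EOF' > /tmp/message-processor.properties
# one token per line; these values are retained by update.sh
conf_system_example.property=example-value
EOF
cat /tmp/message-processor.properties
```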

Required upgrade to OpenLDAP 2.6

This section provides the step-by-step procedure for upgrading the LDAP service underlying Apigee Edge for Private Cloud from the legacy OpenLDAP 2.4 to OpenLDAP 2.6. This upgrade is mandatory for the update to Apigee Edge for Private Cloud version 4.53.01 and higher, and applies to all Apigee LDAP deployment topologies: single-server, active-passive, and active-active (multi-master).

Prerequisites and considerations

  • Be aware that during the LDAP upgrade, the management APIs, and consequently the Apigee UI, will be completely unavailable in all regions. All administrative tasks, such as managing users, roles, apps, and organizations, will fail and should be paused. There is no impact on the processing of your API proxy traffic. Make sure to shut down all edge-management-server and edge-ui components before proceeding with the LDAP upgrade.

  • Backup is critical: A complete and validated backup of your existing LDAP data is non-negotiable; proceeding without a valid backup can cause irreversible data loss. Initiate the backup while the LDAP service is still running to capture a consistent, point-in-time snapshot of the LDAP data. The backup is also required to perform the upgrade itself: because the upgrade steps wipe the LDAP data, without a backup you can neither execute the upgrade nor roll back.

Preparation and installation (All LDAP servers)

The steps in this section (Step 2 through Step 5) are identical for all LDAP deployment topologies. These actions must be performed on every server where the apigee-openldap component is installed, regardless of its role.

  1. Make sure to shut down all edge-management-server and edge-ui components before proceeding with the LDAP upgrade.
    apigee-service edge-management-server stop
    apigee-service edge-ui stop
  2. Backup existing LDAP data

    Before making any changes, perform a full backup of the current LDAP data from all LDAP servers. This creates a safe restore point.

    • Execute the backup command. This action creates a timestamped backup archive within the /opt/apigee/backup/openldap directory.
      apigee-service apigee-openldap backup
    • Get the total record count: Capture the number of records in your directory for post-upgrade validation; the record count should match across all LDAP servers. This is a sanity check.
      # Note: Replace 'YOUR_PASSWORD' with your current LDAP manager password.
      ldapsearch -o ldif-wrap=no -b "dc=apigee,dc=com" \
      -D "cn=manager,dc=apigee,dc=com" -H ldap://:10389 -LLL -x -w 'YOUR_PASSWORD' | wc -l
  3. Stop LDAP and clean data directories

    This step must be performed on all LDAP servers. It is mandatory due to the major version change and underlying structural differences. A clean directory ensures there are no conflicts. When all LDAP servers are stopped, disruption to Management APIs and UI will begin.

    • Stop the LDAP service.
      apigee-service apigee-openldap stop
    • Permanently remove the old LDAP data and configuration directories.
      rm -rf /opt/apigee/data/apigee-openldap/*
  4. Install and configure the new LDAP version

    On all LDAP servers, use the standard Apigee scripts to download and install the new component version.

    • Install the new LDAP component: The update script reads your configuration file and installs the new apigee-openldap package.
      /opt/apigee/apigee-setup/bin/update.sh -c ldap -f /opt/silent.conf
    • Validate the new LDAP version: After the installation completes, reload the profile and verify that the new LDAP version is installed correctly.
      source ~/.bash_profile
      ldapsearch -VV
      Expected output:
      ldapsearch: @(#) $OpenLDAP: ldapsearch 2.6.7
  5. Stop LDAP on all servers prior to data restoration

    This is a critical synchronization step. Before restoring your backup, you must ensure the newly installed LDAP service is stopped on all servers. On every LDAP server, execute the following commands:

    apigee-service apigee-openldap stop
    rm -rf /opt/apigee/data/apigee-openldap/ldap/*
  6. Restore LDAP data

    The strategy is to restore the backup on the first active server. This server will then act as the source of truth, replicating the data to its peers in a multi-server setup.

    1. Identify the first active server for restoration

      • For a single-server setup: This is your only LDAP server. Proceed directly to the next step.
      • For active-passive and active-active setups: Run the following diagnostic command on each LDAP server:
        grep -i '^olcSyncrepl:' /opt/apigee/data/apigee-openldap/slapd.d/cn=config/olcDatabase*.ldif
        Note:
        • If this command returns output, the server is a passive server.
        • If it returns no output, the server is the active server.
    2. Restore the backup data

      Before proceeding, double-check that Step 4 has been completed successfully on all LDAP servers.

      • On the first active server you identified above, navigate to the backup directory.
        cd /opt/apigee/backup/openldap
      • Execute the restore command. It is strongly recommended to specify the exact backup timestamp from Step 2 to prevent restoring an unintended or older backup.
        # To restore a specific backup (recommended):
        apigee-service apigee-openldap restore 2025.08.11,23.34.00
        
        # To restore the latest available backup by default:
        apigee-service apigee-openldap restore
      • After the restore process completes successfully, start the LDAP service on the first active server.
        apigee-service apigee-openldap start
  7. Start remaining LDAP servers

    If you have a multi-server setup, start the service on each of the remaining LDAP servers:

    apigee-service apigee-openldap start

  8. Final validation

    The final step is to verify that the upgrade was successful and that data is consistent across the entire LDAP cluster.

    • Run the validation command on all LDAP servers. The record count should be identical across all servers and must match the count you captured in Step 2.
      # Note: Replace 'YOUR_PASSWORD' with your LDAP manager password.
      ldapsearch -o ldif-wrap=no -b "dc=apigee,dc=com" \
      -D "cn=manager,dc=apigee,dc=com" -H ldap://:10389 -LLL -x -w 'YOUR_PASSWORD' | wc -l
    • Once you have confirmed that the data is correct and consistent, your LDAP upgrade is complete. You may now proceed with starting the edge-management-server and edge-ui and any other dependent components as per your organization's standard upgrade procedure.
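If you have several LDAP servers, the count comparison can be mechanized with a small helper. This is a sketch; the counts shown are hypothetical placeholders for the values returned by the ldapsearch command above on each server.

```shell
# Sketch: verify the record counts gathered from each LDAP server all match.
counts_match() {
  first="$1"; shift
  for c in "$@"; do
    [ "$c" = "$first" ] || return 1
  done
}
# Hypothetical counts from three servers -- substitute your real values.
if counts_match 18423 18423 18423; then
  echo "LDAP record counts are consistent"
else
  echo "MISMATCH: investigate replication before starting dependent components" >&2
fi
```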

Required upgrade to Cassandra 4.0.18

Apigee Edge for Private Cloud 4.53.01 includes an upgrade of Cassandra to version 4.0.18.

Upgrades and rollback

  • Upgrading from Cassandra 3.11.X to Cassandra 4.0.X is a smooth process. Cassandra 4.0.X, released with Edge for Private Cloud 4.53.00, is compatible with the runtime and management components of Private Cloud 4.52.02.
  • Direct in-place rollback from Cassandra 4.0.X to 3.11.X is not possible. Rolling back using replicas or backups is a complex procedure and may involve downtime and/or data loss. Troubleshooting issues and upgrading to Cassandra 4.0.X is preferable to rolling back.
  • It is important to familiarize yourself with rollback procedures before attempting the upgrade. Considering the nuances of rollback during the upgrade is critical to ensure appropriate rollback paths are available.

Single data center

Upgrading Cassandra from 3.11.X to 4.0.X within a single data center is seamless, but rollback is complex and may result in downtime and data loss. For production workloads, we strongly advise adding a new data center, with enough Cassandra nodes to hold a full replica of your data, before initiating the upgrade. This enables rolling back Cassandra without data loss or disruption to your API traffic. The additional data center can be decommissioned once the upgrade is finished or Checkpoint 2 is reached.

If adding a new data center isn't feasible but rollback capability is still desired, backups will be necessary for restoring Cassandra 3.11.X. However, this method is likely to involve both downtime and data loss.

Multiple data centers

Operating multiple data centers with Edge for Private Cloud 4.52.02 offers more flexibility for rollbacks during the upgrade to Edge for Private Cloud 4.53.01.

  • Rollbacks depend on having at least one data center running the older Cassandra version (3.11.X).
  • If your entire Cassandra cluster is upgraded to 4.0.X, you must not roll back to Cassandra 3.11.X. You must continue using the newer Cassandra version with the other components of Private Cloud 4.53.00 or 4.52.02.
  1. Upgrade one Cassandra data center at a time: Start by upgrading Cassandra nodes individually within a single data center. Complete upgrades of all Cassandra nodes in one data center before proceeding to the next.
  2. Pause and validate: After upgrading one data center, pause to ensure your Private Cloud cluster, especially the upgraded data center, is functioning correctly.
  3. Remember: You can only roll back to the previous Cassandra version if you have at least one data center still running the older version.
  4. Time-sensitive: While you can pause for a short period (a few hours is recommended) to validate functionality, you cannot remain in a mixed-version state indefinitely. This is because a non-uniform Cassandra cluster (with nodes on different versions) has operational limitations.
  5. Thorough testing: Apigee strongly recommends comprehensive testing of performance and functionality before upgrading the next data center. Once all data centers are upgraded, rollback to the earlier version is impossible.

Rollback as a two-checkpoint process

  1. Checkpoint 1: The initial state, with all components on version 4.52.02. Full rollback is possible as long as at least one Cassandra data center remains on the older version.
  2. Checkpoint 2: After all Cassandra nodes in all data centers are updated. You can roll back to this state, but you cannot revert to Checkpoint 1.

Example

Consider a two-data-center (DC) cluster:

  1. Start state: Cassandra nodes in both DCs are on version 3.11.X. All other nodes are on Edge for Private Cloud version 4.52.02. Assume three Cassandra nodes per DC.
  2. Upgrade DC-1: Upgrade the three Cassandra nodes in DC-1 one by one.
  3. Pause and validate: Pause to ensure the cluster, particularly DC-1, is working correctly (check performance, functionality). You can roll back to the initial state using the Cassandra nodes in DC-2. Remember, this pause must be temporary due to the limitations of a mixed-version Cassandra cluster.
  4. Upgrade DC-2: Upgrade the remaining three Cassandra nodes in DC-2. This becomes your new rollback checkpoint.
  5. Upgrade other components: Upgrade management, runtime, and analytics nodes as usual across all data centers, one node and one data center at a time. If issues arise, you can roll back to the state of step 4.

Prerequisites for Cassandra upgrade

You should be running Cassandra 3.11.16 with Edge for Private Cloud 4.52.02. Ensure the following:
  • The entire cluster is operational and fully functional with Cassandra 3.11.16.
  • The compaction strategy is set to LeveledCompactionStrategy (a prerequisite for the upgrade to version 4.52.02).
  • All post-upgrade steps from the initial upgrade to Cassandra 3.11.16 as part of the 4.52.02 upgrade have been completed. If not, rerun these steps. This applies only if you upgraded to Private Cloud version 4.52.02 from an older version.
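One way to spot-check the compaction strategy is to query system_schema.tables with cqlsh. The sketch below parses a captured row; the sample row and the kms keyspace are illustrative, and the cqlsh path assumes a default Apigee install.

```shell
# On a Cassandra node you might run (illustrative):
#   /opt/apigee/apigee-cassandra/bin/cqlsh <node-ip> -e \
#     "SELECT table_name, compaction FROM system_schema.tables WHERE keyspace_name='kms';"
# Then confirm every row references LeveledCompactionStrategy:
row='api_products | {"class": "org.apache.cassandra.db.compaction.LeveledCompactionStrategy"}'
case "$row" in
  *LeveledCompactionStrategy*) echo "compaction OK for this table" ;;
  *) echo "switch this table to LeveledCompactionStrategy before upgrading" >&2 ;;
esac
```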

Step 1: Prepare for upgrade

The steps below are in addition to standard files that you typically create, such as Apigee’s standard configuration file for enabling component upgrades.

  1. Back up Cassandra using the standard Apigee backup procedure.
  2. Take VM snapshots of Cassandra nodes (if feasible).
  3. Ensure that port 9042 is accessible from all Edge for Private Cloud components, including Management Server, Message Processor, Router, Qpid, and Postgres, to Cassandra nodes if not already configured. Refer to the Port requirements for more information.
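The port-9042 requirement can be probed from each component node along these lines. This is a sketch: the Cassandra hostnames are placeholders, and bash's /dev/tcp is used so no extra tooling is needed.

```shell
# Probe Cassandra's native protocol port (9042) from this node.
check_port() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }
for cs in cass-1 cass-2 cass-3; do   # placeholder hostnames
  if check_port "$cs" 9042; then
    echo "$cs:9042 reachable"
  else
    echo "$cs:9042 unreachable -- check firewall rules" >&2
  fi
done
```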

Step 2: Upgrade all Cassandra nodes

All Cassandra nodes should be updated one by one in each data center, one data center at a time. Between upgrades of nodes within a data center, wait a few minutes to ensure that an updated node has fully started and joined the cluster before proceeding with upgrading another node in the same data center.

After upgrading all Cassandra nodes within a data center, wait for some time (30 minutes to a few hours) before proceeding with the nodes in the next data center. During this time, thoroughly review the data center that was updated and ensure that the functional and performance metrics of your Apigee cluster are intact. This step is crucial to ensure the stability of the data center where Cassandra has been upgraded to version 4.0.X, while the rest of the Apigee components remain on version 4.52.02.
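The review of an upgraded data center can include Cassandra-level checks with nodetool. The sketch below parses captured `nodetool status` output; the sample rows are illustrative, and on an Apigee node nodetool lives under /opt/apigee/apigee-cassandra/bin/.

```shell
# After upgrading a data center, every node should report UN (Up/Normal) in
# 'nodetool status', and 'nodetool describecluster' should show a single
# schema version. Sample captured output (illustrative):
status_output='UN  10.0.0.1  1.2 GiB  256
UN  10.0.0.2  1.1 GiB  256
UN  10.0.0.3  1.3 GiB  256'
# Count lines that are not Up/Normal; 0 means the data center looks healthy.
down=$(printf '%s\n' "$status_output" | grep -vc '^UN' || true)
echo "nodes not Up/Normal: $down"
```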

  1. To upgrade a Cassandra node, run the following command:
    /opt/apigee/apigee-setup/bin/update.sh -c cs -f configFile
  2. Once a node is updated, run the following command on the node to validate the upgrade before proceeding:
    /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra validate_upgrade -f configFile
  3. The command outputs something like the following:
    Cassandra version is verified - [cqlsh 6.0.0 | Cassandra 4.0.18 | CQL spec 3.4.5 | Native protocol v5] 
    Metadata is verified
  4. Run the following post_upgrade command on the Cassandra node:
    /opt/apigee/apigee-service/bin/apigee-service apigee-cassandra post_upgrade
  5. Run the following nodetool commands to rebuild indices on the Cassandra node:
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms api_products api_products_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms app_credentials app_credentials_api_products_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms app_credentials app_credentials_organization_app_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms app_credentials app_credentials_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms app_end_user app_end_user_app_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_app_family_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_app_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_app_type_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_parent_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_parent_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms apps apps_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms maps maps_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_access_tokens oauth_10_access_tokens_app_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_access_tokens oauth_10_access_tokens_consumer_key_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_access_tokens oauth_10_access_tokens_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_access_tokens oauth_10_access_tokens_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_request_tokens oauth_10_request_tokens_consumer_key_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_request_tokens oauth_10_request_tokens_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_verifiers oauth_10_verifiers_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_10_verifiers oauth_10_verifiers_request_token_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_20_access_tokens oauth_20_access_tokens_app_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_20_access_tokens oauth_20_access_tokens_client_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_20_access_tokens oauth_20_access_tokens_refresh_token_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_20_authorization_codes oauth_20_authorization_codes_client_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index kms oauth_20_authorization_codes oauth_20_authorization_codes_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect companies companies_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect companies companies_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect companies companies_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect company_developers company_developers_company_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect company_developers company_developers_developer_email_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect company_developers company_developers_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect developers developers_email_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect developers developers_organization_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index devconnect developers developers_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index cache cache_entries cache_entries_cache_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index audit audits audits_operation_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index audit audits audits_requesturi_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index audit audits audits_responsecode_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index audit audits audits_timestamp_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index audit audits audits_user_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis a_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis a_org_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_active_rev
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_def_index_template
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_def_method_template
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_latest_rev
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_a_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_base_url
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_is_active
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_is_latest
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_org_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_rel_ver
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 apis_revision ar_rev_num
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_a_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_api_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_ar_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_base_url
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_org_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_r_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_r_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_res_path
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 method m_rev_num
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_a_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_api_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_ar_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_base_url
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_org_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_res_path
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 resource r_rev_num
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 schemas s_api_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 schemas s_ar_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 security sa_api_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 security sa_ar_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template t_a_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template t_a_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template t_entity
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template t_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template t_org_name
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index apimodel_v2 template_auth au_api_uuid
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index dek keys usecase_index
    If you’re using monetization, also run the following rebuild_index commands for the monetization keyspaces:
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint limits limits_created_date_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint limits limits_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint limits limits_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint limits limits_updated_date_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_created_date_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_currency_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_dev_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_limit_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_prod_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_reason_code_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint suspended_developer_products suspended_developer_products_sub_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint invitations invitations_company_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint invitations invitations_created_at_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint invitations invitations_developer_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint invitations invitations_lastmodified_at_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index mint invitations invitations_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers triggers_env_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers triggers_job_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers triggers_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus job_details job_details_job_class_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus job_details job_details_job_group_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus job_details job_details_job_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus org_triggers org_triggers_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers_suite triggers_suite_group_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers_suite triggers_suite_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index taurus triggers_suite triggers_suite_suite_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_service_item notification_service_item_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_service_item notification_service_item_status_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_service_black_list_item notification_service_black_list_item_org_id_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_service_black_list_item notification_service_black_list_item_to_email_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_email_template_item notification_email_template_item_name_idx
    /opt/apigee/apigee-cassandra/bin/nodetool rebuild_index notification notification_email_template_item notification_email_template_item_org_id_idx
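Since the rebuild commands above differ only in their keyspace/table/index arguments, you could drive nodetool from a list instead of pasting each line. This is a sketch: DRY_RUN=1 just prints the commands, and the two triples shown are the first ones from the list above.

```shell
NODETOOL=/opt/apigee/apigee-cassandra/bin/nodetool
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 on a real node to execute
while read -r keyspace table index; do
  [ -z "$keyspace" ] && continue
  if [ "$DRY_RUN" = 1 ]; then
    echo "$NODETOOL rebuild_index $keyspace $table $index"
  else
    "$NODETOOL" rebuild_index "$keyspace" "$table" "$index"
  fi
done <<'EOF'
kms api_products api_products_organization_name_idx
kms app_credentials app_credentials_api_products_idx
EOF
```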

Step 3: Upgrade all Management nodes

Upgrade all Management nodes in all regions one by one:

/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile

Step 4: Upgrade all Runtime nodes

Upgrade all Routers and Message Processor nodes in all regions one by one:

/opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile

Step 5: Upgrade all remaining Edge for Private Cloud components to 4.53.01

Upgrade all remaining edge-qpid-server and edge-postgres-server nodes in all regions one by one.

Required upgrade to Zookeeper 3.8.4

This release of Edge for Private Cloud includes an upgrade to Zookeeper 3.8.4. As part of that upgrade, all Zookeeper data will be migrated to Zookeeper 3.8.4.

Before upgrading Zookeeper, read through the Zookeeper maintenance guide. Most Edge production systems use a cluster of Zookeeper nodes spread across multiple data centers. Some of these nodes are configured as voters, which participate in Zookeeper leader election, and the rest are configured as observers. See About leaders, followers, voters, and observers for more details. The voters elect a leader, after which the remaining voters become followers.

During the update process, there could be a momentary delay or write failure in Zookeeper when the leader node is shut down. This could affect management operations that write to Zookeeper, such as deploying a proxy, and Apigee infrastructure changes, such as adding or removing a message processor. Provided you follow the procedure below, there should be no impact on Apigee's runtime APIs during the Zookeeper upgrade (unless those runtime APIs call management APIs).

At a high level, the upgrade process involves taking a backup of each node, then upgrading all observers and followers, and finally upgrading the leader node.

Take a backup

Take a backup of all Zookeeper nodes for use in case a rollback is required. Note that a rollback restores Zookeeper to the state when the backup was taken. Note: Any deployments or infrastructure changes made in Apigee since the backup was taken (whose information is stored in Zookeeper) will be lost during restoration.

  /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper backup

If you are using virtual machines and have the capability, you can also take VM snapshots or backups for restoration or rollback (if necessary).

Identify leader, followers and observers

Note: The sample commands below use the nc utility to send data to Zookeeper. You could use alternate utilities to send data to Zookeeper as well.

  1. If it is not installed on the ZooKeeper node, install nc:
      sudo yum install nc
  2. Run the following nc command on the node, where 2181 is the ZooKeeper port:
      echo stat | nc localhost 2181

    You should see output like the following:

      Zookeeper version: 3.8.4-5a02a05eddb59aee6ac762f7ea82e92a68eb9c0f, built on 2022-02-25 08:49 UTC
      Clients:
       /0:0:0:0:0:0:0:1:41246[0](queued=0,recved=1,sent=0)
      
      Latency min/avg/max: 0/0.2518/41
      Received: 647228
      Sent: 647339
      Connections: 4
      Outstanding: 0
      Zxid: 0x400018b15
      Mode: follower
      Node count: 100597

    In the Mode line of the output for the nodes, you should see observer, leader, or follower (meaning a voter that is not the leader) depending on the node configuration. Note: In a standalone installation of Edge with a single ZooKeeper node, the Mode is set to standalone.

  3. Repeat steps 1 and 2 on each ZooKeeper node.
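To classify many nodes quickly, the Mode line can be extracted from the stat output. This sketch parses a captured sample string; on a live node, pipe the real `echo stat | nc "$host" 2181` output through the same filter:

```shell
# Sketch: extract the "Mode" value from a ZooKeeper stat response,
# to classify each node as leader, follower, or observer.
zk_mode() {
  awk -F': ' '/^Mode:/ {print $2}'
}

# Parsed here from a captured sample; on a live node use:
#   echo stat | nc "$host" 2181 | zk_mode
sample="Zxid: 0x400018b15
Mode: follower
Node count: 100597"
mode=$(printf '%s\n' "$sample" | zk_mode)
echo "$mode"
```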

Upgrade Zookeeper on the observer and follower nodes

Upgrade Zookeeper on each of the observer and follower nodes as follows:

  1. Download and run bootstrap of Edge for Private Cloud 4.53.01, as described in Update to 4.53.01 on a node with an external internet connection. The exact process varies depending on whether the node has an external internet connection or you're performing an offline installation.
  2. Upgrade the Zookeeper component:
      /opt/apigee/apigee-setup/bin/update.sh -c zk -f <silent-config-file>
    Note: If these nodes have other components installed (such as Cassandra), you can upgrade them now as well (for example, with the cs,zk profile) or upgrade them later. Apigee recommends that you upgrade only Zookeeper first and ensure your cluster is working properly before upgrading other components.
  3. Repeat the above steps on each ZooKeeper observer and follower node.

Shut down the leader

Once all observer and follower nodes have been upgraded, shut down the leader. On the node identified as the leader, run the following command:

  /opt/apigee/apigee-service/bin/apigee-service apigee-zookeeper stop

Note that during this event, before a new leader is elected, there could be momentary delays or write failures in Zookeeper. This could affect operations that write into Zookeeper, such as proxy deployments, or Apigee infrastructure changes, such as the addition or removal of Message Processors.

Verify that the new leader is elected

Using the steps in the Identify leader, followers and observers section above, verify that a new leader has been elected from the followers once the existing leader is stopped. Note that the new leader may be elected in a different data center than the previous leader.

Upgrade the leader

Follow the same steps as in Upgrade Zookeeper on the observer and follower nodes above.

Once the old leader node is upgraded as well, verify the cluster health and ensure there is a leader node.

Nginx 1.26 upgrade in Edge-Router

Upgrading to Edge for Private Cloud 4.53.01 from previous versions does not automatically upgrade Nginx software to the latest version (1.26.x). This is to prevent any accidental runtime side-effects as a result of the changes documented in Nginx 1.26 changes in Apigee Edge 4.53.01. You can manually upgrade Nginx from 1.20.x to 1.26.x after verification in lower environments. To manually upgrade:

  1. Ensure the edge-router node has the latest 4.53.01 software:

    /opt/apigee/apigee-service/bin/apigee-service edge-router version
  2. Check the Nginx version you're currently running:

    /opt/nginx/sbin/nginx -V

    If you're running an older version of Nginx, follow the steps below to upgrade Nginx to version 1.26.x on the router node.

  3. Stop the edge-router process on the router node:

    /opt/apigee/apigee-service/bin/apigee-service edge-router stop
  4. Upgrade the Nginx software on the router node:

    dnf update apigee-nginx
  5. Verify that the Nginx version has been updated

    /opt/nginx/sbin/nginx -V
  6. Start the router process on the node:

    /opt/apigee/apigee-service/bin/apigee-service edge-router start
  7. Repeat the process on each router node, one at a time.
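You can also script the version check from step 2. This sketch parses sample `nginx -V` version strings (on a real node, nginx prints its version to stderr, hence `2>&1`); the parsing assumes a 1.x version scheme and is illustrative only:

```shell
# Sketch: decide whether the router's Nginx is older than 1.26 by
# parsing the first line of "nginx -V" output.
needs_upgrade() {
  ver=${1##*/}                        # "nginx version: nginx/1.20.1" -> 1.20.1
  minor=$(echo "$ver" | cut -d. -f2)  # -> 20
  if [ "$minor" -lt 26 ]; then echo yes; else echo no; fi
}

old=$(needs_upgrade "nginx version: nginx/1.20.1")
new=$(needs_upgrade "nginx version: nginx/1.26.2")
echo "$old $new"
```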

Required Upgrade to Postgres 17

This release of Edge includes an upgrade to Postgres 17. As part of that upgrade, all Postgres data is migrated to Postgres 17.

Most Edge production systems use two Postgres nodes configured for master-standby replication. During the update process, while the Postgres nodes are down for update, analytics data is still written to the Qpid nodes. After the Postgres nodes are updated and back online, analytics data is then pushed to the Postgres nodes.

The way you perform the Postgres update depends on how you configured data storage for your Postgres nodes:

  • If you use local data storage for your Postgres nodes, you must install a new Postgres standby node for the duration of the upgrade. After the upgrade completes, you can decommission the new Postgres standby node.

    The additional Postgres standby node is required if you have to roll back the update for any reason. If you have to roll back the update, the new Postgres standby node becomes the master Postgres node after the rollback. Therefore, when you install the new Postgres standby node, it should be on a node that meets all the hardware requirements of a Postgres server, as defined in the Edge Installation requirements.

    In a 1-node and 2-node configuration of Edge, topologies used for prototyping and testing, you only have a single Postgres node. You can update these Postgres nodes directly without having to create a new Postgres node.

  • If you use network storage for your Postgres nodes, as recommended by Apigee, you do not have to install a new Postgres node. In the procedures below, you can skip the steps that specify to install and later decommission a new Postgres standby node.

    Before you begin the update process, take a network snapshot of the data store used by Postgres. Then, if any errors occur during update and you are forced to perform a roll back, you can restore the Postgres node from that snapshot.

Installing a new Postgres standby node

This procedure creates a Postgres standby server on a new node. Ensure that you install a new Postgres standby server for your existing version of Edge (4.52.02 or 4.53.00), not for version 4.53.01.

To perform the install, use the same config file that you used to install your current version of Edge.

To create a new Postgres standby node:

  1. On the current Postgres master, edit the /opt/apigee/customer/application/postgresql.properties file to set the following token. If that file does not exist, create it:
    conf_pg_hba_replication.connection=host replication apigee existing_standby_ip/32 trust\ \nhost replication apigee new_standby_ip/32 trust

    Where existing_standby_ip is the IP address of the current Postgres standby server and new_standby_ip is the IP address of the new standby node.

  2. Restart apigee-postgresql on the Postgres master:
    /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
  3. Verify that the new standby node was added by viewing the /opt/apigee/apigee-postgresql/conf/pg_hba.conf file on the master. You should see the following lines in that file:
    host replication apigee existing_standby_ip/32 trust
    host replication apigee new_standby_ip/32 trust
  4. Install the new Postgres standby server:
    1. Edit the config file that you used to install your current version of Edge to specify the following:
      # IP address of the current master:
      PG_MASTER=192.168.56.103
      # IP address of the new standby node
      PG_STANDBY=192.168.56.102
    2. Disable SELinux as described in Install the Edge apigee-setup utility.
    3. If you are currently on Edge 4.52.02:

      1. Download the Edge bootstrap_4.52.02.sh file to /tmp/bootstrap_4.52.02.sh:
        curl https://software.apigee.com/bootstrap_4.52.02.sh -o /tmp/bootstrap_4.52.02.sh
      2. Install the Edge apigee-service utility and dependencies:
        sudo bash /tmp/bootstrap_4.52.02.sh apigeeuser=uName apigeepassword=pWord

      If you are currently on Edge 4.53.00:

      1. Download the Edge bootstrap_4.53.00.sh file to /tmp/bootstrap_4.53.00.sh:
        curl https://software.apigee.com/bootstrap_4.53.00.sh -o /tmp/bootstrap_4.53.00.sh
      2. Install the Edge apigee-service utility and dependencies:
        sudo bash /tmp/bootstrap_4.53.00.sh apigeeuser=uName apigeepassword=pWord
    4. Use apigee-service to install the apigee-setup utility:
      /opt/apigee/apigee-service/bin/apigee-service apigee-setup install
    5. Install Postgres:
      /opt/apigee/apigee-setup/bin/setup.sh -p ps -f configFile
    6. On the new standby node, run the following command:
      /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql postgres-check-standby

      Verify that it is the standby.
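For illustration, the replication token from step 1 can be composed as follows; the IP addresses below are placeholders, not values from any real installation:

```shell
# Sketch: build the conf_pg_hba_replication.connection token that allows
# both the existing and the new standby to replicate from the master.
existing_standby_ip=203.0.113.10
new_standby_ip=203.0.113.11
token="conf_pg_hba_replication.connection=host replication apigee ${existing_standby_ip}/32 trust\\ \\nhost replication apigee ${new_standby_ip}/32 trust"
echo "$token"
```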

Performing an in-place upgrade of Postgres

Note: You must do the following preliminary step before performing an in-place upgrade of Postgres.

Preliminary step

Before performing an in-place upgrade of Postgres, do the following steps on both the master and standby hosts to update the max_locks_per_transaction property on apigee-postgresql:

  1. If not present, create the file /opt/apigee/customer/application/postgresql.properties.
  2. Change the ownership of this file to apigee:
    sudo chown apigee:apigee /opt/apigee/customer/application/postgresql.properties
  3. Add the following property to the file:
    conf/postgresql.conf+max_locks_per_transaction=30000
  4. Configure apigee-postgresql:
    apigee-service apigee-postgresql configure
  5. Restart apigee-postgresql:
    apigee-service apigee-postgresql restart
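The file edits in this preliminary step (and their later removal, noted in the final step of the in-place upgrade below) can be sketched as follows; a temp file stands in for the real /opt/apigee/customer/application/postgresql.properties path so the snippet is safe to run anywhere:

```shell
# Sketch: add the max_locks_per_transaction override, then remove it
# again after the upgrade completes.
PROPS=$(mktemp)
echo "conf/postgresql.conf+max_locks_per_transaction=30000" >> "$PROPS"
added=$(grep -c 'max_locks_per_transaction=30000' "$PROPS")

# After the upgrade completes, delete the same line:
sed -i '/max_locks_per_transaction=30000/d' "$PROPS"
removed=$(grep -c 'max_locks_per_transaction=30000' "$PROPS" || true)
echo "$added $removed"
```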

Perform the in-place upgrade

To perform an in-place upgrade to Postgres 17, do the following steps:

  1. Upgrade postgres on the master host
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f /opt/silent.conf
  2. Run the setup command on the master host:
    apigee-service apigee-postgresql setup -f /opt/silent.conf
  3. Run the configure command on the master host:
    apigee-service apigee-postgresql configure
  4. Restart the master host:
    apigee-service apigee-postgresql restart
  5. Configure it as master:
    apigee-service apigee-postgresql setup-replication-on-master -f /opt/silent.conf
  6. Ensure the master host has started:
    apigee-service apigee-postgresql wait_for_ready
  7. Stop the standby:
    apigee-service apigee-postgresql stop
  8. Upgrade the standby.

    Note: If this step fails, the error can be ignored: update.sh attempts to start the standby server with an incorrect configuration. Provided the Postgres installation has been upgraded to version 17, you can safely proceed.

    /opt/apigee/apigee-setup/bin/update.sh -c ps -f /opt/silent.conf
  9. Ensure the standby is stopped:
    apigee-service apigee-postgresql stop
  10. Remove the old standby configuration:
    rm -rf /opt/apigee/data/apigee-postgresql/
  11. Set up replication on the standby server:
    apigee-service apigee-postgresql setup-replication-on-standby -f /opt/silent.conf
  12. Remove the line conf/postgresql.conf+max_locks_per_transaction=30000 from the file /opt/apigee/customer/application/postgresql.properties on both the master host and standby. This line was added in the preliminary step.

After completing this procedure, the standby will start successfully.

Decommissioning a Postgres node

After the update completes, decommission the new standby node:

  1. Make sure Postgres is running:
    /opt/apigee/apigee-service/bin/apigee-all status

    If Postgres is not running, start it:

    /opt/apigee/apigee-service/bin/apigee-all start
  2. Get the UUID of the new standby node by running the following curl command on the new standby node:
    curl -u sysAdminEmail:password http://node_IP:8084/v1/servers/self

    You should see the UUID of the node at the end of the output, in the form:

    "type" : [ "postgres-server" ],
    "uUID" : "599e8ebf-5d69-4ae4-aa71-154970a8ec75"
  3. Stop the new standby node by running the following command on the new standby node:
    /opt/apigee/apigee-service/bin/apigee-all stop
  4. On the Postgres master node, edit /opt/apigee/customer/application/postgresql.properties to remove the new standby node from conf_pg_hba_replication.connection:
    conf_pg_hba_replication.connection=host replication apigee existing_standby_ip/32 trust
  5. Restart apigee-postgresql on the Postgres master:
    /opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
  6. Verify that the new standby node was removed by viewing the /opt/apigee/apigee-postgresql/conf/pg_hba.conf file on the master. You should see only the following line in that file:
    host replication apigee existing_standby_ip/32 trust
  7. Delete the UUID of the standby node from ZooKeeper by making the following Edge management API call on the Management Server node:
    curl -u sysAdminEmail:password -X DELETE http://ms_IP:8080/v1/servers/new_standby_uuid

Post-upgrade steps for Postgres

After a major Postgres upgrade, the internal statistics of Postgres are wiped out. These statistics help the Postgres query planner choose the most optimal indexes and paths to execute queries.

Postgres can gradually rebuild its statistics over time as queries are executed and when the autovacuum daemon runs. However, until the statistics are rebuilt, your queries may be slow.

To address this issue, execute ANALYZE on all tables in the database on the master Postgres node. Alternatively, you can execute ANALYZE for a few tables at a time.
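A minimal sketch of invoking ANALYZE via psql; the database name and user below are assumptions, so adjust them for your installation:

```shell
# Sketch: rebuild planner statistics after the major-version upgrade.
# PGDATABASE and PGUSER are illustrative placeholders.
PGDATABASE=apigee
PGUSER=apigee
cmd="psql -U $PGUSER -d $PGDATABASE -c 'ANALYZE VERBOSE;'"
echo "$cmd"   # run this on the master Postgres node
```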

Steps for updating Apigee SSO from older versions

In Edge for Private Cloud 4.53.01, the IDP keys and certificates used in the apigee-sso component are now configured through a keystore. You will need to export the key and certificate used earlier into a keystore, configure it, and then proceed with the SSO update as usual.

  1. Identify the existing key and certificate used for configuring IDP:
    1. Retrieve the certificate by looking up the value of SSO_SAML_SERVICE_PROVIDER_CERTIFICATE in the SSO installation configuration file or by querying the apigee-sso component for conf_login_service_provider_certificate.

      Use the following command on the SSO node to query apigee-sso for the IDP certificate path. In the output, look for the value in the last line.

      apigee-service apigee-sso configure -search conf_login_service_provider_certificate
    2. Retrieve the key by looking up the value of SSO_SAML_SERVICE_PROVIDER_KEY in the SSO installation configuration file or by querying the apigee-sso component for conf_login_service_provider_key.

      Use the following command on the SSO node to query apigee-sso for the IDP key path. In the output, look for the value on the last line.

      apigee-service apigee-sso configure -search conf_login_service_provider_key
  2. Export the key and certificate to a keystore:
    1. Export the key and certificate to a PKCS12 keystore:
      sudo openssl pkcs12 -export -clcerts -in <certificate_path> -inkey <key_path> -out <keystore_path> -name <alias>

      Parameters:

      • certificate_path: Path to the certificate file retrieved in Step 1.a.
      • key_path: Path to the private key file retrieved in Step 1.b.
      • keystore_path: Path to the newly created keystore containing the certificate and private key.
      • alias: Alias used for the key and certificate pair within the keystore.

      Refer to the OpenSSL documentation for more details.

    2. (Optional) Export the key and certificate from PKCS12 to a JKS keystore:
      sudo keytool -importkeystore -srckeystore <PKCS12_keystore_path> -srcstoretype PKCS12 -destkeystore <destination_keystore_path> -deststoretype JKS -alias <alias>

      Parameters:

      • PKCS12_keystore_path: Path to the PKCS12 keystore created in Step 2.a, containing the certificate and key.
      • destination_keystore_path: Path to the new JKS keystore where the certificate and key will be exported.
      • alias: Alias used for the key and certificate pair within the JKS keystore.
      Refer to the keytool documentation for more details.

  3. Change the owner of the output keystore file to the "apigee" user:
    sudo chown apigee:apigee <keystore_file>
  4. Add the following properties in Apigee SSO configuration file and update them with the keystore file path, password, keystore type, and alias:
    # Path to the keystore file
    SSO_SAML_SERVICE_PROVIDER_KEYSTORE_PATH=${APIGEE_ROOT}/apigee-sso/source/conf/keystore.jks
    
    # Keystore password
    SSO_SAML_SERVICE_PROVIDER_KEYSTORE_PASSWORD=Secret123  # Password for accessing the keystore
    
    # Keystore type
    SSO_SAML_SERVICE_PROVIDER_KEYSTORE_TYPE=JKS  # Type of keystore, e.g., JKS, PKCS12
    
    # Alias within keystore that stores the key and certificate
    SSO_SAML_SERVICE_PROVIDER_KEYSTORE_ALIAS=service-provider-cert 
  5. Update Apigee SSO software on the SSO node as usual using the following command:
    /opt/apigee/apigee-setup/bin/update.sh -c sso -f /opt/silent.conf
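The export flow in step 2 can be exercised end to end with a throwaway self-signed pair; every file name and the alias below are illustrative placeholders, not values from your installation:

```shell
# Sketch: generate a disposable key/cert pair, then export it to a
# PKCS12 keystore as in step 2.a.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=sso-sketch" \
  -keyout "$tmp/sp.key" -out "$tmp/sp.crt" 2>/dev/null
openssl pkcs12 -export -clcerts -in "$tmp/sp.crt" -inkey "$tmp/sp.key" \
  -out "$tmp/sp.p12" -name service-provider-cert -passout pass:Secret123
[ -s "$tmp/sp.p12" ] && echo "PKCS12 keystore created"
```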

New Edge UI

This section lists considerations regarding the Edge UI. For more information, see The new Edge UI for Private Cloud.

Install the Edge UI

After you complete the initial installation, Apigee recommends that you install the Edge UI, which is an enhanced user interface for developers and administrators of Apigee Edge for Private Cloud.

Note that the Edge UI requires that you disable Basic authentication and use an IDP such as SAML or LDAP.

For more information, see Install the new Edge UI.

Update with Apigee mTLS

To update Apigee mTLS, do the following steps:

Rolling back an update

In the case of an update failure, you can try to correct the issue, and then execute update.sh again. You can run the update multiple times and it continues the update from where it last left off.

If the failure requires that you roll back the update to your previous version, see Roll back 4.53.01 for detailed instructions.

Logging update information

By default, the update.sh utility writes log information to:

/opt/apigee/var/log/apigee-setup/update.log

If the person running the update.sh utility does not have access to that directory, it writes the log to the /tmp directory as a file named update_username.log.

If the person does not have access to /tmp, the update.sh utility fails.

Zero-downtime update

A zero-downtime update, or rolling update, lets you update your Edge installation without bringing down Edge.

Zero-downtime update is only possible with a 5-node configuration and larger.

The key to zero-downtime upgrading is to remove each Router, one at a time, from the load balancer. You then update the Router and any other components on the same machine as the Router, and then add the Router back to the load balancer.

  1. Update the machines in the correct order for your installation, as described in Order of machine update.
  2. When it is time to update the Routers, select any one Router and make it unreachable, as described in Enabling/Disabling server (Message Processor/Router) reachability.
  3. Update the selected Router and all other Edge components on the same machine as the Router. All Edge configurations show a Router and Message Processor on the same node.
  4. Make the Router reachable again.
  5. Repeat steps 2 through 4 for the remaining Routers.
  6. Continue the update for any remaining machines in your installation.
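The rolling-router loop in steps 2 through 4 can be sketched as follows; the router names and the echoed actions are placeholders for your load balancer and update commands:

```shell
# Sketch: drain, update, and re-enable each router in turn so that at
# least one router stays in the load balancer at all times.
ROUTERS="router-1 router-2"
for r in $ROUTERS; do
  echo "make $r unreachable in the load balancer"
  echo "update Edge components on $r"
  echo "make $r reachable again"
done
```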

Take care of the following before and after the update:

Use a silent configuration file

You must pass a silent configuration file to the update command. The silent configuration file should be the same one that you used to install Edge for Private Cloud 4.52.02 or 4.53.00.

Update to 4.53.01 on a node with an external internet connection

Use the following procedure to update the Edge components on a node:

  1. If present, disable any cron jobs configured to perform a repair operation on Cassandra until after the update completes.
  2. Log in to your node as root to install the Edge RPMs.
  3. Disable SELinux as described in Install the Edge apigee-setup utility.
  4. If you are installing on AWS, execute the following yum-config-manager commands:
    yum update rh-amazon-rhui-client.noarch
    sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional
  5. If you are currently on Edge 4.52.02 or 4.53.00:

    1. Download the Edge bootstrap_4.53.01.sh file to /tmp/bootstrap_4.53.01.sh:
      curl https://software.apigee.com/bootstrap_4.53.01.sh -o /tmp/bootstrap_4.53.01.sh
    2. Install the Edge 4.53.01 apigee-service utility and dependencies by executing the following command:
      sudo bash /tmp/bootstrap_4.53.01.sh apigeeuser=uName apigeepassword=pWord

      Where uName:pWord are the username and password you received from Apigee. If you omit pWord, you will be prompted to enter it.

      By default, the installer checks that you have Java 1.8 installed. If you do not, the installer installs it for you.

      Use the JAVA_FIX option to specify how to handle Java installation. JAVA_FIX takes the following values:

      • I: Install OpenJDK 1.8 (default).
      • C: Continue without installing Java.
      • Q: Quit. For this option, you must install Java yourself.
    3. Use apigee-service to update the apigee-setup utility, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service apigee-setup update
    4. Update the apigee-validate utility on the Management Server, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service apigee-validate update
    5. Update the apigee-provision utility on the Management Server, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service apigee-provision update
    6. Run the update utility on your nodes by executing the following command:
      /opt/apigee/apigee-setup/bin/update.sh -c component -f configFile

      Do this in the order described in Order of machine update.

      Where:

      • component is the Edge component to update. Possible values include:
        • cs: Cassandra
        • edge: All Edge components except Edge UI: Management Server, Message Processor, Router, QPID Server, Postgres Server
        • ldap: OpenLDAP
        • ps: postgresql
        • qpid: qpidd
        • sso: Apigee SSO (if you installed SSO)
        • ue: New Edge UI
        • ui: Classic Edge UI
        • zk: Zookeeper
      • configFile is the same configuration file that you used to define your Edge components during the 4.52.02 or 4.53.00 installation.

      You can run update.sh against all components by setting component to "all", but only if you have an Edge all-in-one (AIO) installation profile. For example:

      /opt/apigee/apigee-setup/bin/update.sh -c all -f ./sa_silent_config
    7. Restart the Edge UI components on all nodes running them, if you haven't done so already:
      /opt/apigee/apigee-service/bin/apigee-service [edge-management-ui|edge-ui] restart
    8. Test the update by running the apigee-validate utility on the Management Server, as described in Test the install.

If you later decide to roll back the update, use the procedure described in Roll back 4.53.01.

Update to 4.53.01 from a local repo

If your Edge nodes are behind a firewall, or in some other way are prohibited from accessing the Apigee repository over the Internet, then you can perform the update from a local repository, or mirror, of the Apigee repo.

After you create a local Edge repository, you have two options for updating Edge from the local repo:

  • Create a .tar file of the repo, copy the .tar file to a node, and then update Edge from the .tar file.
  • Install a webserver on the node with the local repo so that other nodes can access it. Apigee provides the Nginx webserver for you to use, or you can use your own webserver.

To update from a local 4.53.01 repo:

  1. Create a local 4.53.01 repo as described in "Create a local Apigee repository" at Install the Edge apigee-setup utility.
  2. To install apigee-service from a .tar file:
    1. On the node with the local repo, use the following command to package the local repo into a single .tar file named /opt/apigee/data/apigee-mirror/apigee-4.53.01.tar.gz:
      /opt/apigee/apigee-service/bin/apigee-service apigee-mirror package
    2. Copy the .tar file to the node where you want to update Edge. For example, copy it to the /tmp directory on the new node.
    3. On the new node, untar the file to the /tmp directory:
      tar -xzf apigee-4.53.01.tar.gz

      This command creates a new directory, named repos, in the directory containing the .tar file. For example, /tmp/repos.

    4. Install the Edge apigee-service utility and dependencies from /tmp/repos:
      sudo bash /tmp/repos/bootstrap_4.53.01.sh apigeeprotocol="file://" apigeerepobasepath=/tmp/repos

      Notice that you include the path to the repos directory in this command.

  3. To install apigee-service using the Nginx webserver:
    1. Configure the Nginx web server as described in "Install from the repo using the Nginx webserver" at Install the Edge apigee-setup utility.
    2. On the remote node, download the Edge bootstrap_4.53.01.sh file to /tmp/bootstrap_4.53.01.sh:
      /usr/bin/curl http://uName:pWord@remoteRepo:3939/bootstrap_4.53.01.sh -o /tmp/bootstrap_4.53.01.sh

      Where uName:pWord are the username and password you set previously for the repo, and remoteRepo is the IP address or DNS name of the repo node.

    3. On the remote node, install the Edge apigee-setup utility and dependencies:
      sudo bash /tmp/bootstrap_4.53.01.sh apigeerepohost=remoteRepo:3939 apigeeuser=uName apigeepassword=pWord apigeeprotocol=http://

      Where uName:pWord are the repo username and password.

  4. Use apigee-service to update the apigee-setup utility, as the following example shows:
    /opt/apigee/apigee-service/bin/apigee-service apigee-setup update 
  5. Update the apigee-validate utility on the Management Server, as the following example shows:
    /opt/apigee/apigee-service/bin/apigee-service apigee-validate update
  6. Update the apigee-provision utility on the Management Server, as the following example shows:
    /opt/apigee/apigee-service/bin/apigee-service apigee-provision update
  7. Run the update utility on your nodes in the order described in Order of machine update:
    /opt/apigee/apigee-setup/bin/update.sh -c component -f configFile

    Where:

    • component is the Edge component to update. You typically update the following components:
      • cs: Cassandra
      • edge: All Edge components except Edge UI: Management Server, Message Processor, Router, QPID Server, Postgres Server
      • ldap: OpenLDAP
      • ps: postgresql
      • qpid: qpidd
      • sso: Apigee SSO (if you installed SSO)
      • ue: New Edge UI
      • ui: Classic Edge UI
      • zk: Zookeeper
    • configFile is the same configuration file that you used to define your Edge components during the 4.52.02 or 4.53.00 installation.

    You can run update.sh against all components by setting component to "all", but only if you have an Edge all-in-one (AIO) installation profile. For example:

    /opt/apigee/apigee-setup/bin/update.sh -c all -f /tmp/sa_silent_config
  8. Restart the UI components on all nodes running them, if you haven't done so already:
    /opt/apigee/apigee-service/bin/apigee-service [edge-management-ui|edge-ui] restart
  9. Test the update by running the apigee-validate utility on the Management Server, as described in Test the install.

If you later decide to roll back the update, use the procedure described in Roll back 4.53.01.

Order of machine update

The order that you update the machines in an Edge installation is important:

  • You must update all LDAP nodes before updating any other components. You will need to follow special steps to upgrade LDAP.
  • You must update all Cassandra and ZooKeeper nodes. If you're upgrading from 4.52.02, follow the special steps to upgrade Cassandra. You will need to follow the special steps to upgrade Zookeeper for 4.52.02 or 4.53.00.
  • You must upgrade all Management Servers, Routers, and Message Processors using the -c edge option.
  • You must upgrade all Postgres nodes following the special steps for upgrading Postgres.
  • You must update the edge-qpid-server and edge-postgres-server components across all data centers.
  • You must upgrade all Qpid nodes.
  • You must upgrade the Edge UI nodes, and also upgrade the New Edge UI and SSO nodes (if applicable).
  • There is no separate step to update Monetization. It is updated when you specify the -c edge option.

1-node standalone upgrade

To upgrade a 1-node standalone configuration to 4.53.01:

  1. Update all components:
    /opt/apigee/apigee-setup/bin/update.sh -c all -f configFile
  2. (If you installed apigee-adminapi) Update the apigee-adminapi utility:
    /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update

2-node standalone upgrade

Update the following components for a 2-node standalone installation:

See Installation topologies for the list of Edge topologies and node numbers.

  1. Update LDAP on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
  2. Update Cassandra and ZooKeeper on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
  3. Update Edge components on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  4. Update Postgres on machine 2:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  5. Update Edge components on machine 2:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  6. Update Qpid on machine 2:
    /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  7. Update the UI on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c ui -f configFile
  8. (If you installed apigee-adminapi) Update the apigee-adminapi utility on machine 1:
    /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  9. (If you installed Apigee SSO) Update Apigee SSO on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file

    Where sso_config_file is the configuration file you created when you installed SSO.

  10. Restart the Edge UI component on machine 1:
    /opt/apigee/apigee-service/bin/apigee-service edge-ui restart

5-node upgrade

Update the following components for a 5-node installation:

See Installation topologies for the list of Edge topologies and node numbers.

  1. Update LDAP on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
  2. Update Cassandra and ZooKeeper on machines 1, 2, and 3:
    /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
  3. Update Edge components on machines 1, 2, and 3:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  4. Update Postgres on machine 4:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  5. Update Postgres on machine 5:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  6. Update Edge components on machines 4 and 5:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  7. Update Qpid on machine 4:
    /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  8. Update Qpid on machine 5:
    /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  9. Update the Edge UI:
    • Classic UI: If you are using the classic UI, then update the ui component on machine 1, as the following example shows:
      /opt/apigee/apigee-setup/bin/update.sh -c ui -f configFile
    • New Edge UI: If you installed the new Edge UI, then update the ue component on the appropriate machine (may not be machine 1):
      /opt/apigee/apigee-setup/bin/update.sh -c ue -f /opt/silent.conf
  10. (If you installed apigee-adminapi) Update the apigee-adminapi utility on machine 1:
    /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  11. (If you installed Apigee SSO) Update Apigee SSO on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file

    Where sso_config_file is the configuration file you created when you installed SSO.

  12. Restart the UI component:
    • Classic UI: If you are using the classic UI, then restart the edge-ui component on machine 1, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
    • New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on the appropriate machine (may not be machine 1):
      /opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart
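Which profile (ui or ue) and which service name (edge-ui or edge-management-ui) apply depends on which UI is installed. The following sketch shows one way to make that choice; the directory probe is an assumption based on the convention that Edge components live under /opt/apigee/<component>, and it only prints the commands it would run:

```shell
#!/bin/bash
# Sketch: pick the UI update profile and service name based on
# which UI component is present on this node. The directory probe
# is an assumption, not official Apigee tooling.
UI_DIR="${UI_DIR:-/opt/apigee/edge-management-ui}"

if [ -d "$UI_DIR" ]; then
  UI_PROFILE=ue                    # new Edge UI
  UI_COMPONENT=edge-management-ui
else
  UI_PROFILE=ui                    # classic UI
  UI_COMPONENT=edge-ui
fi

# Dry run: print the update and restart commands for this node.
echo "/opt/apigee/apigee-setup/bin/update.sh -c $UI_PROFILE -f configFile"
echo "/opt/apigee/apigee-service/bin/apigee-service $UI_COMPONENT restart"
```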

9-node clustered upgrade

Update the following components for a 9-node clustered installation:

See Installation topologies for the list of Edge topologies and node numbers.

  1. Update LDAP on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
  2. Update Cassandra and ZooKeeper on machines 1, 2, and 3:
    /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
  3. Update Edge components (Management Server, Message Processor, Router) on machines 1, 4, and 5, in that order:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  4. Update Postgres on machine 8:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  5. Update Postgres on machine 9:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  6. Update Edge components on machines 6, 7, 8, and 9, in that order:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  7. Update Qpid on machines 6 and 7:
    /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  8. Update either the new UI (ue) or classic UI (ui) on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
  9. (If you installed apigee-adminapi) Update the apigee-adminapi utility on machine 1:
    /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  10. (If you installed Apigee SSO) Update Apigee SSO on machine 1:
    /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file

    Where sso_config_file is the configuration file you created when you installed SSO.

  11. Restart the UI component:
    • Classic UI: If you are using the classic UI, then restart the edge-ui component on machine 1, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
    • New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on the appropriate machine (may not be machine 1):
      /opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart
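Steps 3 and 6 above must run across nodes in the stated order. One way to script that is a simple per-host loop over ssh; the host names below are hypothetical placeholders, and the sketch prints the commands rather than executing them:

```shell
#!/bin/bash
# Dry-run sketch: apply the edge profile across the 9-node
# topology in the documented order. Host names are hypothetical;
# replace them with your actual machine addresses.
CONFIG_FILE="${CONFIG_FILE:-configFile}"
FIRST_WAVE=(machine1 machine4 machine5)       # Management Server, Message Processor, Router
SECOND_WAVE=(machine6 machine7 machine8 machine9)

for host in "${FIRST_WAVE[@]}"; do
  echo "ssh $host /opt/apigee/apigee-setup/bin/update.sh -c edge -f $CONFIG_FILE"
done
# ...the Postgres updates on machines 8 and 9 happen between the waves...
for host in "${SECOND_WAVE[@]}"; do
  echo "ssh $host /opt/apigee/apigee-setup/bin/update.sh -c edge -f $CONFIG_FILE"
done
```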

13-node clustered upgrade

Update the following components for a 13-node clustered installation:

See Installation topologies for the list of Edge topologies and node numbers.

  1. Update LDAP on machines 4 and 5:
    /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
  2. Update Cassandra and ZooKeeper on machines 1, 2, and 3:
    /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
  3. Update Edge components on machines 6, 7, 10, and 11 in that order:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  4. Update Postgres on machine 8:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  5. Update Postgres on machine 9:
    /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  6. Update Edge components on machines 12, 13, 8, and 9 in that order:
    /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  7. Update Qpid on machines 12 and 13:
    /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  8. Update either the new UI (ue) or classic UI (ui) on machines 6 and 7:
    /opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
  9. (If you installed apigee-adminapi) Update the apigee-adminapi utility on machines 6 and 7:
    /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  10. (If you installed Apigee SSO) Update Apigee SSO on machines 6 and 7:
    /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file

    Where sso_config_file is the configuration file you created when you installed SSO.

  11. Restart the UI component:
    • Classic UI: If you are using the classic UI, then restart the edge-ui component on machines 6 and 7, as the following example shows:
      /opt/apigee/apigee-service/bin/apigee-service edge-ui restart
    • New Edge UI: If you installed the new Edge UI, then restart the edge-management-ui component on machines 6 and 7:
      /opt/apigee/apigee-service/bin/apigee-service edge-management-ui restart

12-node clustered upgrade

Update the following components for a 12-node clustered installation:

See Installation topologies for the list of Edge topologies and node numbers.

  1. Update LDAP:
    1. Machine 1 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
    2. Machine 7 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c ldap -f configFile
  2. Update Cassandra and ZooKeeper:
    1. Machines 1, 2, and 3 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
    2. Machines 7, 8, and 9 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c cs,zk -f configFile
  3. Update Edge components:
    1. Machines 1, 2, and 3 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
    2. Machines 7, 8, and 9 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  4. Update Postgres:
    1. Machine 6 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
    2. Machine 12 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c ps -f configFile
  5. Update Edge components:
    1. Machines 4, 5, and 6 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
    2. Machines 10, 11, and 12 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c edge -f configFile
  6. Update Qpid:
    1. Machines 4 and 5 in Data Center 1 (update machine 4 first, then machine 5):
      /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
    2. Machines 10 and 11 in Data Center 2 (update machine 10 first, then machine 11):
      /opt/apigee/apigee-setup/bin/update.sh -c qpid -f configFile
  7. Update either the new UI (ue) or classic UI (ui):
    1. Machine 1 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
    2. Machine 7 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c [ui|ue] -f configFile
  8. (If you installed apigee-adminapi) Update the apigee-adminapi utility:
    1. Machine 1 in Data Center 1:
      /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
    2. Machine 7 in Data Center 2:
      /opt/apigee/apigee-service/bin/apigee-service apigee-adminapi update
  9. (If you installed Apigee SSO) Update Apigee SSO:
    1. Machine 1 in Data Center 1:
      /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
    2. Machine 7 in Data Center 2:
      /opt/apigee/apigee-setup/bin/update.sh -c sso -f sso_config_file
    Where sso_config_file is the configuration file you created when you installed SSO.

  10. Restart the new Edge UI (edge-management-ui) or classic Edge UI (edge-ui) component on machines 1 and 7:
    /opt/apigee/apigee-service/bin/apigee-service [edge-ui|edge-management-ui] restart
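In this two-data-center topology, every phase runs once per data center. The pattern can be sketched for the LDAP phase as a per-data-center loop; the host names are hypothetical placeholders, and the loop only prints the commands:

```shell
#!/bin/bash
# Dry-run sketch: the 12-node pattern repeats each phase across
# both data centers. Host names are hypothetical placeholders.
CONFIG_FILE="${CONFIG_FILE:-configFile}"
declare -A LDAP_HOSTS=(
  [dc1]="machine1"   # Machine 1 in Data Center 1
  [dc2]="machine7"   # Machine 7 in Data Center 2
)

for dc in dc1 dc2; do
  echo "${LDAP_HOSTS[$dc]}: /opt/apigee/apigee-setup/bin/update.sh -c ldap -f $CONFIG_FILE"
done
```

The same two-entry map, with different machine lists, covers the Cassandra/ZooKeeper, Edge, Postgres, Qpid, UI, and SSO phases.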

For a non-standard configuration

If you have a non-standard configuration, then update Edge components in the following order:

  1. LDAP
  2. Cassandra
  3. Management Server
  4. Message Processor
  5. Router
  6. Zookeeper
  7. Postgres
  8. Edge, meaning the "-c edge" profile, on the nodes hosting the Qpid server, then the nodes hosting the Edge Postgres server
  9. qpidd
  10. Edge UI (either classic or new)
  11. apigee-adminapi
  12. Apigee SSO

After you finish updating, be sure to restart the Edge UI component on all machines running it.
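The ordering above can be kept as a simple checklist and printed as a dry run before you begin. This is a sketch, not Apigee tooling; it only lists the components in the required order:

```shell
#!/bin/bash
# Sketch: the component update order for a non-standard topology,
# printed as a numbered checklist. The "-c edge" profile must run
# on Qpid nodes before Edge Postgres nodes.
ORDER=(
  "LDAP" "Cassandra" "Management Server" "Message Processor" "Router"
  "ZooKeeper" "Postgres"
  "Edge (-c edge profile: Qpid nodes, then Edge Postgres nodes)"
  "qpidd" "Edge UI" "apigee-adminapi" "Apigee SSO"
)

i=1
for component in "${ORDER[@]}"; do
  echo "$i. update $component"
  i=$((i + 1))
done
```

After the updates, /opt/apigee/apigee-service/bin/apigee-all status (mentioned in the prerequisites) is a quick way to confirm all components came back up.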