This procedure covers upgrading from Apigee hybrid version 1.14.x to Apigee hybrid version 1.15.0.
Changes from Apigee hybrid v1.14
Note the following changes:
- Large message payload support: Starting in version 1.15, and also in the 1.14.2 patch release, Apigee supports message payloads up to 30MB. For more information, see the following (a brief overrides sketch follows this list):
  - Configure large message payload support in Apigee hybrid
  - runtime.resources.limits.memory in the Configuration property reference
  - runtime.resources.requests.memory in the Configuration property reference
- Stricter class instantiation checks: In Apigee hybrid 1.15, the JavaCallout policy includes additional security during Java class instantiation. The enhanced security measure prevents the deployment of policies that directly or indirectly attempt actions requiring permissions that are not allowed.

  In most cases, existing policies will continue to function as expected without any issues. However, policies that rely on third-party libraries, or that contain custom code which indirectly triggers operations requiring elevated permissions, could be affected.
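For reference, here is a minimal overrides.yaml sketch for the two runtime memory properties named above. The sizes shown are illustrative assumptions only, not Apigee recommendations; size them for your own payload profile and traffic volume:

# Illustrative only: raise runtime memory alongside large message payload support.
runtime:
  resources:
    requests:
      memory: 3Gi   # assumed example value
    limits:
      memory: 6Gi   # assumed example value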
For additional information about features in hybrid version 1.15, see the Apigee hybrid v1.15.0 release notes.
Prerequisites
Before upgrading to hybrid version 1.15, make sure your installation meets the following requirements:
- If your hybrid installation is running a version older than v1.14, you must upgrade to version 1.14 before upgrading to v1.15. See Upgrading Apigee hybrid to version 1.14.
- Helm version v3.14.2+.
- kubectl: A supported version of kubectl appropriate for your Kubernetes platform version. See Supported platforms and versions: kubectl.
- cert-manager: A supported version of cert-manager. See Supported platforms and versions: cert-manager. If needed, you will upgrade cert-manager in the Prepare to upgrade to version 1.15 section below.
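Before proceeding, you can quickly verify the installed tool versions (a sketch; the cert-manager namespace is assumed to be the default cert-manager used by the official manifest):

helm version --short
kubectl version --client
kubectl get pods -n cert-manager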
Before you upgrade to 1.15.0 - limitations and important notes
Apigee hybrid 1.15.0 introduces a new enhanced per-environment proxy limit that lets you deploy more proxies and shared flows in a single environment. See Limits: API Proxies to understand the limits on the number of proxies and shared flows you may deploy per environment. This feature is available only on newly created hybrid organizations, and cannot be applied to upgraded orgs. To use this feature, perform a fresh installation of hybrid 1.15.0, and create a new organization.
This feature is available exclusively as part of the 2024 subscription plan, and is subject to the entitlements granted under that subscription. See Enhanced per-environment proxy limits to learn more about this feature.
Upgrading to Apigee hybrid version 1.15 may require downtime.
When upgrading the Apigee controller to version 1.15.0, all Apigee deployments undergo a rolling restart. To minimize downtime in production hybrid environments during a rolling restart, make sure you are running at least two clusters (in the same or a different region/data center). Divert all production traffic to one cluster, take the cluster you are about to upgrade offline, and then proceed with the upgrade process. Repeat the process for each cluster.
Apigee recommends that once you begin the upgrade, you upgrade all clusters as soon as possible to reduce the chance of production impact. There is no time limit on when the remaining clusters must be upgraded after the first one. However, until all remaining clusters are upgraded, Cassandra backup and restore cannot work across mixed versions. For example, a backup from hybrid 1.14 cannot be used to restore a hybrid 1.15 instance.
Management plane changes do not need to be fully suspended during an upgrade. Any required temporary suspensions to management plane changes are noted in the upgrade instructions below.
Upgrading to version 1.15.0 overview
The procedures for upgrading Apigee hybrid are organized in the following sections:
Prepare to upgrade to version 1.15
Back up your hybrid installation
- These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change to this directory and define the variable with the following command:
Linux / macOS:

export APIGEE_HELM_CHARTS_HOME=$PWD
echo $APIGEE_HELM_CHARTS_HOME

Windows:

set APIGEE_HELM_CHARTS_HOME=%CD%
echo %APIGEE_HELM_CHARTS_HOME%
- Make a backup copy of your version 1.14 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:

tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.14-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
- Back up your Cassandra database following the instructions in Cassandra backup and recovery.
- If you are using service account cert files (.json) in your overrides to authenticate service accounts, make sure the cert files reside in the correct Helm chart directory. Helm charts cannot read files outside of each chart directory.

  This step is not required if you are using Kubernetes secrets or Workload Identity Federation for GKE to authenticate service accounts.

  The following table shows the destination for each service account file, depending on your type of installation:
| Service account | Default filename | Helm chart directory |
|---|---|---|
| apigee-cassandra | PROJECT_ID-apigee-cassandra.json | $APIGEE_HELM_CHARTS_HOME/apigee-datastore/ |
| apigee-logger | PROJECT_ID-apigee-logger.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
| apigee-mart | PROJECT_ID-apigee-mart.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
| apigee-metrics | PROJECT_ID-apigee-metrics.json | $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/ |
| apigee-runtime | PROJECT_ID-apigee-runtime.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
| apigee-synchronizer | PROJECT_ID-apigee-synchronizer.json | $APIGEE_HELM_CHARTS_HOME/apigee-env/ |
| apigee-udca | PROJECT_ID-apigee-udca.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
| apigee-watcher | PROJECT_ID-apigee-watcher.json | $APIGEE_HELM_CHARTS_HOME/apigee-org/ |
For non-production installations, make a copy of the apigee-non-prod service account file (default filename: PROJECT_ID-apigee-non-prod.json) in each of the following directories (a copy-helper sketch follows this list):

- $APIGEE_HELM_CHARTS_HOME/apigee-datastore/
- $APIGEE_HELM_CHARTS_HOME/apigee-telemetry/
- $APIGEE_HELM_CHARTS_HOME/apigee-org/
- $APIGEE_HELM_CHARTS_HOME/apigee-env/
- Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
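For the non-prod copy step above, a minimal helper sketch, assuming the default filename and that you run it from the directory containing the key file (replace PROJECT_ID with your Google Cloud project ID):

# Copy the non-prod service account key into each chart directory that needs it.
for chart in apigee-datastore apigee-telemetry apigee-org apigee-env; do
  cp PROJECT_ID-apigee-non-prod.json "$APIGEE_HELM_CHARTS_HOME/$chart/"
done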
Upgrade your Kubernetes version
Check your Kubernetes platform version and, if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.14 and hybrid 1.15. Follow your platform's documentation if you need help.
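For example, you can check the cluster's current server version with kubectl before deciding whether a platform upgrade is needed:

kubectl version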
The following tables list the supported platform and component versions for hybrid versions 1.13 through 1.15:
| Platform | 1.13 | 1.14 | 1.15 |
|---|---|---|---|
| GKE on Google Cloud | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| GKE on AWS | 1.27.x, 1.28.x, 1.29.x(≥ 1.12.1)(8), 1.30.x | 1.28.x, 1.29.x(≥ 1.12.1)(8), 1.30.x, 1.31.x | 1.29.x(≥ 1.12.1)(8), 1.30.x, 1.31.x, 1.32.x |
| GKE on Azure | 1.27.x, 1.28.x, 1.29.x(≥ 1.12.1)(8), 1.30.x | 1.28.x, 1.29.x(≥ 1.12.1)(8), 1.30.x, 1.31.x | 1.29.x(≥ 1.12.1)(8), 1.30.x, 1.31.x, 1.32.x |
| Google Distributed Cloud (software only) on VMware (5) | 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x, 1.30.x | 1.28.x(4), 1.29.x, 1.30.x, 1.31.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| Google Distributed Cloud (software only) on bare metal | 1.16.x (K8s v1.27.x), 1.28.x(4), 1.29.x, 1.30.x | 1.28.x(4), 1.29.x, 1.30.x, 1.31.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| EKS | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x, 1.32.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| AKS | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| OpenShift(9) | 4.12, 4.13, 4.14, 4.15, 4.16 | 4.13, 4.14, 4.15, 4.16, 4.17 | 4.16, 4.17, 4.18 |
| Rancher Kubernetes Engine (RKE) | v1.26.2+rke2r1, v1.27.x, 1.28.x, 1.29.x, 1.30.x | v1.27.x, 1.28.x, 1.29.x, 1.30.x, 1.31.x | v1.28.x, 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| VMware Tanzu | v1.26.x | v1.26.x | v1.26.x |

| Components | 1.13 | 1.14 | 1.15 |
|---|---|---|---|
| Cloud Service Mesh | 1.19.x(3) | 1.22.x(3) | 1.22.x(3) |
| JDK | JDK 11 | JDK 11 | JDK 11 |
| cert-manager | 1.15.x(10), 1.16.x(10), 1.17.x(10) | 1.15.x(10), 1.16.x(10), 1.17.x(10) | 1.16.x(10), 1.17.x(10) |
| Cassandra | 4.0 | 4.0 | 4.0 |
| Kubernetes | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x, 1.32.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| kubectl | 1.27.x, 1.28.x, 1.29.x, 1.30.x | 1.28.x, 1.29.x, 1.30.x, 1.31.x, 1.32.x | 1.29.x, 1.30.x, 1.31.x, 1.32.x |
| Helm | 3.14.2+ | 3.14.2+ | 3.14.2+ |
| Secret Store CSI driver | 1.4.6+ | 1.4.6+ | 1.4.6+ |
| Vault | 1.15.2 | 1.17.2 | 1.17.2 |

(1) On Anthos on-premises (Google Distributed Cloud) version 1.13, follow these instructions to avoid conflict with
(2) The official EOL dates for Apigee hybrid versions 1.12 and older have been reached. Regular monthly patches are no longer available. These releases are no longer officially supported except for customers with explicit and official exceptions for continued support. Other customers must upgrade.
(3) Cloud Service Mesh is automatically installed with Apigee hybrid 1.9 and newer.
(4) GKE on AWS version numbers now reflect the Kubernetes versions. See GKE Enterprise version and upgrade support for version details and recommended patches.
(5) Vault is not certified on Google Distributed Cloud for VMware.
(6) Support available with Apigee hybrid version 1.10.5 and newer.
(7) Support available with Apigee hybrid version 1.11.2 and newer.
(8) Support available with Apigee hybrid version 1.12.1 and newer.
(9) Apigee hybrid is tested and certified on OpenShift using the Kubernetes version bundled with each specific OCP version.
(10) Some versions of cert-manager have an issue where the webhook TLS server may fail to automatically renew its CA certificate. To avoid this, Apigee recommends using:
Remove/upgrade Istio CRDs
During an upgrade from Apigee hybrid version 1.14.1 or older, the presence of istio.io Custom Resource Definitions (CRDs) in an Apigee hybrid cluster may cause failed readiness probes in the discovery containers of the apigee-ingressgateway-manager pods.

There are two options to fix this issue:

- Delete the istio.io CRDs if you are not using Istio for any purpose other than Apigee in your cluster.
- Update the apigee-ingressgateway-manager clusterrole to add permissions for istio.io.

After either option, you will need to restart your apigee-ingressgateway-manager pods.

See Known Issue 416634326 for more information about istio.io CRDs in Apigee hybrid.
- Determine if you have istio.io CRDs in your cluster with the following command:

kubectl get crd -o custom-columns=NAME:metadata.name | grep istio.io

If your cluster has istio.io CRDs, your output will look something like the following:

authorizationpolicies.security.istio.io
destinationrules.networking.istio.io
envoyfilters.networking.istio.io
gateways.networking.istio.io
peerauthentications.security.istio.io
proxyconfigs.networking.istio.io
requestauthentications.security.istio.io
serviceentries.networking.istio.io
sidecars.networking.istio.io
telemetries.telemetry.istio.io
virtualservices.networking.istio.io
wasmplugins.extensions.istio.io
workloadentries.networking.istio.io
workloadgroups.networking.istio.io

- List the istio.io CRDs in your cluster to a CSV file:

kubectl get crd -o custom-columns=NAME:metadata.name | grep istio.io > istio-crd.csv

- Optional: Save the CRDs locally in case you need to recreate them:

kubectl get crd $(cat istio-crd.csv) -o yaml > istio-crd.yaml
- Delete the istio.io CRDs:

Dry run:

kubectl delete crd $(cat istio-crd.csv) --dry-run=client

Execute:

kubectl delete crd $(cat istio-crd.csv)
- Get the current apigee-ingressgateway-manager clusterrole:

kubectl get clusterrole apigee-ingressgateway-manager-apigee -o yaml > apigee-ingressgateway-manager-apigee-clusterrole.yaml

- Copy the clusterrole to a new file:

cp apigee-ingressgateway-manager-apigee-clusterrole.yaml apigee-ingressgateway-manager-apigee-clusterrole-added-istio-permissions.yaml

- Add the following additional permissions to the end of the file:

- apiGroups:
  - gateway.networking.k8s.io
  resources:
  - gatewayclasses
  - gateways
  - grpcroutes
  - httproutes
  - referencegrants
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.istio.io
  resources:
  - sidecars
  - destinationrules
  - gateways
  - virtualservices
  - envoyfilters
  - workloadentries
  - serviceentries
  - workloadgroups
  - proxyconfigs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - security.istio.io
  resources:
  - peerauthentications
  - authorizationpolicies
  - requestauthentications
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - telemetry.istio.io
  resources:
  - telemetries
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions.istio.io
  resources:
  - wasmplugins
  verbs:
  - get
  - list
  - watch

- Apply the role:

kubectl -n APIGEE_NAMESPACE apply -f apigee-ingressgateway-manager-apigee-clusterrole-added-istio-permissions.yaml

After you have completed either option, restart your apigee-ingressgateway-manager pods:
- List the ingress-manager deployments:

kubectl get deployments -n APIGEE_NAMESPACE

Example output:

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
apigee-controller-manager       1/1     1            1           32d
apigee-ingressgateway-manager   2/2     2            2           32d

- Restart the ingress-manager pods:

kubectl rollout restart deployment -n APIGEE_NAMESPACE apigee-ingressgateway-manager

- After a few minutes, monitor the apigee-ingressgateway-manager pods:

watch -n 10 kubectl -n APIGEE_NAMESPACE get pods -l app=apigee-ingressgateway-manager

Example output:

NAME                                              READY   STATUS    RESTARTS   AGE
apigee-ingressgateway-manager-12345abcde-678wx   3/3     Running   0          10m
apigee-ingressgateway-manager-12345abcde-901yz   3/3     Running   0          10m
Install the hybrid 1.15.0 runtime
Configure the data collection pipeline.
Starting with hybrid v1.14, a new analytics and debug data pipeline is enabled by default for all Apigee hybrid orgs. You must follow the steps in Enable analytics publisher access to configure the authorization flow.
Prepare for the Helm charts upgrade
- Pull the Apigee Helm charts. Apigee hybrid charts are hosted in Google Artifact Registry:

oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts

Using the pull command, copy all of the Apigee hybrid Helm charts to your local storage with the following commands:

export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.15.0
helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
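As an optional sanity check (a sketch, assuming you ran the pull commands from the current directory), confirm each untarred chart reports the expected version:

# Each Chart.yaml carries a top-level "version:" field; all should read 1.15.0.
for chart in apigee-operator apigee-datastore apigee-env apigee-ingress-manager apigee-org apigee-redis apigee-telemetry apigee-virtualhost; do
  grep '^version:' "$chart/Chart.yaml"
done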
- Upgrade cert-manager if needed. If you need to upgrade your cert-manager version, install the new version with the following command:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.2/cert-manager.yaml

See Supported platforms and versions: cert-manager for a list of supported versions.
- If your Apigee namespace is not apigee, edit the apigee-operator/etc/crds/default/kustomization.yaml file and replace the namespace value with your Apigee namespace:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: APIGEE_NAMESPACE

If you are using apigee as your namespace, you do not need to edit the file.

- Install the updated Apigee CRDs:

  - Use the kubectl dry-run feature by running the following command:

kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server

  - After validating with the dry-run command, run the following command:

kubectl apply -k apigee-operator/etc/crds/default/ \
  --server-side \
  --force-conflicts \
  --validate=false

  - Validate the installation with the kubectl get crds command:

kubectl get crds | grep apigee

Your output should look something like the following:

apigeedatastores.apigee.cloud.google.com             2024-08-21T14:48:30Z
apigeedeployments.apigee.cloud.google.com            2024-08-21T14:48:30Z
apigeeenvironments.apigee.cloud.google.com           2024-08-21T14:48:31Z
apigeeissues.apigee.cloud.google.com                 2024-08-21T14:48:31Z
apigeeorganizations.apigee.cloud.google.com          2024-08-21T14:48:32Z
apigeeredis.apigee.cloud.google.com                  2024-08-21T14:48:33Z
apigeerouteconfigs.apigee.cloud.google.com           2024-08-21T14:48:33Z
apigeeroutes.apigee.cloud.google.com                 2024-08-21T14:48:33Z
apigeetelemetries.apigee.cloud.google.com            2024-08-21T14:48:34Z
cassandradatareplications.apigee.cloud.google.com    2024-08-21T14:48:35Z
- Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data, and runtime pods on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file.

  For more information, see Configuring dedicated node pools.
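For example, to see which node pool label each node carries (the gke-nodepool label key shown is the default from above; adjust it if you customized your labels):

kubectl get nodes -L cloud.google.com/gke-nodepool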
Install the Apigee hybrid Helm charts
- If you have not already done so, navigate into your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
- Upgrade the Apigee Operator/Controller:

Dry run:

helm upgrade operator apigee-operator/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade operator apigee-operator/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify the Apigee Operator installation:

helm ls -n APIGEE_NAMESPACE

NAME      NAMESPACE  REVISION  UPDATED                               STATUS    CHART                   APP VERSION
operator  apigee     3         2024-08-21 00:42:44.492009 -0800 PST  deployed  apigee-operator-1.15.0  1.15.0

Verify it is up and running by checking its availability:

kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
apigee-controller-manager   1/1     1            1           7d20h
- Upgrade the Apigee datastore:

Dry run:

helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify apigeedatastore is up and running by checking its state:

kubectl -n APIGEE_NAMESPACE get apigeedatastore default

NAME      STATE     AGE
default   running   2d
- Upgrade Apigee telemetry:

Dry run:

helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its state:

kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry

NAME               STATE     AGE
apigee-telemetry   running   2d
- Upgrade Apigee Redis:

Dry run:

helm upgrade redis apigee-redis/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade redis apigee-redis/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its state:

kubectl -n APIGEE_NAMESPACE get apigeeredis default

NAME      STATE     AGE
default   running   2d
- Upgrade Apigee ingress manager:

Dry run:

helm upgrade ingress-manager apigee-ingress-manager/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade ingress-manager apigee-ingress-manager/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking its availability:

kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
apigee-ingressgateway-manager   2/2     2            2           2d
- Upgrade the Apigee organization:

Dry run:

helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE \
  --dry-run=server

Upgrade the chart:

helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  -f OVERRIDES_FILE

Verify it is up and running by checking the state of the respective org:

kubectl -n APIGEE_NAMESPACE get apigeeorg

NAME                STATE     AGE
apigee-org1-xxxxx   running   2d
- Upgrade the environment.

You must upgrade one environment at a time. Specify the environment with --set env=ENV_NAME.

Dry run:

helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE \
  --dry-run=server

- ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.
- ENV_NAME is the name of the environment you are upgrading.
- OVERRIDES_FILE is your new overrides file for v1.15.0.

Upgrade the chart:

helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE

Verify it is up and running by checking the state of the respective environment:

kubectl -n APIGEE_NAMESPACE get apigeeenv

NAME                  STATE     AGE   GATEWAYTYPE
apigee-org1-dev-xxx   running   2d
- Upgrade the environment groups (virtualhosts).

You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

Dry run:

helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set envgroup=ENV_GROUP_NAME \
  -f OVERRIDES_FILE \
  --dry-run=server

ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. It is usually ENV_GROUP_NAME.

Upgrade the chart:

helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set envgroup=ENV_GROUP_NAME \
  -f OVERRIDES_FILE

- Check the state of the ApigeeRoute (AR). Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

kubectl -n APIGEE_NAMESPACE get arc

NAME                     STATE   AGE
apigee-org1-dev-egroup           2d

kubectl -n APIGEE_NAMESPACE get ar

NAME                            STATE     AGE
apigee-org1-dev-egroup-xxxxxx   running   2d
- After you have verified all the installations are upgraded successfully, delete the older apigee-operator release from the apigee-system namespace.

  - Uninstall the old operator release:

helm delete operator -n apigee-system

  - Delete the apigee-system namespace:

kubectl delete namespace apigee-system
- Upgrade operator again in your Apigee namespace to re-install the deleted cluster-scoped resources:

helm upgrade operator apigee-operator/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  -f overrides.yaml
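As a quick confirmation (a sketch), check that all releases now live in your Apigee namespace and that the old namespace is gone; the namespace lookup should report NotFound:

helm ls -n APIGEE_NAMESPACE
kubectl get namespace apigee-system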
Validate policies after upgrade from v1.14.0 or earlier
Use this procedure to validate the behavior of the JavaCallout policy after upgrading from version 1.14.0 or earlier.
- Check whether the Java JAR files request unnecessary permissions.

  After the policy is deployed, check the runtime logs for the message "Failed to load and initialize class ...". If you observe this message, the deployed JAR requested unnecessary permissions. To resolve this issue, investigate the Java code and update the JAR file. (A log-scan sketch follows this procedure.)
. If you observe this message, it suggests that the deployed JAR requested unnecessary permissions. To resolve this issue, investigate the Java code and update the JAR file. - Investigate and update the Java code.
Review any Java code (including dependencies) to identify the cause of potentially unallowed operations. When found, modify the source code as required.
- Test policies with the security check enabled.

  In a non-production environment, enable the security check flag and redeploy your policies with an updated JAR. To set the flag:

  - In the apigee-env/values.yaml file, set conf_security-secure.constructor.only to true under runtime:cwcAppend:. For example:

# Apigee Runtime
runtime:
  cwcAppend:
    conf_security-secure.constructor.only: true
  - Update the apigee-env chart for the environment to apply the change. For example:

helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace APIGEE_NAMESPACE \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE

ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.

If the log message "Failed to load and initialize class ..." is still present, continue modifying and testing the JAR until the log message no longer appears.
- Enable the security check in the production environment.

  After you have thoroughly tested and verified the JAR file in the non-production environment, enable the security check in your production environment by setting the flag conf_security-secure.constructor.only to true and updating the apigee-env chart for the production environment to apply the change.
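To scan the runtime logs for the failure message referenced above, here is a minimal sketch; the app=apigee-runtime pod label is an assumption, so adjust the selector to match your install:

# Pull recent runtime logs and filter for the class-initialization failure.
kubectl logs -n APIGEE_NAMESPACE -l app=apigee-runtime --all-containers --tail=2000 \
  | grep "Failed to load and initialize class"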
Rolling back to a previous version
To roll back to the previous version, use the older chart version to roll back the upgrade process in reverse order. Start with apigee-virtualhost and work your way back to apigee-operator, and then revert the CRDs.
- Revert all the charts from apigee-virtualhost to apigee-datastore. The following commands assume you are using the charts from the previous version (v1.14.x).

Run the following command for each environment group:

helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
  --install \
  --namespace apigee \
  --atomic \
  --set envgroup=ENV_GROUP_NAME \
  -f 1.14_OVERRIDES_FILE

Run the following command for each environment:

helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --install \
  --namespace apigee \
  --atomic \
  --set env=ENV_NAME \
  -f 1.14_OVERRIDES_FILE

Revert the remaining charts except for apigee-operator:

helm upgrade ORG_NAME apigee-org/ \
  --install \
  --namespace apigee \
  --atomic \
  -f 1.14_OVERRIDES_FILE

helm upgrade ingress-manager apigee-ingress-manager/ \
  --install \
  --namespace apigee \
  --atomic \
  -f 1.14_OVERRIDES_FILE

helm upgrade redis apigee-redis/ \
  --install \
  --namespace apigee \
  --atomic \
  -f 1.14_OVERRIDES_FILE

helm upgrade telemetry apigee-telemetry/ \
  --install \
  --namespace apigee \
  --atomic \
  -f 1.14_OVERRIDES_FILE

helm upgrade datastore apigee-datastore/ \
  --install \
  --namespace apigee \
  --atomic \
  -f 1.14_OVERRIDES_FILE
1.14_OVERRIDES_FILE - Create the
apigee-system
namespace.kubectl create namespace apigee-system
- Patch the resource annotation back to the
apigee-system
namespace.kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='apigee-system'
- If you have changed the release name as well, update the annotation with the
operator
release name.kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-name='operator'
- Install
apigee-operator
back into theapigee-system
namespace.helm upgrade operator apigee-operator/ \ --install \ --namespace apigee-system \ --atomic \ -f
1.14_OVERRIDES_FILE - Revert the CRDs by reinstalling the older CRDs.
kubectl apply -k apigee-operator/etc/crds/default/ \ --server-side \ --force-conflicts \ --validate=false
- Clean up the
apigee-operator
release from the APIGEE_NAMESPACE namespace to complete the rollback process.helm uninstall operator -n
APIGEE_NAMESPACE - Some cluster-scoped resources, such as
clusterIssuer
, are deleted whenoperator
is uninstalled. Reinstall them with the following command:helm upgrade operator apigee-operator/ \ --install \ --namespace apigee-system \ --atomic \ -f
1.14_OVERRIDES_FILE
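To confirm the rollback (a sketch), list the releases in both namespaces and check that the reported chart versions are back on v1.14.x:

helm ls -n apigee-system
helm ls -n apigee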