This guide describes how to install and deploy Apigee Edge for Private Cloud and API proxies in an air-gapped Google Distributed Cloud (GDC) environment. GDC air-gapped offerings, including Apigee Edge for Private Cloud, don't require connectivity to Google Cloud to manage infrastructure and services. You can use a local control plane hosted on your premises for all operations. For more information, see the GDC air-gapped overview.
This guide is intended for Apigee operators who are familiar with Apigee Edge for Private Cloud and have a basic understanding of Kubernetes.
Overview of required steps
To install and deploy Apigee Edge for Private Cloud in an air-gapped GDC environment, the operator must complete the following steps:
- Obtain the installation files for Apigee Edge for Private Cloud.
- Set up a storage bucket.
- Set up a repository node.
- Deploy Apigee nodes.
- Set up a control node.
- Configure Ansible.
- Install the Apigee components.
- Deploy pods and services.
- Update the uploader and Fluent Bit forwarder IPs.
- Onboard an Apigee organization.
- Test HTTP connectivity.
- Configure TLS and test HTTPS.
Before you begin
Before you begin the installation process, make sure to complete the following steps:
- Create a GDC project to use for the installation, if you don't already have one. For more information, see Create a project.
- Download, install, and configure the gdcloud CLI on a GDC connected workstation or within your organization's continuous deployment environment.
- Get the credentials required to use the gdcloud CLI and the kubectl API. See Authenticate your account for access for the required steps.
- Confirm the Apigee username and password you received from your Apigee account manager.
- Confirm the name of your GKE admin cluster and the name of your GKE user cluster.
Capacity requirements
Installing Apigee Edge for Private Cloud on GDC requires several virtual machines (VMs) with specific resource allocations. These VMs incur charges based on their compute resources (RAM, vCPU cores) and local disk storage. For more information, see Pricing.
The following table shows the resource requirements for each VM:
| VM type | RAM | vCPU cores | Disk storage |
|---|---|---|---|
| Repo node | 8 GB | 2 | 64 GB |
| Control node | 8 GB | 2 | 64 GB |
| Apigee API management nodes 1, 2, and 3 | 16 GB | 8 | 670 GB |
| Apigee API management nodes 4 and 5 | 16 GB | 8 | 500 GB to 1 TB |
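As a quick sanity check when sizing the project, the per-VM figures above can be totaled with a short script. This tally is illustrative only, not an official sizing tool, and covers the seven GDC VMs in the table (one repo node, one control node, five Apigee nodes):

```shell
# Tally RAM and vCPUs for the seven VMs in the table above:
# repo (8 GB / 2 vCPU), control (8 GB / 2 vCPU), five Apigee nodes (16 GB / 8 vCPU each).
total_ram_gb=$((8 + 8 + 5 * 16))
total_vcpu=$((2 + 2 + 5 * 8))
echo "Total: ${total_ram_gb} GB RAM, ${total_vcpu} vCPUs"
# prints: Total: 96 GB RAM, 44 vCPUs
```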
Roles and permissions
The following roles and permissions are required to deploy Apigee Edge for Private Cloud in an air-gapped GDC environment:
- Platform Administrator (PA): Assign the IAM Admin role.
- Application Operator (AO): Assign the following roles:
  - Harbor Instance Admin: Has full access to manage Harbor instances in a project.
  - LoggingTarget Creator: Creates LoggingTarget custom resources in the project namespace.
  - LoggingTarget Editor: Edits LoggingTarget custom resources in the project namespace.
  - Project Bucket Admin: Manages the storage buckets and objects within buckets.
  - Project Grafana Viewer: Accesses the monitoring instance in the project namespace.
  - Project NetworkPolicy Admin: Manages the project network policies in the project namespace.
  - Project VirtualMachine Admin: Manages the virtual machines in the project namespace.
  - Secret Admin: Manages Kubernetes secrets in projects.
  - Service Configuration Admin: Has read and write access to service configurations within a project namespace.
  - Namespace Admin: Manages all resources within project namespaces.
- Apigee on GDC air-gapped does not come with DNS servers and uses local DNS resolution as a workaround. If Apigee on GDC air-gapped is deployed in an environment with external DNS servers, replace the steps that configure local DNS with configuring DNS entries in the DNS servers.
- Apigee on GDC air-gapped does not include a stand-alone SMTP server. You can configure an SMTP server at any time to enable outbound email notifications for account creation and password resets from the Management Server and Management UI. Management APIs remain available for Apigee user account management. See Configuring the Edge SMTP server for more information.
- Apigee on GDC air-gapped does not implement intrusion detection and prevention. Install and configure an Intrusion Prevention System (IPS), such as Snort, to detect and prevent malicious activities.
- Operating System: Rocky Linux 8
- Machine size: 8GB RAM; 2-vCPU cores; 64GB local disk storage
- Connectivity:
- Ingress: TCP 22 (SSH)
- Egress: Internet
- Check the Apigee Edge for Private Cloud release notes for the latest official release version supported for GDC, as noted in the Edge for Private Cloud column.
- Download the Edge setup file:
curl https://software.apigee.com/apigee/tarball/VERSION/rocky8/archive.tar -o /tmp/archive.tar -u 'APIGEE_USER:APIGEE_PASSWORD'
Where:
- APIGEE_USER is the username you received for the Apigee organization.
- APIGEE_PASSWORD is the password you received for the Apigee organization.
- VERSION is the Apigee Edge for Private Cloud release version for use on GDC you intend to install, for example, 4.53.01.
- Download the latest Apigee Edge for Private Cloud bootstrap_VERSION.sh file to /tmp/bootstrap_VERSION.sh:
curl https://software.apigee.com/bootstrap_VERSION.sh -o /tmp/bootstrap_VERSION.sh
Where VERSION is the latest Apigee Edge for Private Cloud release version for use on GDC you intend to install, for example, 4.53.01.
- Install the Edge apigee-service utility and dependencies:
sudo bash /tmp/bootstrap_VERSION.sh apigeeuser=APIGEE_USER apigeepassword=APIGEE_PASSWORD
Where:
- APIGEE_USER is the username you received for the Apigee organization.
- APIGEE_PASSWORD is the password you received for the Apigee organization.
- VERSION is the Apigee Edge for Private Cloud release version for use on GDC you intend to install.
- Run the setup script on the connected node:
chmod a+x connected-node_setup.sh
./connected-node_setup.sh
In this step, the script generates the required files in the following locations (for example, for version 4.53.01):
/opt/apigee/data/apigee-mirror/apigee-4.53.01.tar.gz
/tmp/apigee-nginx/apigee-nginx.tar
/tmp/fluentbit/fluentbit.tar
/tmp/postgresql14/postgresql14.tar
/tmp/ansible-rpms.tar
/tmp/apigee-repos.tar
- Transfer the required files from the connected node to a local machine via SSH:
mkdir apigee-files
cd apigee-files
for file in /opt/apigee/data/apigee-mirror/apigee-4.53.01.tar.gz /tmp/ansible-rpms.tar /tmp/apigee-nginx/apigee-nginx.tar /tmp/fluentbit/fluentbit.tar /tmp/postgresql14/postgresql14.tar /tmp/apigee-repos.tar; do
  scp -i SSH_PRIVATE_KEY_FILE USER@CONNECTED_NODE_IP:$file .
done
Where:
- SSH_PRIVATE_KEY_FILE is the path to the SSH private key file.
- USER is the username for the connected node.
- CONNECTED_NODE_IP is the IP address of the connected node.
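As an optional integrity check (not part of the official steps), you can record checksums on the connected node and verify them after the transfer. The sketch below illustrates the pattern on a stand-in file; in practice you would checksum the real archives before copying and re-run `sha256sum -c` afterward:

```shell
# Optional: verify that transferred files are intact by comparing checksums.
# Illustrated with a stand-in file instead of the real archives.
printf 'example archive contents' > /tmp/example.tar
sha256sum /tmp/example.tar > /tmp/example.tar.sha256
# After transfer, re-run the check against the recorded checksum:
sha256sum -c /tmp/example.tar.sha256
# prints: /tmp/example.tar: OK
```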
- Authenticate with the org admin cluster:
gdcloud auth login --login-config-cert WEB_TLS_CERT
gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
Where:
- WEB_TLS_CERT is the path to the web TLS certificate.
- ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.
- Set the project and bucket environment variables:
export PROJECT=PROJECT
export BUCKET=BUCKET_NAME
Where:
- PROJECT is the name of your GDC project.
- BUCKET_NAME is the name of the bucket you want to create for storing Apigee Edge for Private Cloud backup files.
- Apply the bucket configuration:
kubectl apply -f - <<EOF
apiVersion: object.gdc.goog/v1
kind: Bucket
metadata:
  name: $BUCKET
  namespace: $PROJECT
spec:
  description: bucket for Apigee backup files
  storageClass: Standard
  bucketPolicy:
    lockingPolicy:
      defaultObjectRetentionDays: 30
EOF
This configuration creates a bucket with a retention period of 30 days.
- Create a service account in the project:
gdcloud iam service-accounts create $BUCKET-sa \
  --project=$PROJECT
- Create the role and role binding to generate a secret for accessing the bucket:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: $BUCKET-role
  namespace: $PROJECT
rules:
- apiGroups:
  - object.gdc.goog
  resourceNames:
  - $BUCKET
  resources:
  - buckets
  verbs:
  - get
  - read-object
  - write-object
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: $BUCKET-rolebinding
  namespace: $PROJECT
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: $BUCKET-role
subjects:
- kind: ServiceAccount
  name: $BUCKET-sa
  namespace: $PROJECT
EOF
- Get the access key ID and key from the secret:
export BUCKET_SECRET=$(kubectl get secret -n $PROJECT -o jsonpath="{range .items[*]}{.metadata.name}{':'}{.metadata.annotations['object\.gdc\.goog/subject']}{'\n'}{end}" | grep $BUCKET | tail -1 | cut -f1 -d :)
echo "access-key-id=$(kubectl get secret -n $PROJECT $BUCKET_SECRET -o jsonpath="{.data['access-key-id']}")"
echo "access-key=$(kubectl get secret -n $PROJECT $BUCKET_SECRET -o jsonpath="{.data['secret-access-key']}")"
The output should look similar to the following:
access-key-id=RFdJMzRROVdWWjFYNTJFTzJaTk0=
access-key=U3dSdm5FRU5WdDhMckRMRW1QRGV0bE9MRHpCZ0Ntc0cxVFJQdktqdg==
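These values are base64-encoded, which is why they can be pasted directly into the `data` fields of the Secret in the next step (Kubernetes stores `data` values base64-encoded). If you need the raw key material for other tooling, decode it, as shown here with the sample access key ID above:

```shell
# Kubernetes Secret `data` values are base64-encoded; decode to see the raw key.
# Sample value taken from the example output above.
echo "RFdJMzRROVdWWjFYNTJFTzJaTk0=" | base64 -d; echo
# prints: DWI34Q9WVZ1X52EO2ZNM
```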
- Create a secret to be used by the uploader in the user GKE cluster:
- Authenticate with the user GKE cluster:
gdcloud clusters get-credentials USER_CLUSTER
Where USER_CLUSTER is the name of the user GKE cluster.
- Apply the secret configuration:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  namespace: $PROJECT
  name: $BUCKET-secret
type: Opaque
data:
  access-key-id: ACCESS_KEY_ID
  access-key: ACCESS_KEY
EOF
Where:
- ACCESS_KEY_ID is the access key ID obtained in the previous step.
- ACCESS_KEY is the access key obtained in the previous step.
- Get the storage endpoint, fully qualified domain name (FQDN), and region of the bucket:
- Authenticate with the org admin cluster:
gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.
- Get the storage endpoint, fully qualified domain name (FQDN), and region of the bucket:
kubectl get buckets ${BUCKET} -n $PROJECT -o jsonpath="{'endpoint: '}{.status.endpoint}{'\n'}{'bucket: '}{.status.fullyQualifiedName}{'\n'}{'region: '}{.status.region}{'\n'}"
The output should look similar to the following:
endpoint: https://objectstorage.gpu-org.cookie.sesame.street bucket: ez9wo-apigee-backup-bucket region: cookie
- Update the following values in the apigee/helm_user_cluster/values-cookie-air-gapped.yaml file:
objectstorekeyname: "apigee-backup-bucket-secret"
objectstoreurl: "BUCKET_ENDPOINT"
objectstorebucket: "BUCKET_FQDN"
Where:
- BUCKET_ENDPOINT is the endpoint of the bucket obtained in the previous step.
- BUCKET_FQDN is the fully qualified domain name of the bucket obtained in the previous step.
- Update apigee/helm_user_cluster/values-cookie-air-gapped.yaml as follows:
repo_node:
  enabled: true
apigee_node:
  enabled: false
control_node:
  enabled: false
Make sure that repo_node is enabled and both apigee_node and control_node are disabled. These nodes are deployed in a later step.
- Get credentials for the org admin cluster:
gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.
- Create a Python virtual environment:
python3 -m venv venv
source venv/bin/activate
- Run the deploy script to create the repository node:
python apigee/solution_deploy.py gdc-air-gapped
- Configure SSH for the repository node:
export NODE=repo
kubectl create -n $PROJECT -f - <<EOF
apiVersion: virtualmachine.gdc.goog/v1
kind: VirtualMachineAccessRequest
metadata:
  generateName: $NODE-
spec:
  ssh:
    key: |
      $(cat SSH_PUBLIC_KEY_FILE)
  ttl: 24h
  user: admin
  vm: $NODE
EOF
Where SSH_PUBLIC_KEY_FILE is the name of the file containing your public SSH key.
- Get the external IP address for the repository node:
kubectl get virtualmachineexternalaccess -n $PROJECT $NODE -ojsonpath='{.status.ingressIP}'
- Get the internal IP address for the repository node:
kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT $NODE -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
apigee-4.53.01
apigee-nginx.tar
postgresql14.tar
fluentbit.tar
ansible-rpms.tar
apigee-repos.tar
- Copy the installation files to the repository node:
scp -i SSH_PRIVATE_KEY_FILE ~/apigee-files/* admin@REPO_EXTERNAL_IP:/tmp
Where:
- SSH_PRIVATE_KEY_FILE is the name of the file containing your private SSH key.
- REPO_EXTERNAL_IP is the external IP address of the repository node obtained in the previous step.
- Upload the folder containing the Fluent Bit configurations to the repository node:
scp -i SSH_PRIVATE_KEY_FILE -r apigee/scripts/fluent-bit admin@REPO_EXTERNAL_IP:/tmp/fluent-bit
Where:
- SSH_PRIVATE_KEY_FILE is the name of the file containing your private SSH key.
- REPO_EXTERNAL_IP is the external IP address of the repository node.
- Copy apigee/scripts/repo_setup.sh to the repository node.
- In the script, replace REPO_USER and REPO_PASSWORD with the desired username and password for the mirror repository.
- Run the script:
chmod a+x repo_setup.sh
./repo_setup.sh
If you encounter a No such file or directory error, rerun the script.
- Test the connection to the mirror repository locally from the repository node:
curl http://REPO_USER:REPO_PASSWORD@REPO_INTERNAL_IP:3939/bootstrap_VERSION.sh -o /tmp/bootstrap_VERSION.sh
curl http://REPO_USER:REPO_PASSWORD@REPO_INTERNAL_IP:3939/apigee/release/VERSION/repodata/repomd.xml
Replace VERSION with the Apigee Edge for Private Cloud version you want to install.
- Replace REPO_INTERNAL_IP, REPO_USER_NAME, and REPO_PASSWORD in apigee/helm/scripts/apigee_setup.sh with the desired values.
- Export the values as environment variables:
export REPO_IP=REPO_INTERNAL_IP
export REPO_USER=REPO_USER_NAME
export REPO_PASSWORD=REPO_PASSWORD
- Enable the apigee_node in apigee/helm/values-cookie-air-gapped.yaml as shown:
apigee_node:
  enabled: true
- Run the deploy script to create the Apigee nodes:
source venv/bin/activate
python apigee/solution_deploy.py gdc-air-gapped
- Authenticate with the org admin cluster:
gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.
- Create an SSH key for each node:
for i in 1 2 3 4 5; do
kubectl create -n $PROJECT -f - <<EOF
apiVersion: virtualmachine.gdc.goog/v1
kind: VirtualMachineAccessRequest
metadata:
  generateName: node$i-
spec:
  ssh:
    key: |
      $(cat SSH_PUBLIC_KEY_FILE)
  ttl: 24h
  user: admin
  vm: node$i
EOF
done
Where SSH_PUBLIC_KEY_FILE is the name of the file containing your public SSH key.
- Get the external IP addresses for the Apigee nodes:
for i in 1 2 3 4 5; do
  kubectl get virtualmachineexternalaccess -n $PROJECT node$i -ojsonpath='{.status.ingressIP}'
  echo
done
- Get the internal IP addresses for the Apigee nodes:
for i in 1 2 3 4 5; do
  kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT node$i -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
  echo
done
- (Optional) Check to see if the startup scripts run successfully on the Apigee nodes:
- SSH to the node and run the following command:
sudo journalctl -u cloud-final -f
- Look for logs similar to the following:
Aug 29 18:17:00 172.20.128.117 cloud-init[1895]: Complete!
Aug 29 18:17:00 172.20.128.117 cloud-init[1895]: Finished running the command: . /var/lib/google/startup-scripts/apigee-setup
- Replace REPO_INTERNAL_IP, REPO_USER_NAME, and REPO_PASSWORD in apigee/helm/scripts/control_setup.sh with the desired values.
- Enable the control_node in apigee/helm/values-cookie-air-gapped.yaml as shown:
control_node:
  enabled: true
- Run the deploy script to create the control node:
source venv/bin/activate
python apigee/solution_deploy.py gdc-air-gapped
- Configure SSH access to the control node:
kubectl create -n $PROJECT -f - <<EOF
apiVersion: virtualmachine.gdc.goog/v1
kind: VirtualMachineAccessRequest
metadata:
  generateName: control-
spec:
  ssh:
    key: |
      $(cat SSH_PUBLIC_KEY_FILE)
  ttl: 24h
  user: admin
  vm: control
EOF
- Get the external IP address for the control node:
kubectl get virtualmachineexternalaccess -n $PROJECT control -ojsonpath='{.status.ingressIP}'
- Get the internal IP for the control node:
kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT control -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
- SSH to the control node and set up the Ansible environment:
cd /home/admin
cp -r /tmp/apigee-repos .
cd apigee-repos/ansible-opdk-accelerator/setup
- Replace the remote git repositories with local files:
sed -i 's/https:\/\/github.com\/carlosfrias/git+file:\/\/\/home\/admin\/apigee-repos/g' requirements.yml
sed -i 's/\.git$//g' requirements.yml
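To see what these two substitutions do, here is their effect on a sample requirements.yml entry (the role name in this example is hypothetical):

```shell
# Effect of the two substitutions on a sample requirements.yml entry
# (the role name here is a made-up example, not from the real file).
line='- src: https://github.com/carlosfrias/apigee-opdk-setup-os.git'
echo "$line" \
  | sed 's/https:\/\/github.com\/carlosfrias/git+file:\/\/\/home\/admin\/apigee-repos/g' \
  | sed 's/\.git$//g'
# prints: - src: git+file:///home/admin/apigee-repos/apigee-opdk-setup-os
```

The first expression rewrites the GitHub URL to a local `git+file://` path under /home/admin/apigee-repos; the second strips the trailing `.git` so the path matches the unpacked repository directories.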
- Install Ansible requirements:
sudo chown -R admin /home/admin/apigee-repos
ansible-galaxy install -r requirements.yml -f
- Update the setup configuration:
- Edit the main.yml file:
vi ~/apigee-repos/ansible-opdk-accelerator/setup/roles/apigee-opdk-setup-ansible-controller/tasks/main.yml
- Remove the tasks that need GitHub access:
- Git SSH checkout of configuration repositories
- Git HTTPS checkout of configuration repositories
- Run the setup playbook:
cd ~/apigee-repos/ansible-opdk-accelerator/setup
ansible-playbook setup.yml
- Upload the SSH key for the Apigee nodes to the control node:
scp -i CONTROL_SSH_PRIVATE_KEY_FILE APIGEE_NODE_SSH_PRIVATE_KEY_FILE admin@CONTROL_EXTERNAL_IP:/home/admin/.ssh/id_rsa
Where:
- CONTROL_SSH_PRIVATE_KEY_FILE is the name of the file containing your control node's SSH private key.
- APIGEE_NODE_SSH_PRIVATE_KEY_FILE is the name of the file containing your Apigee node's SSH private key.
- CONTROL_EXTERNAL_IP is the external IP address of the control node.
- Create the Ansible inventory config file:
- Copy the content of the apigee/scripts/ansible/prod.cfg file to the prod.cfg file:
vi ~/.ansible/multi-planet-configurations/prod.cfg
- Create the edge-dc1 folder and copy the content of the apigee/scripts/ansible/edge-dc1 file to the edge-dc1 file:
mkdir ~/.ansible/inventory/prod
vi ~/.ansible/inventory/prod/edge-dc1
- Update the internal IP addresses of the Apigee nodes in edge-dc1:
apigee_000 ansible_host=APIGEE_NODE1_INTERNAL_IP
apigee_001 ansible_host=APIGEE_NODE2_INTERNAL_IP
apigee_002 ansible_host=APIGEE_NODE3_INTERNAL_IP
apigee_003 ansible_host=APIGEE_NODE4_INTERNAL_IP
apigee_004 ansible_host=APIGEE_NODE5_INTERNAL_IP
Where the values for the APIGEE_NODE*_INTERNAL_IP are the internal IP addresses of the Apigee nodes obtained in an earlier step.
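For illustration, a filled-in host section of edge-dc1 might look like the following (these IPs are RFC 5737 documentation addresses, not real values; use the internal IPs you recorded earlier):

```ini
apigee_000 ansible_host=192.0.2.11
apigee_001 ansible_host=192.0.2.12
apigee_002 ansible_host=192.0.2.13
apigee_003 ansible_host=192.0.2.14
apigee_004 ansible_host=192.0.2.15
```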
- Configure the ~/.apigee-secure/credentials.yml file with the following values:
apigee_repo_user: 'APIGEE_REPO_USER'
apigee_repo_password: 'APIGEE_REPO_PASSWORD'
opdk_qpid_mgmt_username: 'OPDK_QPID_MGMT_USERNAME'
opdk_qpid_mgmt_password: 'OPDK_QPID_MGMT_PASSWORD'
Where:
- APIGEE_REPO_USER is the username for the Apigee repository.
- APIGEE_REPO_PASSWORD is the password for the Apigee repository.
- OPDK_QPID_MGMT_USERNAME is the username for the Apigee QPID management server.
- OPDK_QPID_MGMT_PASSWORD is the password for the Apigee QPID management server.
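As an illustration, a filled-in credentials.yml might look like the following (all values here are hypothetical placeholders; use the mirror credentials you set in repo_setup.sh and QPID credentials of your choosing):

```yaml
# Hypothetical example values; keep the quoting style shown.
apigee_repo_user: 'repoadmin'
apigee_repo_password: 'example-repo-password'
opdk_qpid_mgmt_username: 'qpidadmin'
opdk_qpid_mgmt_password: 'example-qpid-password'
```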
- Add a valid Apigee Edge for Private Cloud license file. The name of the file must be license.txt.
- Copy the content of your license into the ~/.apigee-secure/license.txt file you just created.
- Configure the following values in the ~/.apigee/custom-properties.yml file:
opdk_version: 'OPDK_VERSION'
apigee_repo_url: 'APIGEE_REPO_URL'
Where:
- OPDK_VERSION is the Apigee Edge for Private Cloud version you want to install.
- APIGEE_REPO_URL is the URL of the Apigee repository.
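As an illustration only: assuming the mirror layout implied by the test URLs earlier (port 3939 with an /apigee path), the file might look like the following. This URL shape is an assumption; confirm the exact repository URL against your own mirror setup:

```yaml
# Hypothetical example; verify the repository URL against your mirror configuration.
opdk_version: '4.53.01'
apigee_repo_url: 'http://REPO_INTERNAL_IP:3939/apigee'
```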
- Export the configuration file as an environment variable:
export ANSIBLE_CONFIG=~/.ansible/multi-planet-configurations/prod.cfg
- Replace the remote git repositories with local files:
cd ~/apigee-repos/ansible-opdk-accelerator/installations/multi-node/
sed -i 's/https:\/\/github.com\/carlosfrias/git+file:\/\/\/home\/admin\/apigee-repos/g' requirements.yml
sed -i 's/\.git$//g' requirements.yml
- Install the Ansible requirements:
ansible-galaxy install -r requirements.yml -f
- Patch the Ansible roles:
sed -i 's/private_address/inventory_hostname/g' ~/.ansible/roles/apigee-opdk-settings-cassandra/tasks/main.yml
sed -i 's/include/include_tasks/g' ~/.ansible/roles/apigee-opdk-server-self/tasks/main.yml
sed -i 's/include/include_tasks/g' ~/.ansible/roles/apigee-opdk-setup-silent-installation-config/tasks/main.yml
cat << EOF >> ~/.ansible/roles/apigee-opdk-setup-silent-installation-config/templates/response-file-template.conf.j2
QPID_MGMT_USERNAME={{ opdk_qpid_mgmt_username }}
QPID_MGMT_PASSWORD={{ opdk_qpid_mgmt_password }}
EOF
sed -i 's/mode: 0700/mode: 0700\n recurse: yes/g' ~/.ansible/roles/apigee-opdk-setup-postgres-config/tasks/main.yml
- Replace the contents of the install.yml file with the content in apigee/scripts/ansible/install.yml.
- Run the playbook to install the Apigee components:
ansible-playbook install.yml
- Disable the user's password reset link in the Edge UI. Apigee on GDC air-gapped does not include an SMTP server. Follow the steps in Disable the reset password link in the Edge UI.
- Follow the instructions in Create Harbor registry instances to create a Harbor instance in the GDC project dev-apigee.
- Follow the instructions in Create Harbor projects to create a Harbor project named apigee.
- Follow the instructions in Configure access control to set up access control for the Harbor project.
- Follow the instructions in Sign into Docker and Helm to configure Docker authentication.
- Update the IP addresses in the apigee/apigee_user_cluster.toml file as shown:
mgmt-server-proxy = "APIGEE_NODE1_EXTERNAL_IP"
router-proxy1 = "APIGEE_NODE2_EXTERNAL_IP"
router-proxy2 = "APIGEE_NODE3_EXTERNAL_IP"
Where:
- APIGEE_NODE1_EXTERNAL_IP is the external IP address of the Apigee node1 obtained in an earlier step.
- APIGEE_NODE2_EXTERNAL_IP is the external IP address of the Apigee node2 obtained in an earlier step.
- APIGEE_NODE3_EXTERNAL_IP is the external IP address of the Apigee node3 obtained in an earlier step.
- Place the SSL certificate file (server.crt) and key file (server.key) for configuring HTTPS under the apigee/mgmt-server-proxy and apigee/router-proxy folders. To generate self-signed certificates, use the following command:
openssl req -newkey rsa:4096 -x509 -nodes -keyout server.key -new -out server.crt -subj "/CN=*.apigeetest.com" -sha256 -days 365
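To confirm that a generated certificate carries the expected wildcard subject before placing it in the proxy folders, you can inspect it with openssl. This sketch uses a throwaway key pair and temporary file paths of its own, so it does not touch your real server.crt:

```shell
# Generate a throwaway self-signed cert and confirm its subject (sketch only;
# /tmp/check.* are scratch files, not the real server.crt / server.key).
openssl req -newkey rsa:2048 -x509 -nodes -keyout /tmp/check.key -new -out /tmp/check.crt \
  -subj "/CN=*.apigeetest.com" -sha256 -days 1
openssl x509 -in /tmp/check.crt -noout -subject
```

The subject line printed should contain `*.apigeetest.com`; the same `openssl x509 -noout -subject` check works on the real server.crt.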
- Update the value of SSH_PASSWORD for the root user in the uploader container in the apigee/uploader/Dockerfile file:
RUN echo 'root:SSH_PASSWORD' | chpasswd
- Get the credentials for the user cluster:
gdcloud clusters get-credentials USER_CLUSTER
Where USER_CLUSTER is the name of the user GKE cluster.
- Run the deploy script to deploy the pods and services:
source venv/bin/activate
python apigee/solution_deploy_user_cluster.py gdc-air-gapped
- Get the external IP addresses of the services:
kubectl get svc -n $PROJECT
- Update SSH_PASSWORD, root, and UPLOADER_EXTERNAL_IP in apigee/helm/scripts/backup_setup.sh:
sshpass -p SSH_PASSWORD scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null apigee-backup* root@UPLOADER_EXTERNAL_IP:/temp/
Where:
- SSH_PASSWORD is the password for the root user.
- UPLOADER_EXTERNAL_IP is the external IP address of the uploader service obtained in an earlier step.
- Update FLUENTBIT_EXTERNAL_IP in apigee/helm/scripts/apigee_setup.sh:
export FLUENTBIT_EXTERNAL_IP=FLUENTBIT_EXTERNAL_IP
- Stop the VMs:
gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
export PROJECT=dev-apigee
for i in 1 2 3 4 5; do
  gdcloud compute instances stop node$i --project $PROJECT
done
- Redeploy the Helm chart:
python apigee/solution_deploy.py gdc-air-gapped
- Start the VMs:
for i in 1 2 3 4 5; do
  gdcloud compute instances start node$i --project $PROJECT
done
- Update the following values in apigee/scripts/apigee_org_setup.sh as shown. Update other parameters as needed.
IP1=APIGEE_NODE1_INTERNAL_IP
VHOST_ALIAS="APIGEE_NODE2_EXTERNAL_IP:9001 APIGEE_NODE3_EXTERNAL_IP:9001"
Where:
- APIGEE_NODE1_INTERNAL_IP is the internal IP address of the Apigee node1 obtained in an earlier step.
- APIGEE_NODE2_EXTERNAL_IP is the external IP address of the Apigee node2 obtained in an earlier step.
- APIGEE_NODE3_EXTERNAL_IP is the external IP address of the Apigee node3 obtained in an earlier step.
- Run the script on node1 to onboard the organization:
chmod a+x apigee_org_setup.sh
./apigee_org_setup.sh
- Get the external IP address of the apigee-elb service:
gdcloud clusters get-credentials USER_CLUSTER
export PROJECT=dev-apigee
kubectl get svc apigee-elb -n $PROJECT
Where USER_CLUSTER is the name of the user GKE cluster.
This service acts as the endpoint for the Edge UI, Management API, and API proxy.
- APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
- ORG_NAME is the name of the Apigee organization.
- ENV_NAME is the name of the Apigee environment.
- Log in to the Edge UI.
- On the API Proxies page, click Create to create a new API proxy.
- In the Proxy details page, enter the following values:
- Proxy type: Select No target.
- Proxy name: ok
- Base path: /ok
- Target: http://APIGEE_ELB_EXTERNAL_IP:9001
- Click Create to create the API proxy.
- Send an HTTP request to /ok:
curl -i http://APIGEE_ELB_EXTERNAL_IP:9001/ok
- Confirm the response is 200 OK.
- Follow the steps in Create a keystore/truststore and alias to create a self-signed certificate with the following values:
- KeyStore: myTestKeystore
- KeyAlias: myKeyAlias
- Common Name: apigeetest.com
- Make an API call to create the virtual host with the host alias api.apigeetest.com:
curl -v -H "Content-Type:application/xml" \
  -u "opdk@apigee.com:Apigee123!" "http://APIGEE_ELB_EXTERNAL_IP:8080/v1/o/ORG_NAME/e/ENV_NAME/virtualhosts" \
  -d '<VirtualHost name="secure">
    <HostAliases>
      <HostAlias>api.apigeetest.com</HostAlias>
    </HostAliases>
    <Interfaces/>
    <Port>9005</Port>
    <OCSPStapling>off</OCSPStapling>
    <SSLInfo>
      <Enabled>true</Enabled>
      <ClientAuthEnabled>false</ClientAuthEnabled>
      <KeyStore>myTestKeystore</KeyStore>
      <KeyAlias>myKeyAlias</KeyAlias>
    </SSLInfo>
  </VirtualHost>'
Where:
- APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
- ORG_NAME is the name of the Apigee organization.
- ENV_NAME is the name of the Apigee environment where the virtual host should be created.
- Create an API proxy using the secure virtual host.
- On the routers, configure DNS resolution for virtual hosts:
echo '127.0.0.1 api.apigeetest.com' | sudo tee -a /etc/hosts
- Confirm that the virtual hosts work locally by sending an HTTPS request to the endpoint:
curl https://api.apigeetest.com:9005/ok -v -k
- Configure DNS resolution for the endpoint:
echo 'APIGEE_ELB_EXTERNAL_IP apigeetest.com' | sudo tee -a /etc/hosts
Where APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
- Navigate to https://apigeetest.com/ok in a web browser and confirm that it works.
- Generate a keystore file from the SSL certificate file and key file:
openssl pkcs12 -export -clcerts -in server.crt -inkey server.key -out keystore.pkcs12
keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype jks
- Place the keystore file under the Apigee folder on node1:
scp -i SSH_PRIVATE_KEY_FILE keystore.jks admin@APIGEE_NODE1_EXTERNAL_IP:/home/admin/
Where:
- SSH_PRIVATE_KEY_FILE is the name of the file containing your Apigee node's SSH private key.
- APIGEE_NODE1_EXTERNAL_IP is the external IP address of the Apigee node1 obtained in an earlier step.
- SSH to node1 and move the keystore file to the Apigee folder:
sudo mv keystore.jks /opt/apigee/customer/application/
- Create the SSL config file:
sudo vi /tmp/sslConfigFile
- Update the value of KEY_PASS_PHRASE as shown:
HTTPSPORT=9443
DISABLE_HTTP=n
KEY_ALGO=JKS
KEY_FILE_PATH=/opt/apigee/customer/application/keystore.jks
KEY_PASS=KEY_PASS_PHRASE
- Configure SSL using the config file:
sudo chown apigee:apigee /tmp/sslConfigFile
/opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f /tmp/sslConfigFile
- Configure DNS resolution for the Edge UI:
echo 'APIGEE_ELB_EXTERNAL_IP ui.apigeetest.com' | sudo tee -a /etc/hosts
Where APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
- Access https://ui.apigeetest.com:9443 in a web browser and confirm that it works. For more details, refer to the guide.
- Configure the owner for the keystore file (use the same owner as the Edge UI keystore):
sudo chown apigee:apigee /opt/apigee/customer/application/keystore.jks
- Create the properties file:
sudo vi /opt/apigee/customer/application/management-server.properties
- Restart the management server for the changes to take effect:
/opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
- Confirm that HTTPS works locally:
curl -u "opdk@apigee.com:Apigee123!" "https://localhost:8443/v1/users" -k
- From the client, access https://apigeetest.com:8443/v1/users in the browser. Enter the admin username and password to confirm that the credentials are configured correctly.
To learn more about granting GDC air-gapped roles and permissions, see Grant and revoke access.
Limitations
The following limitations apply to Apigee on GDC air-gapped:
Get the required files
To get the installation files, you must first set up a connected node and then download the files.
Set up a connected node
The connected node is a single VM outside of GDC that you use to download the installation files. This VM requires internet access and is only used for the installation process.
The connected node requires the following capacity and configuration:
To create the connected node, follow the instructions in Create and start a VM instance. Once the VM is created, follow the instructions in Connect to Linux VMs to connect to the VM. See GDC supported operating systems for a list of supported operating systems.
Download installation files
To download the installation files:
Set up the storage bucket
In this step, the GDC operator sets up a storage bucket in the GDC project to store Apigee Edge for Private Cloud backup files.
Create a storage bucket
To create a storage bucket in the GDC project:
Configure bucket access
To configure access to the storage bucket:
Set up the repository node
In this step, the GDC operator sets up a repository node to host the Apigee Edge for Private Cloud mirror repository.
Create a repository node
To create a repository node:
Configure repository node access
To configure access to the repository node:
Upload installation files
In this step, the GDC operator uploads the latest version of the following files to the repository node:
To upload the installation files:
Configure the mirror repository
To configure the mirror repository:
Deploy Apigee nodes
In this step, the GDC operator deploys the Apigee API management nodes.
Create Apigee nodes
To create the Apigee API management nodes:
Configure Apigee node access
Configure access to the Apigee API management nodes:
Set up the control node
In this step, the GDC operator sets up a control node to manage Apigee installations.
Create a control node
To create a control node:
Configure control node access
To configure control node access:
Configure Ansible
In this step, the GDC operator sets up the environment on the control node.
To configure the Ansible environment:
Install the Apigee components
In this step, the GDC operator installs the Apigee components using Ansible.
To install the Apigee components:
Deploy pods and services
In this step, you'll deploy the uploader, reverse proxy, load balancer, and logging pods and services.
To deploy the pods and services:
Update uploader and Fluent Bit forwarder IPs
In this step, you'll update the uploader and Fluent Bit forwarder IPs in the backup and Apigee setup scripts.
Updating the startup script requires restarting the Apigee nodes. To restart the nodes:
Onboard the Apigee organization
In this step, the GDC operator onboards the Apigee organization by running a setup script on node1.
To onboard the Apigee organization:
Test HTTP connectivity
In this step, you'll test HTTP connectivity for the Management API and API proxy.
To test HTTP connectivity:
Test the Management API
To test the Management API, send an HTTP request to the endpoint:
curl -u "opdk@apigee.com:Apigee123!" "http://APIGEE_ELB_EXTERNAL_IP:8080/v1/o/ORG_NAME/e/ENV_NAME/provisioning/axstatus"
Where:
Test the API proxy
To test the API proxy:
Configure TLS and test HTTPS
In this step, the GDC operator configures Transport Layer Security (TLS) for the API proxy, Edge UI, and Management API.
Configure TLS for the API proxy
Configure TLS for the Edge UI
To configure TLS for the Edge UI:
Configure TLS for the Management API
To configure TLS for the Management API:
Replace the value of KEY_PASS_PHRASE with the keystore password, as shown:
conf_webserver_ssl.enabled=true
# Leave conf_webserver_http.turn.off set to false
# because many Edge internal calls use HTTP.
conf_webserver_http.turn.off=false
conf_webserver_ssl.port=8443
conf_webserver_keystore.path=/opt/apigee/customer/application/keystore.jks
# Enter the obfuscated keystore password below.
conf_webserver_keystore.password=KEY_PASS_PHRASE