Install Apigee on Google Distributed Cloud air-gapped

This guide describes how to install and deploy Apigee Edge for Private Cloud and API proxies in an air-gapped Google Distributed Cloud (GDC) environment. GDC air-gapped offerings, including Apigee Edge for Private Cloud, don't require connectivity to Google Cloud to manage infrastructure and services. You can use a local control plane hosted on your premises for all operations. For more information about GDC air-gapped, see the overview.

This guide is intended for Apigee operators who are familiar with Apigee Edge for Private Cloud and have a basic understanding of Kubernetes.

Overview of required steps

To install and deploy Apigee Edge for Private Cloud in an air-gapped GDC environment, the operator must complete the steps described in the following sections.

Before you begin

Before you begin the installation process, make sure to complete the following steps:

  1. Create a GDC project to use for the installation, if you don't already have one. For more information, see Create a project.
  2. Download, install, and configure the gdcloud CLI on a GDC-connected workstation or within your organization's continuous deployment environment.
  3. Get the credentials required to use the gdcloud CLI and kubectl API. See Authenticate your account for access for the required steps.
  4. Confirm the Apigee username and password you received from your Apigee account manager.
  5. Confirm the name of your GKE admin cluster and the name of your GKE user cluster.

Capacity requirements

Installing Apigee Edge for Private Cloud on GDC requires several virtual machines (VMs) with specific resource allocations. These VMs incur charges based on their compute resources (RAM, vCPU cores) and local disk storage. For more information, see Pricing.

The following table shows the resource requirements for each VM:

VM type                                  RAM     vCPU cores   Disk storage
Repo node                                8 GB    2            64 GB
Control node                             8 GB    2            64 GB
Apigee API management nodes 1, 2, and 3  16 GB   8            670 GB
Apigee API management nodes 4 and 5      16 GB   8            500 GB - 1 TB

Roles and permissions

The following roles and permissions are required to deploy Apigee Edge for Private Cloud in an air-gapped GDC environment:

  • Platform Administrator (PA): Assign the IAM Admin role.
  • Application Operator (AO): Assign the following roles:
    • Harbor Instance Admin: Has full access to manage Harbor instances in a project.
    • LoggingTarget Creator: Creates LoggingTarget custom resources in the project namespace.
    • LoggingTarget Editor: Edits LoggingTarget custom resources in the project namespace.
    • Project Bucket Admin: Manages the storage buckets and objects within buckets.
    • Project Grafana Viewer: Accesses the monitoring instance in the project namespace.
    • Project NetworkPolicy Admin: Manages the project network policies in the project namespace.
    • Project VirtualMachine Admin: Manages the virtual machines in the project namespace.
    • Secret Admin: Manages Kubernetes secrets in projects.
    • Service Configuration Admin: Has read and write access to service configurations within a project namespace.
    • Namespace Admin: Manages all resources within project namespaces.
  • To learn more about granting GDC air-gapped roles and permissions, see Grant and revoke access.

    Limitations

    The following limitations apply to Apigee on GDC air-gapped:

    • Apigee on GDC air-gapped does not come with DNS servers and uses local DNS resolution as a workaround (see the example after this list). If Apigee on GDC air-gapped is deployed in an environment with external DNS servers, replace the steps that configure local DNS with configuring DNS entries in those DNS servers.
    • Apigee on GDC air-gapped does not include a stand-alone SMTP server. You can configure an SMTP server at any time to enable outbound email notifications for account creation and password resets from the Management Server and Management UI. Management APIs remain available for Apigee user account management. See Configuring the Edge SMTP server for more information.
    • Apigee on GDC air-gapped does not implement intrusion detection and prevention. Install and configure an Intrusion Prevention System (IPS), such as Snort, to detect and prevent malicious activities.
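
    For example, local DNS resolution typically means adding host entries on the machines that need to resolve Apigee hostnames, as the TLS steps later in this guide do. A minimal sketch, assuming an external IP that you obtain later in this guide and the example hostname apigeetest.com:

      # Example only: map an external IP to a hostname when no DNS server is available.
      echo 'APIGEE_ELB_EXTERNAL_IP apigeetest.com' | sudo tee -a /etc/hosts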

    Get the required files

    To get the installation files, you must first set up a connected node and then download the files.

    Set up a connected node

    The connected node is a single VM outside of GDC that you use to download the installation files. This VM requires internet access and is only used for the installation process.

    The connected node requires the following capacity and configuration:

    • Operating System: Rocky Linux 8
    • Machine size: 8GB RAM; 2-vCPU cores; 64GB local disk storage
    • Connectivity:
      • Ingress: TCP 22 (SSH)
      • Egress: Internet

    To create the connected node, follow the instructions in Create and start a VM instance. Once the VM is created, follow the instructions in Connect to Linux VMs to connect to the VM. See GDC supported operating systems for a list of supported operating systems.

    Download installation files

    To download the installation files:

    1. Check the Apigee Edge for Private Cloud release notes for the latest official release version supported for GDC, as noted in the Edge for Private Cloud column.
    2. Download the Edge setup file:
      curl https://software.apigee.com/apigee/tarball/VERSION/rocky8/archive.tar -o /tmp/archive.tar -u 'APIGEE_USER:APIGEE_PASSWORD'

      Where:

      • APIGEE_USER is the username you received for the Apigee organization.
      • APIGEE_PASSWORD is the password you received for the Apigee organization.
      • VERSION is the Apigee Edge for Private Cloud release version for use on GDC you intend to install, for example, 4.53.01.
    3. Download the latest Apigee Edge for Private Cloud bootstrap_VERSION.sh file to /tmp/bootstrap_VERSION.sh:
      curl https://software.apigee.com/bootstrap_VERSION.sh -o /tmp/bootstrap_VERSION.sh

      Where VERSION is the latest Apigee Edge for Private Cloud release version for use on GDC you intend to install, for example, 4.53.01.

    4. Install the Edge apigee-service utility and dependencies:
      sudo bash /tmp/bootstrap_VERSION.sh apigeeuser=APIGEE_USER apigeepassword=APIGEE_PASSWORD

      Where:

      • APIGEE_USER is the username you received for the Apigee organization.
      • APIGEE_PASSWORD is the password you received for the Apigee organization.
      • VERSION is the Apigee Edge for Private Cloud release version for use on GDC you intend to install.

    5. Run the setup script on the connected node:
      chmod a+x connected-node_setup.sh
      ./connected-node_setup.sh

      In this step, the script generates the required files in the following locations (for example, for version 4.53.01):

      • /opt/apigee/data/apigee-mirror/apigee-4.53.01.tar.gz
      • /tmp/apigee-nginx/apigee-nginx.tar
      • /tmp/fluentbit/fluentbit.tar
      • /tmp/postgresql14/postgresql14.tar
      • /tmp/ansible-rpms.tar
      • /tmp/apigee-repos.tar
    6. Transfer the required files from the connected node to a local machine via SSH:
      mkdir apigee-files
      cd apigee-files
      for file in /opt/apigee/data/apigee-mirror/apigee-4.53.01.tar.gz /tmp/ansible-rpms.tar /tmp/apigee-nginx/apigee-nginx.tar /tmp/fluentbit/fluentbit.tar /tmp/postgresql14/postgresql14.tar /tmp/apigee-repos.tar; do
        scp -i SSH_PRIVATE_KEY_FILE USER@CONNECTED_NODE_IP:$file .
        done

      Where:

      • SSH_PRIVATE_KEY_FILE is the path to the SSH private key file.
      • USER is the username for the connected node.
      • CONNECTED_NODE_IP is the IP address of the connected node.
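
    For example, assuming release 4.53.01 and placeholder credentials, the download and bootstrap commands in this section look like the following sketch; substitute your own values:

      # Example values only; replace with the credentials from your Apigee account manager.
      export VERSION=4.53.01
      export APIGEE_USER=user@example.com
      export APIGEE_PASSWORD='your-password'
      curl https://software.apigee.com/apigee/tarball/$VERSION/rocky8/archive.tar -o /tmp/archive.tar -u "$APIGEE_USER:$APIGEE_PASSWORD"
      curl https://software.apigee.com/bootstrap_$VERSION.sh -o /tmp/bootstrap_$VERSION.sh
      sudo bash /tmp/bootstrap_$VERSION.sh apigeeuser=$APIGEE_USER apigeepassword=$APIGEE_PASSWORD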

    Set up the storage bucket

    In this step, the GDC operator sets up a storage bucket in the GDC project to store Apigee Edge for Private Cloud backup files.

    Create a storage bucket

    To create a storage bucket in the GDC project:

    1. Authenticate with the org admin cluster:
      gdcloud auth login --login-config-cert WEB_TLS_CERT
      gdcloud clusters get-credentials ORG_ADMIN_CLUSTER

      Where:

      • WEB_TLS_CERT is the path to the web TLS certificate.
      • ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.

    2. Set the project and bucket environment variables:
      export PROJECT=PROJECT
      export BUCKET=BUCKET_NAME

      Where:

      • PROJECT is the name of your GDC project.
      • BUCKET_NAME is the name of the bucket you want to create for storing Apigee Edge for Private Cloud backup files.
    3. Apply the bucket configuration:
      kubectl apply -f - <<EOF
      apiVersion: object.gdc.goog/v1
      kind: Bucket
      metadata:
        name: $BUCKET
        namespace: $PROJECT
      spec:
        description: bucket for Apigee backup files
        storageClass: Standard
        bucketPolicy:
          lockingPolicy:
            defaultObjectRetentionDays: 30
      EOF

      This configuration creates a bucket with a retention period of 30 days.
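
    To confirm that the bucket was created, you can list it with kubectl; this is the same Bucket resource that is queried for its endpoint later in this guide:

      kubectl get buckets $BUCKET -n $PROJECT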

    Configure bucket access

    To configure access to the storage bucket:

    1. Create a service account in the project:
      gdcloud iam service-accounts create $BUCKET-sa \
          --project=$PROJECT
    2. Create the role and role binding to generate a secret for accessing the bucket:
      kubectl apply -f - <<EOF
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: $BUCKET-role
        namespace: $PROJECT
      rules:
      - apiGroups:
        - object.gdc.goog
        resourceNames:
        - $BUCKET
        resources:
        - buckets
        verbs:
        - get
        - read-object
        - write-object
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: $BUCKET-rolebinding
        namespace: $PROJECT
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: $BUCKET-role
      subjects:
      - kind: ServiceAccount
        name: $BUCKET-sa
        namespace: $PROJECT
      EOF
    3. Get the access key ID and key from the secret:
      export BUCKET_SECRET=$(kubectl get secret -n $PROJECT -o jsonpath="{range .items[*]}{.metadata.name}{':'}{.metadata.annotations['object\.gdc\.goog/subject']}{'\n'}{end}" | grep $BUCKET | tail -1 | cut -f1 -d :)
      echo "access-key-id=$(kubectl get secret -n $PROJECT $BUCKET_SECRET -o jsonpath="{.data['access-key-id']}")"
      echo "access-key=$(kubectl get secret -n $PROJECT $BUCKET_SECRET -o jsonpath="{.data['secret-access-key']}")"

      The output should look similar to the following:

      access-key-id=RFdJMzRROVdWWjFYNTJFTzJaTk0=
      access-key=U3dSdm5FRU5WdDhMckRMRW1QRGV0bE9MRHpCZ0Ntc0cxVFJQdktqdg==
    4. Create a secret to be used by the uploader in the user GKE cluster:
      1. Authenticate with the user GKE cluster:
        gdcloud clusters get-credentials USER_CLUSTER

        Where USER_CLUSTER is the name of the user GKE cluster.

      2. Apply the secret configuration:
        kubectl apply -f - <<EOF
        apiVersion: v1
        kind: Secret
        metadata:
          namespace: $PROJECT
          name: $BUCKET-secret
        type: Opaque
        data:
          access-key-id: ACCESS_KEY_ID
          access-key: ACCESS_KEY
        EOF

        Where:

        • ACCESS_KEY_ID is the access key ID obtained in the previous step.
        • ACCESS_KEY is the access key obtained in the previous step.
    5. Get the storage endpoint, fully qualified domain name (FQDN), and region of the bucket:
      1. Authenticate with the org admin cluster:
        gdcloud clusters get-credentials ORG_ADMIN_CLUSTER

        Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.

      2. Get the storage endpoint, fully qualified domain name (FQDN), and region of the bucket:
        kubectl get buckets ${BUCKET} -n $PROJECT -o jsonpath="{'endpoint: '}{.status.endpoint}{'\n'}{'bucket: '}{.status.fullyQualifiedName}{'\n'}{'region: '}{.status.region}{'\n'}"

        The output should look similar to the following:

        endpoint: https://objectstorage.gpu-org.cookie.sesame.street
        bucket: ez9wo-apigee-backup-bucket
        region: cookie
    6. Update the following values in the apigee/helm_user_cluster/values-cookie-air-gapped.yaml file:
      objectstorekeyname: "apigee-backup-bucket-secret"
      objectstoreurl: "BUCKET_ENDPOINT"
      objectstorebucket: "BUCKET_FQDN"

      Where:

      • BUCKET_ENDPOINT is the endpoint of the bucket obtained in the previous step.
      • BUCKET_FQDN is the fully qualified domain name of the bucket obtained in the previous step.
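
    As a quick check, you can confirm that the uploader secret exists in the user GKE cluster. A sketch, assuming the names used in the previous steps:

      gdcloud clusters get-credentials USER_CLUSTER
      kubectl get secret $BUCKET-secret -n $PROJECT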

    Set up the repository node

    In this step, the GDC operator sets up a repository node to host the Apigee Edge for Private Cloud mirror repository.

    Create a repository node

    To create a repository node:

    1. Update apigee/helm_user_cluster/values-cookie-air-gapped.yaml as follows:
      repo_node:
        enabled: true
      
      apigee_node:
        enabled: false
      
      control_node:
        enabled: false

      Make sure that the repo_node is enabled and both the apigee_node and control_node are disabled. These nodes are deployed in a later step.

    2. Get credentials for the org admin cluster:
      gdcloud clusters get-credentials ORG_ADMIN_CLUSTER

      Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.

    3. Create a Python virtual environment:
      python3 -m venv venv
      source venv/bin/activate
    4. Run the deploy script to create the repository node:
      python apigee/solution_deploy.py gdc-air-gapped
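
    To confirm that the repository node VM was created, you can list it; this is the same resource type used to look up its IP addresses in the next section:

      kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT repo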

    Configure repository node access

    To configure access to the repository node:

    1. Configure SSH for the repository node:
      export NODE=repo
      kubectl create -n $PROJECT -f - <<EOF
      apiVersion: virtualmachine.gdc.goog/v1
      kind: VirtualMachineAccessRequest
      metadata:
        generateName: $NODE-
      spec:
        ssh:
          key: |
            $(cat SSH_PUBLIC_KEY_FILE)
        ttl: 24h
        user: admin
        vm: $NODE
      EOF

      Where SSH_PUBLIC_KEY_FILE is the name of the file containing your public SSH key.

    2. Get the external IP address for the repository node:
      kubectl get virtualmachineexternalaccess -n $PROJECT $NODE -ojsonpath='{.status.ingressIP}'
    3. Get the internal IP address for the repository node:
      kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT $NODE -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
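
    Because later steps refer to these addresses repeatedly, you may want to capture them in environment variables. A convenience sketch using the same queries:

      export REPO_EXTERNAL_IP=$(kubectl get virtualmachineexternalaccess -n $PROJECT $NODE -ojsonpath='{.status.ingressIP}')
      export REPO_INTERNAL_IP=$(kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT $NODE -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}')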

    Upload installation files

    In this step, the GDC operator uploads the latest version of the following files to the repository node:

    • apigee-4.53.01.tar.gz
    • apigee-nginx.tar
    • postgresql14.tar
    • fluentbit.tar
    • ansible-rpms.tar
    • apigee-repos.tar

    To upload the installation files:

    1. Copy the installation files to the repository node:
      scp -i SSH_PRIVATE_KEY_FILE ~/apigee-files/* admin@REPO_EXTERNAL_IP:/tmp

      Where:

      • SSH_PRIVATE_KEY_FILE is the name of the file containing your private SSH key.
      • REPO_EXTERNAL_IP is the external IP address of the repository node obtained in the previous step.
    2. Upload the folder containing the Fluent Bit configurations to the repository node:
      scp -i SSH_PRIVATE_KEY_FILE -r apigee/scripts/fluent-bit admin@REPO_EXTERNAL_IP:/tmp/fluent-bit

      Where:

      • SSH_PRIVATE_KEY_FILE is the name of the file containing your private SSH key.
      • REPO_EXTERNAL_IP is the external IP address of the repository node.

    Configure the mirror repository

    To configure the mirror repository:

    1. Copy apigee/scripts/repo_setup.sh to the repository node.
    2. In the script, replace REPO_USER and REPO_PASSWORD with the desired username and password for the mirror repository.
    3. Run the script:
      chmod a+x repo_setup.sh
      ./repo_setup.sh

      If you encounter a No such file or directory error, rerun the script.

    4. Test the connection to the mirror repository locally from the repository node:
      curl http://REPO_USER:REPO_PASSWORD@REPO_INTERNAL_IP:3939/bootstrap_VERSION.sh -o /tmp/bootstrap_VERSION.sh
      curl http://REPO_USER:REPO_PASSWORD@REPO_INTERNAL_IP:3939/apigee/release/VERSION/repodata/repomd.xml

      Replace VERSION with the Apigee Edge for Private Cloud version you want to install.

    Deploy Apigee nodes

    In this step, the GDC operator deploys the Apigee API management nodes.

    Create Apigee nodes

    To create the Apigee API management nodes:

    1. Replace REPO_INTERNAL_IP, REPO_USER_NAME, and REPO_PASSWORD in apigee/helm/scripts/apigee_setup.sh with the desired values.
    2. Export the values as environment variables:
      export REPO_IP=REPO_INTERNAL_IP
      export REPO_USER=REPO_USER_NAME
      export REPO_PASSWORD=REPO_PASSWORD
    3. Enable the apigee_node in apigee/helm/values-cookie-air-gapped.yaml as shown:
      apigee_node:
        enabled: true
      
    4. Run the deploy script to create the Apigee nodes:
        source venv/bin/activate
        python apigee/solution_deploy.py gdc-air-gapped
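
    After the deploy script completes, you can confirm that the five Apigee node VMs exist. A quick check, run after authenticating with the org admin cluster:

      for i in 1 2 3 4 5; do
        kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT node$i
        done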

    Configure Apigee node access

    To configure access to the Apigee API management nodes:

    1. Authenticate with the org admin cluster:
      gdcloud clusters get-credentials ORG_ADMIN_CLUSTER

      Where ORG_ADMIN_CLUSTER is the name of the org admin GKE cluster.

    2. Create an SSH key for each node:
      for i in 1 2 3 4 5; do
        kubectl create -n $PROJECT -f - <<EOF
      apiVersion: virtualmachine.gdc.goog/v1
      kind: VirtualMachineAccessRequest
      metadata:
        generateName: node$i-
      spec:
        ssh:
          key: |
            $(cat SSH_PUBLIC_KEY_FILE)
        ttl: 24h
        user: admin
        vm: node$i
      EOF
        done

      Where SSH_PUBLIC_KEY_FILE is the name of the file containing your public SSH key.

    3. Get the external IP addresses for the Apigee nodes:
      for i in 1 2 3 4 5; do
        kubectl get virtualmachineexternalaccess -n $PROJECT node$i -ojsonpath='{.status.ingressIP}'
        echo
        done
    4. Get the internal IP addresses for the Apigee nodes:
      for i in 1 2 3 4 5; do
        kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT node$i -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
        echo
        done
    5. (Optional) Check that the startup scripts ran successfully on the Apigee nodes:
      1. SSH to the node and run the following command:
        sudo journalctl -u cloud-final -f
      2. Look for logs similar to the following:
        Aug 29 18:17:00 172.20.128.117 cloud-init[1895]: Complete!
        Aug 29 18:17:00 172.20.128.117 cloud-init[1895]: Finished running the command: . /var/lib/google/startup-scripts/apigee-setup
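
    The internal addresses gathered in this section populate the Ansible inventory later, and the external addresses are used for the proxy and virtual host configuration. A convenience sketch that labels each address so they are easier to match up:

      for i in 1 2 3 4 5; do
        echo "node$i internal: $(kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT node$i -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}') external: $(kubectl get virtualmachineexternalaccess -n $PROJECT node$i -ojsonpath='{.status.ingressIP}')"
        done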

    Set up the control node

    In this step, the GDC operator sets up a control node to manage Apigee installations.

    Create a control node

    To create a control node:

    1. Replace REPO_INTERNAL_IP, REPO_USER_NAME, and REPO_PASSWORD in apigee/helm/scripts/control_setup.sh with the desired values.
    2. Enable the control_node in apigee/helm/values-cookie-air-gapped.yaml as shown:
      control_node:
        enabled: true
      
    3. Run the deploy script to create the control node:
      source venv/bin/activate
      python apigee/solution_deploy.py gdc-air-gapped

    Configure control node access

    To configure control node access:

    1. Configure SSH access to the control node:
      kubectl create -n $PROJECT -f - <<EOF
      apiVersion: virtualmachine.gdc.goog/v1
      kind: VirtualMachineAccessRequest
      metadata:
        generateName: control-
      spec:
        ssh:
          key: |
            $(cat SSH_PUBLIC_KEY_FILE)
        ttl: 24h
        user: admin
        vm: control
      EOF
    2. Get the external IP address for the control node:
      kubectl get virtualmachineexternalaccess -n $PROJECT control -ojsonpath='{.status.ingressIP}'
    3. Get the internal IP for the control node:
      kubectl get virtualmachines.virtualmachine.gdc.goog -n $PROJECT control -ojsonpath='{.status.network.interfaces[1].ipAddresses[0]}'
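
    As with the repository node, you can capture the control node's external address for later use. A convenience sketch:

      export CONTROL_EXTERNAL_IP=$(kubectl get virtualmachineexternalaccess -n $PROJECT control -ojsonpath='{.status.ingressIP}')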

    Configure Ansible

    In this step, the GDC operator sets up the environment on the control node.

    To configure the Ansible environment:

    1. SSH to the control node and set up the Ansible environment:
      cd /home/admin
      cp -r /tmp/apigee-repos .
      cd apigee-repos/ansible-opdk-accelerator/setup
    2. Replace the remote Git repositories with local files:
      sed -i 's/https:\/\/github.com\/carlosfrias/git+file:\/\/\/home\/admin\/apigee-repos/g'  requirements.yml
      sed -i 's/\.git$//g'  requirements.yml
      
    3. Install Ansible requirements:
        sudo chown -R admin /home/admin/apigee-repos
        ansible-galaxy install -r requirements.yml -f
    4. Update the setup configuration:
      1. Edit the main.yml file:
        vi ~/apigee-repos/ansible-opdk-accelerator/setup/roles/apigee-opdk-setup-ansible-controller/tasks/main.yml
      2. Remove the tasks that need GitHub access:
        • Git SSH checkout of configuration repositories
        • Git HTTPS checkout of configuration repositories
      3. Run the setup playbook:
        cd ~/apigee-repos/ansible-opdk-accelerator/setup
        ansible-playbook setup.yml
    5. Upload the SSH key for the Apigee nodes to the control node:
      scp -i CONTROL_SSH_PRIVATE_KEY_FILE APIGEE_NODE_SSH_PRIVATE_KEY_FILE admin@CONTROL_EXTERNAL_IP:/home/admin/.ssh/id_rsa

      Where:

      • CONTROL_SSH_PRIVATE_KEY_FILE is the name of the file containing your control node's SSH private key.
      • APIGEE_NODE_SSH_PRIVATE_KEY_FILE is the name of the file containing your Apigee node's SSH private key.
      • CONTROL_EXTERNAL_IP is the external IP address of the control node.
    6. Create the Ansible inventory config file:
      1. Copy the content of the apigee/scripts/ansible/prod.cfg file to the prod.cfg file:
        vi ~/.ansible/multi-planet-configurations/prod.cfg
      2. Create the prod inventory folder and copy the content of the apigee/scripts/ansible/edge-dc1 file to the edge-dc1 file:
        mkdir ~/.ansible/inventory/prod
        vi ~/.ansible/inventory/prod/edge-dc1
    7. Update the internal IP addresses of Apigee nodes in edge-dc1:
      apigee_000 ansible_host=APIGEE_NODE1_INTERNAL_IP
      apigee_001 ansible_host=APIGEE_NODE2_INTERNAL_IP
      apigee_002 ansible_host=APIGEE_NODE3_INTERNAL_IP
      apigee_003 ansible_host=APIGEE_NODE4_INTERNAL_IP
      apigee_004 ansible_host=APIGEE_NODE5_INTERNAL_IP

      Where the values for the APIGEE_NODE*_INTERNAL_IP are the internal IP addresses of the Apigee nodes obtained in an earlier step.

    8. Configure the ~/.apigee-secure/credentials.yml file with the following values:
      • apigee_repo_user: 'APIGEE_REPO_USER'
      • apigee_repo_password: 'APIGEE_REPO_PASSWORD'
      • opdk_qpid_mgmt_username: 'OPDK_QPID_MGMT_USERNAME'
      • opdk_qpid_mgmt_password: 'OPDK_QPID_MGMT_PASSWORD'

      Where:

      • APIGEE_REPO_USER is the username for the Apigee repository.
      • APIGEE_REPO_PASSWORD is the password for the Apigee repository.
      • OPDK_QPID_MGMT_USERNAME is the username for the Apigee QPID management server.
      • OPDK_QPID_MGMT_PASSWORD is the password for the Apigee QPID management server.

    9. Create a file named license.txt in the ~/.apigee-secure directory.
    10. Copy the content of your valid Apigee Edge for Private Cloud license file into the ~/.apigee-secure/license.txt file you just created.
    11. Configure the following values in the ~/.apigee/custom-properties.yml file:
      • opdk_version: 'OPDK_VERSION'
      • apigee_repo_url: 'APIGEE_REPO_URL'

      Where:

      • OPDK_VERSION is the Apigee Edge for Private Cloud version you want to install.
      • APIGEE_REPO_URL is the URL of the Apigee repository.

    12. Export the configuration file as an environment variable:
      export ANSIBLE_CONFIG=~/.ansible/multi-planet-configurations/prod.cfg
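
    For reference, a sketch of what ~/.apigee/custom-properties.yml might contain, assuming version 4.53.01 and the mirror repository configured earlier (served by nginx on port 3939 with the credentials you chose); the exact repository URL format depends on your mirror setup:

      # Illustrative values only.
      opdk_version: '4.53.01'
      apigee_repo_url: 'http://REPO_USER:REPO_PASSWORD@REPO_INTERNAL_IP:3939/apigee'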

    Install the Apigee components

    In this step, the GDC operator installs the Apigee components using Ansible.

    To install the Apigee components:

    1. Replace the remote git repositories with local files:
      cd ~/apigee-repos/ansible-opdk-accelerator/installations/multi-node/
      sed -i 's/https:\/\/github.com\/carlosfrias/git+file:\/\/\/home\/admin\/apigee-repos/g' requirements.yml
      sed -i 's/\.git$//g'  requirements.yml
    2. Install the Ansible requirements:
      ansible-galaxy install -r requirements.yml -f
    3. Patch the Ansible roles:
      sed -i 's/private_address/inventory_hostname/g' ~/.ansible/roles/apigee-opdk-settings-cassandra/tasks/main.yml
      sed -i 's/include/include_tasks/g' ~/.ansible/roles/apigee-opdk-server-self/tasks/main.yml
      sed -i 's/include/include_tasks/g' ~/.ansible/roles/apigee-opdk-setup-silent-installation-config/tasks/main.yml
      cat << EOF >> ~/.ansible/roles/apigee-opdk-setup-silent-installation-config/templates/response-file-template.conf.j2
      QPID_MGMT_USERNAME={{ opdk_qpid_mgmt_username }}
      QPID_MGMT_PASSWORD={{ opdk_qpid_mgmt_password }}
      EOF
      sed -i 's/mode: 0700/mode: 0700\n      recurse: yes/g' ~/.ansible/roles/apigee-opdk-setup-postgres-config/tasks/main.yml
    4. Replace the contents of the install.yml file with the content in apigee/scripts/ansible/install.yml.
    5. Run the playbook to install the Apigee components:
      ansible-playbook install.yml
    6. Disable the user's password reset link in the Edge UI. Apigee on GDC air-gapped does not include an SMTP server. Follow the steps in Disable the reset password link in the Edge UI.
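
    After the playbook completes, you can verify the status of the Edge components on each Apigee node with the standard apigee-all utility:

      /opt/apigee/apigee-service/bin/apigee-all status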

    Deploy pods and services

    In this step, you'll deploy the uploader, reverse proxy, load balancer, and logging pods and services.

    To deploy the pods and services:

    1. Follow the instructions in Create Harbor registry instances to create a Harbor instance in the GDC project dev-apigee.
    2. Follow the instructions in Create Harbor projects to create a Harbor project named apigee.
    3. Follow the instructions in Configure access control to set up access control for the Harbor project.
    4. Follow the instructions in Sign into Docker and Helm to configure Docker authentication.
    5. Update the IP addresses in the apigee/apigee_user_cluster.toml file as shown:
      mgmt-server-proxy = "APIGEE_NODE1_EXTERNAL_IP"
      router-proxy1 = "APIGEE_NODE2_EXTERNAL_IP"
      router-proxy2 = "APIGEE_NODE3_EXTERNAL_IP"
      

      Where:

      • APIGEE_NODE1_EXTERNAL_IP is the external IP address of the Apigee node1 obtained in an earlier step.
      • APIGEE_NODE2_EXTERNAL_IP is the external IP address of the Apigee node2 obtained in an earlier step.
      • APIGEE_NODE3_EXTERNAL_IP is the external IP address of the Apigee node3 obtained in an earlier step.

    6. Place the SSL certificate file (server.crt) and key file (server.key) for configuring HTTPS under the apigee/mgmt-server-proxy and apigee/router-proxy folders.

      To generate self-signed certificates, use the following command:

      openssl req -newkey rsa:4096 -x509 -nodes -keyout server.key -new -out server.crt -subj "/CN=*.apigeetest.com" -sha256 -days 365
    7. Update the value of SSH_PASSWORD for the root user in the uploader container in the apigee/uploader/Dockerfile file:
      RUN echo 'root:SSH_PASSWORD' | chpasswd
    8. Get the credentials for the user cluster:
      gdcloud clusters get-credentials USER_CLUSTER

      Where USER_CLUSTER is the name of the user GKE cluster.

    9. Run the deploy script to deploy the pods and services:
      source venv/bin/activate
      python apigee/solution_deploy_user_cluster.py gdc-air-gapped
    10. Get the external IP addresses of the services:
      kubectl get svc -n $PROJECT
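
    The next section needs the external IP addresses of the uploader and Fluent Bit forwarder services from this output. To capture a single service's address, a sketch like the following works; SERVICE_NAME is a placeholder for the name shown in the kubectl get svc output:

      # SERVICE_NAME is a placeholder; use the service name shown by kubectl get svc.
      kubectl get svc SERVICE_NAME -n $PROJECT -ojsonpath='{.status.loadBalancer.ingress[0].ip}'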

    Update uploader and Fluent Bit forwarder IPs

    In this step, you'll update the uploader and Fluent Bit forwarder IPs in the backup and Apigee setup scripts.

    1. Update SSH_PASSWORD, root, and UPLOADER_EXTERNAL_IP in apigee/helm/scripts/backup_setup.sh:
      sshpass -p SSH_PASSWORD scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null apigee-backup* root@UPLOADER_EXTERNAL_IP:/temp/

      Where:

      • SSH_PASSWORD is the password for the root user.
      • UPLOADER_EXTERNAL_IP is the external IP address of the uploader service obtained in an earlier step.

    2. Update FLUENTBIT_EXTERNAL_IP in apigee/helm/scripts/apigee_setup.sh:
      export FLUENTBIT_EXTERNAL_IP=FLUENTBIT_EXTERNAL_IP

    Updating the startup script requires restarting the Apigee nodes. To restart the nodes:

    1. Stop the VMs:
      gdcloud clusters get-credentials ORG_ADMIN_CLUSTER
      export PROJECT=dev-apigee
      for i in 1 2 3 4 5; do
        gdcloud compute instances stop node$i --project $PROJECT
        done
    2. Redeploy the Helm chart:
      python apigee/solution_deploy.py gdc-air-gapped
    3. Start the VMs:
      for i in 1 2 3 4 5; do
        gdcloud compute instances start node$i --project $PROJECT
        done

    Onboard the Apigee organization

    In this step, the GDC operator onboards the Apigee organization by running a setup script on node1.

    To onboard the Apigee organization:

    1. Update the following values in apigee/scripts/apigee_org_setup.sh as shown. Update other parameters as needed.
      IP1=APIGEE_NODE1_INTERNAL_IP
      
      VHOST_ALIAS="APIGEE_NODE2_EXTERNAL_IP:9001 APIGEE_NODE3_EXTERNAL_IP:9001"

      Where:

      • APIGEE_NODE1_INTERNAL_IP is the internal IP address of the Apigee node1 obtained in an earlier step.
      • APIGEE_NODE2_EXTERNAL_IP is the external IP address of the Apigee node2 obtained in an earlier step.
      • APIGEE_NODE3_EXTERNAL_IP is the external IP address of the Apigee node3 obtained in an earlier step.

    2. Run the script on node1 to onboard the organization:
      chmod a+x apigee_org_setup.sh
      ./apigee_org_setup.sh
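
    To confirm that the organization was onboarded, you can query the Management API locally on node1; the admin credentials shown here match the examples used later in this guide:

      curl -u "opdk@apigee.com:Apigee123!" "http://localhost:8080/v1/organizations"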

    Test HTTP connectivity

    In this step, you'll test HTTP connectivity for the Management API and API proxy.

    To test HTTP connectivity:

    1. Get the external IP address of the apigee-elb service:
      gdcloud clusters get-credentials USER_CLUSTER
      export PROJECT=dev-apigee
      kubectl get svc apigee-elb -n $PROJECT

      Where USER_CLUSTER is the name of the user GKE cluster.

      This service acts as the endpoint for the Edge UI, Management API, and API proxy.

    Test the Management API

    To test the Management API, send an HTTP request to the endpoint:

    curl -u "opdk@apigee.com:Apigee123!" "http://APIGEE_ELB_EXTERNAL_IP:8080/v1/o/ORG_NAME/e/ENV_NAME/provisioning/axstatus"

    Where:

    • APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
    • ORG_NAME is the name of the Apigee organization.
    • ENV_NAME is the name of the Apigee environment.

    Test the API proxy

    To test the API proxy:

    1. Log in to the Edge UI.
    2. On the API Proxies page, click Create to create a new API proxy.
    3. In the Proxy details page, enter the following values:
      • Proxy type: Select No target.
      • Proxy name: ok
      • Base path: /ok
      • Target: http://APIGEE_ELB_EXTERNAL_IP:9001
    4. Click Create to create the API proxy.
    5. Send an HTTP request to /ok:
      curl -i http://APIGEE_ELB_EXTERNAL_IP:9001/ok
    6. Confirm the response is 200 OK.

    Configure TLS and test HTTPS

    In this step, the GDC operator configures Transport Layer Security (TLS) for the API proxy, Edge UI, and Management API.

    Configure TLS for the API proxy

    1. Follow the steps in Create a keystore/truststore and alias to create a self-signed certificate with the following values:
      • KeyStore: myTestKeystore
      • KeyAlias: myKeyAlias
      • Common Name: apigeetest.com
    2. Make an API call to create a virtual host named secure with the host alias api.apigeetest.com:
      curl -v -H "Content-Type:application/xml" \
        -u "opdk@apigee.com:Apigee123!" "http://APIGEE_ELB_EXTERNAL_IP:8080/v1/o/ORG_NAME/e//virtualhosts" \
        -d '<VirtualHost  name="secure">
            <HostAliases>
              <HostAlias>api.apigeetest.com</HostAlias>
            </HostAliases>
            <Interfaces/>
            <Port>9005</Port>
            <OCSPStapling>off</OCSPStapling>
            <SSLInfo>
              <Enabled>true</Enabled>
              <ClientAuthEnabled>false</ClientAuthEnabled>
              <KeyStore>myTestKeystore</KeyStore>
              <KeyAlias>myKeyAlias</KeyAlias>
            </SSLInfo>
          </VirtualHost>'

      Where:

      • APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.
      • ORG_NAME is the name of the Apigee organization.
      • ENV_NAME is the name of the Apigee environment where the virtual host should be created.
    3. Create an API proxy using the secure virtual host.
    4. On the routers, configure DNS resolution for virtual hosts:
      echo '127.0.0.1 api.apigeetest.com' | sudo tee -a /etc/hosts
    5. Confirm that the virtual hosts work locally by sending an HTTPS request to the endpoint:
      curl https://api.apigeetest.com:9005/ok -v -k
    6. Configure DNS resolution for the endpoint:
      echo 'APIGEE_ELB_EXTERNAL_IP apigeetest.com' | sudo tee -a /etc/hosts

      Where APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.

    7. Navigate to https://apigeetest.com/ok in a web browser and confirm that it works.

    Configure TLS for the Edge UI

    To configure TLS for the Edge UI:

    1. Generate a keystore file from the SSL certificate file and key file:
      openssl pkcs12 -export -clcerts -in server.crt -inkey server.key -out keystore.pkcs12
      keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype jks
    2. Place the keystore file under the Apigee folder on node1:
      scp -i SSH_PRIVATE_KEY_FILE keystore.jks admin@APIGEE_NODE1_EXTERNAL_IP:/home/admin/

      Where:

      • SSH_PRIVATE_KEY_FILE is the name of the file containing your Apigee node's SSH private key.
      • APIGEE_NODE1_EXTERNAL_IP is the external IP address of the Apigee node1 obtained in an earlier step.

    3. SSH to node1 and move the keystore file to the Apigee folder:
      sudo mv keystore.jks /opt/apigee/customer/application/
    4. Create the SSL config file:
      sudo vi /tmp/sslConfigFile
    5. Update the value of KEY_PASS_PHRASE as shown:
      HTTPSPORT=9443
      DISABLE_HTTP=n
      KEY_ALGO=JKS
      KEY_FILE_PATH=/opt/apigee/customer/application/keystore.jks
      KEY_PASS=KEY_PASS_PHRASE
    6. Configure SSL using the config file:
      sudo chown apigee:apigee /tmp/sslConfigFile
      /opt/apigee/apigee-service/bin/apigee-service edge-ui configure-ssl -f /tmp/sslConfigFile
    7. Configure DNS resolution for the Edge UI:
      echo 'APIGEE_ELB_EXTERNAL_IP ui.apigeetest.com' | sudo tee -a /etc/hosts

      Where APIGEE_ELB_EXTERNAL_IP is the external IP address of the apigee-elb service obtained in an earlier step.

    8. Access https://ui.apigeetest.com:9443 in a web browser and confirm it works. For more details, refer to the guide.

    Configure TLS for the Management API

    To configure TLS for the Management API:

    1. Configure the owner for the keystore file (use the same one as the Edge UI):
      sudo chown apigee:apigee /opt/apigee/customer/application/keystore.jks
    2. Create the properties file:
      sudo vi /opt/apigee/customer/application/management-server.properties
    3. Replace the value of KEY_PASS_PHRASE with the keystore password, as shown:

      conf_webserver_ssl.enabled=true
      # Leave conf_webserver_http.turn.off set to false
      # because many Edge internal calls use HTTP.
      conf_webserver_http.turn.off=false
      conf_webserver_ssl.port=8443
      conf_webserver_keystore.path=/opt/apigee/customer/application/keystore.jks
      # Enter the obfuscated keystore password below.
      conf_webserver_keystore.password=KEY_PASS_PHRASE
    4. Restart the management server for the changes to take effect:
      /opt/apigee/apigee-service/bin/apigee-service edge-management-server restart
    5. Confirm that HTTPS works locally:
      curl -u "opdk@apigee.com:Apigee123!" "https://localhost:8443/v1/users" -k
    6. From the client, access https://apigeetest.com:8443/v1/users in the browser. Enter the admin username and password to confirm that the credentials are configured correctly.

    What's next