When you set up your Kubernetes environment, Apigee recommends that you create two dedicated node pools, according to the requirements specified in Kubernetes cluster requirements. These separate node pools support the mix of stateful and stateless services that make up the hybrid runtime.
For example, Cassandra is the runtime datastore that provides persistence for the key management system (KMS), key value map (KVM), quota, and cache features in Edge. Because Cassandra is deployed as a stateful service, it requires a dedicated stateful node pool. The Message Processor, on the other hand, is a stateless service.
GKE node pool configuration
In GKE, node pools must have a unique name, and GKE automatically labels each node with the name of the node pool it belongs to, using the label cloud.google.com/gke-nodepool.
To configure the node pools to be used when installing the hybrid runtime services, use the following example configuration, where the node pools are named apigee-data and apigee-runtime:
cassandra:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-data
mp:
  nodeSelector:
    key: cloud.google.com/gke-nodepool
    value: apigee-runtime
For more information, see Adding and managing node pools in the GKE documentation.
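If you have not yet created the two node pools, you can add them to an existing GKE cluster with gcloud. The following is a minimal sketch; CLUSTER_NAME and REGION are placeholders for your own cluster name and region, and any sizing or machine-type flags required by the Kubernetes cluster requirements are omitted here:

```shell
# Create the stateful node pool for Cassandra (placeholder cluster/region values).
gcloud container node-pools create apigee-data \
  --cluster=CLUSTER_NAME \
  --region=REGION

# Create the stateless node pool for runtime services such as the Message Processor.
gcloud container node-pools create apigee-runtime \
  --cluster=CLUSTER_NAME \
  --region=REGION
```

Because GKE labels each node with cloud.google.com/gke-nodepool automatically, no manual labeling step is needed after the pools are created.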
CNCF-conformant Kubernetes configuration
If you are using a CNCF-conformant distribution of Kubernetes (other than GKE or OpenShift), you must explicitly label each Kubernetes worker node. The following example kubectl commands label the nodes, where the node pools are named apigee-data and apigee-runtime:
kubectl label node ip-10-50-99-225.ec2.internal node-pool=apigee-data
kubectl label node ip-10-50-56-83.ec2.internal node-pool=apigee-runtime
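After labeling, you can confirm that each worker node carries the expected label. This sketch uses kubectl's -L flag to print the node-pool label as a column (node names shown here follow the EC2-style names in the example above):

```shell
# List all nodes with their node-pool label value in an extra column.
kubectl get nodes -L node-pool
```

Each node intended for Cassandra should show apigee-data, and each node intended for the stateless runtime services should show apigee-runtime.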
With the nodes labeled as above, you can configure the installation to use those labels and values by setting them in overrides.yaml, as the following example shows:
cassandra:
  nodeSelector:
    key: node-pool
    value: apigee-data
mp:
  nodeSelector:
    key: node-pool
    value: apigee-runtime
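Once the hybrid runtime is installed with these overrides, you can check that pods were scheduled onto the intended nodes. The following sketch assumes the runtime components run in the apigee namespace (the default for hybrid installations):

```shell
# Show each pod with the node it was scheduled on; Cassandra pods should
# land on apigee-data nodes, Message Processor pods on apigee-runtime nodes.
kubectl get pods -n apigee -o wide
```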