Edge for Private Cloud v. 4.17.05
Apigee provides test scripts that you can use to validate your installation.
Run the validation tests
Each step of the validation process returns an HTTP 20X response code when the test succeeds.
To run the test scripts:
- Install apigee-validate on a Management Server node:
> /opt/apigee/apigee-service/bin/apigee-service apigee-validate install
- Run the setup command on a Management Server node to invoke the test scripts:
> /opt/apigee/apigee-service/bin/apigee-service apigee-validate setup -f configFile
The configFile file must contain the following property:
APIGEE_ADMINPW=sysAdminPword
If omitted, you will be prompted for the password. (A sample config file is shown after these steps.)
By default, the apigee-validate utility creates a virtual host on the Router that uses port 59001. If that port is not open on the Router, you can optionally include the VHOST_PORT property in the config file to set the port. For example:
VHOST_PORT=9000
- The script then does the following:
- Creates an organization and associates it with the pod.
- Creates an environment and associates the Message Processor with the environment.
- Creates a virtual host.
- Imports a simple health check proxy and deploys the application to the “test” environment.
- Imports the SmartDocs proxy.
- Executes the test to make sure everything is working as expected.
A successful test returns an HTTP 20X response.
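For reference, here is a minimal sketch of what the configFile passed to apigee-validate setup might contain. APIGEE_ADMINPW is the only required property; VHOST_PORT is included only to illustrate the optional port override described above, and both values are placeholders:
# Required: system administrator password used by the validation scripts.
APIGEE_ADMINPW=sysAdminPword
# Optional: override the default virtual host port (59001) if it is not open on the Router.
VHOST_PORT=9000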
To remove the organization, environment and other artifacts created by the test scripts:
- Run the following command:
> /opt/apigee/apigee-service/bin/apigee-service apigee-validate clean -f configFile
where configFile is the same file you used to run the tests.
Note: If the tests return errors that you cannot resolve through troubleshooting, contact Apigee Support and provide the error log.
Verify pod installation
Now that you have installed Apigee Analytics, it is recommended that you perform the following basic but important validations:
- Verify that the Management Server is in the central POD. On the Management Server, run the following cURL command:
> curl -u sysAdminEmail:password http://localhost:8080/v1/servers?pod=central
You should see output in the form:
[ {
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "central",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ ]
},
"type" : [ "application-datastore", "scheduler-datastore", "management-server", "auth-datastore", "apimodel-datastore", "user-settings
datastore", "audit-datastore" ],
"uUID" : "d4bc87c6-2baf-4575-98aa-88c37b260469"
}, {
"externalHostName" : "localhost",
"externalIP" : "192.168.1.11",
"internalHostName" : "localhost",
"internalIP" : "192.168.1.11",
"isUp" : true,
"pod" : "central",
"reachable" : true,
"region" : "dc-1",
"tags" : {
"property" : [ {
"name" : "started.at",
"value" : "1454691312854"
}, ... ]
},
"type" : [ "qpid-server" ],
"uUID" : "9681202c-8c6e-4da1-b59b-23e3ef092f34"
} ]
- Verify that the Router and Message Processor are in the gateway POD. On the Management Server, run the following cURL command:
> curl -u sysAdminEmail:password http://localhost:8080/v1/servers?pod=gateway
You should see output similar to that for the central pod, but for the Router and Message Processor.
- Verify that Postgres is in the analytics POD. On the Management Server, run the following cURL command:
> curl -u sysAdminEmail:password http://localhost:8080/v1/servers?pod=analytics
You should see output similar to that for the central pod, but for Postgres. (A combined check of all three pods is sketched below.)
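As a quick combined check, you can loop over all three pods and confirm that every component reports "isUp" : true. The following sketch assumes the jq utility is available on the Management Server; sysAdminEmail and password are the same placeholders used in the commands above:
# Query each pod and print each server's type, isUp, and reachable fields (requires jq).
for pod in central gateway analytics; do
  echo "--- pod: $pod ---"
  curl -s -u sysAdminEmail:password -H "Accept: application/json" \
    "http://localhost:8080/v1/servers?pod=$pod" \
    | jq -r '.[] | "\(.type | join(",")) -> isUp=\(.isUp), reachable=\(.reachable)"'
done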