The purpose of this document is to provide a set of standards and best practices for developing with Apigee Edge. The topics covered here include design, coding, policy use, monitoring, and debugging. The information has been gathered from the experience of developers working with Apigee to implement successful API programs. This is a living document and will be updated from time to time.
In addition to the guidelines here, you might also find the Apigee Edge Antipatterns Community post useful.
Comments and Documentation
- Provide inline comments in the ProxyEndpoint and TargetEndpoint configurations. Comments enhance readability for a Flow, especially where policy file names are not sufficiently descriptive to express the underlying functionality of the Flow.
- Make comments useful. Avoid obvious comments.
- Use consistent indentation, spacing, vertical alignment, etc.
Framework-style coding involves storing API proxy resources in your own version control system for reuse across local development environments. For example, to reuse a policy, store it in source control so that developers can sync to it and use it in their own proxy development environments.
- To enable DRY ("don't repeat yourself"), where possible, policy configurations and scripts
should implement specialized, reusable functions. For example, a dedicated policy to extract
query parameters from request messages could be called
ExtractVariables.ExtractRequestParameters. A dedicated policy to inject CORS headers could be called
AssignMessage.SetCORSHeaders. Those policies could then be stored in your source control system and added to each API proxy that needs to extract parameters or set CORS headers, without requiring you to create redundant (and hence less manageable) configurations.
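As an illustration, a reusable parameter-extraction policy of this kind might look like the following sketch. The limit and offset query parameters and the queryinfo variable prefix are illustrative assumptions, not part of any standard:

```xml
<!-- Sketch of a reusable query-parameter extraction policy.
     Parameter names and the variable prefix are illustrative. -->
<ExtractVariables name="ExtractVariables.ExtractRequestParameters">
  <Source>request</Source>
  <QueryParam name="limit">
    <Pattern ignoreCase="true">{limit}</Pattern>
  </QueryParam>
  <QueryParam name="offset">
    <Pattern ignoreCase="true">{offset}</Pattern>
  </QueryParam>
  <VariablePrefix>queryinfo</VariablePrefix>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</ExtractVariables>
```

Because the policy name encodes both the policy type and its function, a developer syncing it from source control can tell what it does without opening the file.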
- The policy name attribute and the XML policy file name must be identical.
- The Script and ServiceCallout policy name attribute and the name of the resource file should be identical.
- The DisplayName attribute should accurately describe the policy's function to someone who has never worked with that API proxy before.
- Name policies according to their function, for example, ExtractVariables.ExtractRequestParameters or AssignMessage.SetCORSHeaders.
- Use proper extensions for resource files: .py for Python and .jar for Java JAR files.
- Variable names should be consistent. If you choose a style, such as camelCase or under_score, use it throughout the API proxy.
- Use variable prefixes, where possible, to organize variables based on their purpose.
API proxy development
Initial Design Considerations
- Construct Flows in an organized manner. Multiple Flows, each with a single condition, are preferable to multiple conditional attachments to the same PreFlow and PostFlow.
- As a 'failsafe', create a default API proxy with a ProxyEndpoint BasePath of /. This can be used to redirect base API requests to a developer site, to return a custom response, or to perform another action more useful than returning the default error.
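A minimal sketch of such a 'failsafe' proxy follows; the RaiseFault.UnknownRequest policy that builds the custom response, and all other names, are illustrative assumptions:

```xml
<!-- Sketch of a catch-all ProxyEndpoint mounted at BasePath "/".
     Policy and endpoint names are illustrative. -->
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/</BasePath>
    <VirtualHost>default</VirtualHost>
  </HTTPProxyConnection>
  <PreFlow>
    <Request>
      <Step>
        <Name>RaiseFault.UnknownRequest</Name>
      </Step>
    </Request>
  </PreFlow>
  <!-- No target needed: the RaiseFault policy ends processing. -->
  <RouteRule name="NoRoute"/>
</ProxyEndpoint>
```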
- Use TargetServer resources to decouple TargetEndpoint configurations from concrete URLs,
supporting promotion across environments.
See Load balancing across backend servers.
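For example, a TargetEndpoint can reference named TargetServer resources instead of a hardcoded URL. A sketch, assuming TargetServers named target1 and target2 have been defined separately in each environment:

```xml
<!-- Sketch: TargetEndpoint decoupled from concrete URLs.
     The TargetServer names and path are illustrative. -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <LoadBalancer>
      <Server name="target1"/>
      <Server name="target2"/>
    </LoadBalancer>
    <Path>/v1/resource</Path>
  </HTTPTargetConnection>
</TargetEndpoint>
```

Because the backend hosts live in the per-environment TargetServer definitions, the same proxy bundle can be promoted from test to production unchanged.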
- If you have multiple RouteRules, create one as the 'default', that is, a RouteRule with no condition. Ensure that the default RouteRule is defined last in the list of conditional Routes, because RouteRules are evaluated top-down in the ProxyEndpoint.
See API proxy configuration reference.
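A sketch of this ordering; the endpoint names and the X-Partner header condition are illustrative:

```xml
<!-- Conditional RouteRules first; the unconditional default last. -->
<RouteRule name="PartnerRoute">
  <Condition>request.header.X-Partner = "true"</Condition>
  <TargetEndpoint>partner</TargetEndpoint>
</RouteRule>
<RouteRule name="default">
  <TargetEndpoint>default</TargetEndpoint>
</RouteRule>
```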
- API proxy bundle size: API proxy bundles cannot be larger than 15MB. In Apigee Edge for Private Cloud, you can change the size limitation by modifying the thrift_framed_transport_size_in_mb property in the following locations: cassandra.yaml (in Cassandra) and conf/apigee/management-server/repository.properties.
- API versioning: For Apigee's thoughts and recommendations on API versioning, see Versioning in the Web API Design: The Missing Link e-book.
Before publishing your APIs, you'll need to enable CORS on your API proxies to support client-side cross-origin requests.
For information about enabling CORS on your API proxies before publishing the APIs, see Adding CORS support to an API proxy.
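A CORS-enabling policy along the lines of the AssignMessage.SetCORSHeaders policy mentioned earlier might look like the following sketch; the allowed origins, methods, and headers shown are illustrative and should be tightened for production use:

```xml
<!-- Sketch: attach CORS headers to the response.
     Header values are illustrative; restrict them in production. -->
<AssignMessage name="AssignMessage.SetCORSHeaders">
  <Set>
    <Headers>
      <Header name="Access-Control-Allow-Origin">*</Header>
      <Header name="Access-Control-Allow-Methods">GET, POST, OPTIONS</Header>
      <Header name="Access-Control-Allow-Headers">origin, x-requested-with, accept, content-type</Header>
    </Headers>
  </Set>
  <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>
```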
Message payload size
To prevent memory issues in Edge, message payload size is restricted to 10MB. Exceeding this limit results in an error.
This issue is also discussed in this Apigee Community post.
Following are the recommended strategies for handling large message sizes in Edge:
- Stream requests and responses. Note that when you stream, policies no longer have access to the message content. See Streaming requests and responses.
- In Edge for Private Cloud version 4.15.07 and earlier, edit the message processor http.properties file to increase the limit in the HTTPResponse.body.buffer.limit parameter. Be sure to test before deploying the change to production.
In Edge for Private Cloud version 4.16.01 and later, requests with a payload must include the Content-Length header or, in the case of streaming, the "Transfer-Encoding: chunked" header. For a POST to an API proxy with an empty payload, you must pass a Content-Length of 0.
- In Edge for Private Cloud version 4.16.01 and later, set the following properties in
/opt/apigee/router.properties or message-processor.properties to change the limits. See
the message size limit on the Router or Message Processor for more.
Both properties have a default value of "10m", corresponding to 10MB.
- Leverage FaultRules for all fault handling. (RaiseFault policies are used to stop message Flow and send processing to the FaultRules Flow.)
- Within the FaultRules Flow, use AssignMessage policies to build the fault response, not RaiseFault policies. Conditionally execute AssignMessage policies based on the fault type that occurs.
- Always include a default 'catch-all' fault handler so that system-generated faults can be mapped to customer-defined fault response formats.
- If possible, always make fault responses match any standard formats available in your company or project.
- Use meaningful, human-readable error messages that suggest a solution to the error condition.
See Handling faults.
For industry best practices, see RESTful error response design.
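Putting these guidelines together, a fault-handling section in an endpoint configuration might be sketched as follows; the fault condition and policy names are illustrative:

```xml
<!-- Sketch: conditional FaultRules build responses with AssignMessage
     policies; the DefaultFaultRule is the catch-all. Names illustrative. -->
<FaultRules>
  <FaultRule name="invalid_key">
    <Condition>fault.name = "InvalidApiKey"</Condition>
    <Step>
      <Name>AssignMessage.InvalidKeyResponse</Name>
    </Step>
  </FaultRule>
</FaultRules>
<DefaultFaultRule name="all">
  <AlwaysEnforce>true</AlwaysEnforce>
  <Step>
    <Name>AssignMessage.GenericFaultResponse</Name>
  </Step>
</DefaultFaultRule>
```

Note that the fault responses themselves are built by AssignMessage policies, per the guideline above, so every fault path can emit the same standard response format.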
- Use key/value maps only for limited data sets. They are not designed to be a long-term data store.
- Consider performance when using key/value maps as this information is stored in the Cassandra database.
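A sketch of a key/value map lookup; the map name, key, and target variable are illustrative. The ExpiryTimeInSecs element caches the looked-up value on the Message Processor, reducing repeated trips to Cassandra:

```xml
<!-- Sketch: read a small config value from a KVM, with caching.
     Map identifier, key, and variable names are illustrative. -->
<KeyValueMapOperations name="KVM.GetSettings" mapIdentifier="settings">
  <Scope>environment</Scope>
  <ExpiryTimeInSecs>300</ExpiryTimeInSecs>
  <Get assignTo="config.timeout">
    <Key>
      <Parameter>timeout</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
```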
- Do not populate the response cache if the response is not successful or if the request is not a GET. Creates, updates, and deletes should not be cached. <SkipCachePopulation>response.status.code != 200 or request.verb != "GET"</SkipCachePopulation>
- Populate the cache with a single consistent content type (for example, XML or JSON). After retrieving a response cache entry, convert it to the needed content type with JSONToXML or XMLToJSON. This prevents storing the same data two, three, or more times.
- Ensure that the cache key is sufficient to the caching requirement. In many cases, request.querystring can be used as the unique identifier.
- Do not include the API key (client_id) in the cache key unless explicitly required. Most often, APIs secured only by a key will return the same data to all clients for a given request. It is inefficient to store the same value for a number of entries based on the API key.
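The caching guidelines above might combine into a ResponseCache policy like this sketch; the cache resource name and expiry interval are illustrative:

```xml
<!-- Sketch: cache keyed on the query string, skipping population for
     non-GET or unsuccessful responses. Names and expiry illustrative. -->
<ResponseCache name="ResponseCache.QueryLookup">
  <CacheResource>myCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.querystring"/>
  </CacheKey>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
  <SkipCachePopulation>response.status.code != 200 or request.verb != "GET"</SkipCachePopulation>
</ResponseCache>
```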
- Set appropriate cache expiration intervals to avoid dirty reads.
- Whenever possible, have the response cache policy that populates the cache execute as late as possible in the ProxyEndpoint response PostFlow; in other words, have it execute after any mediation steps, such as conversion between JSON and XML. By caching mediated data, you avoid the performance cost of executing the mediation step each time you retrieve cached data.
Note that you might want to instead cache unmediated data if mediation results in a different response from request to request.
- The response cache policy that looks up the cache entry should execute in the ProxyEndpoint request PreFlow. Avoid implementing too much logic, other than cache key generation, before returning a cache entry; otherwise, the benefits of caching are minimized.
- In general, you should always keep the response cache lookup as close to the client request as possible. Conversely, you should keep the response cache population as close to the client response as possible.
- When using multiple, different response cache policies in a proxy, follow these guidelines
to ensure discrete behavior for each:
- Execute each policy based on mutually exclusive conditions. This will help ensure that only one of multiple response cache policies executes.
- Define different cache resources for each response cache policy. You specify the cache resource in the policy's <CacheResource> element.
Policy and custom code
Policy or custom code?
Use built-in policies first, and reserve custom code for functionality that policies cannot provide, for example:
- Complex routing logic (for example, setting target.url for many different URI combinations).
- Complex payload parsing, such as iterating through a JSON object and Base64 encoding/decoding.
- Include Java source files in source code tracking.
See Convert the response to uppercase with a Java callout and Java Callout policy for information on using Java in API proxies.
- Do not use Python unless absolutely required. Python scripts can introduce performance bottlenecks for simple executions, as they are interpreted at runtime.
- Use a global try/catch, or equivalent.
- Throw meaningful exceptions and catch these properly for use in fault responses.
- Throw and catch exceptions early. Do not use the global try/catch to handle all exceptions.
- Perform null and undefined checks, when necessary. An example of when to do this is when retrieving optional flow variables.
- Avoid making HTTP/S requests inside of a script callout. Instead, use the Apigee ServiceCallout policy as the policy handles connections gracefully.
- When accessing message payloads, try to use context.getRequestMessage. This ensures that the code can retrieve the payload in both request and response flows.
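These JavaScript guidelines can be sketched as a callout structured like the following. In Edge, the context object is injected at runtime; here the logic is wrapped in a function so it can be exercised with a stub, and the apiproduct.tier variable and flow.* variable names are illustrative assumptions:

```javascript
// Sketch of the recommended structure for an Apigee JavaScript callout:
// a global try/catch, null/undefined checks for optional flow variables,
// and meaningful fault information on failure. Variable names illustrative.
function handleMessage(context) {
  try {
    // Read an optional flow variable and guard against null/undefined,
    // since it may not be set on every request.
    var tier = context.getVariable('apiproduct.tier');
    if (tier === null || tier === undefined) {
      tier = 'default';
    }

    // Record the result for downstream policies to use.
    context.setVariable('flow.tier', tier);
    return tier;
  } catch (e) {
    // Global catch: surface a meaningful error message in a flow
    // variable for the FaultRules Flow, then rethrow.
    context.setVariable('flow.error', 'Callout failed: ' + e.message);
    throw e;
  }
}
```

In a real callout the function body would run at the top level with the Edge-provided context; the wrapper exists only so the pattern can be unit tested outside Edge.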
- Import libraries to the Apigee Edge organization or environment and do not include these in the JAR file. This reduces the bundle size and will let other JAR files access the same library repository.
- Import JAR files using the Apigee resources API rather than including them inside the API proxy resources folder. This will reduce deployment times and allow the same JAR files to be referenced by multiple API proxies. Another benefit is class loader isolation.
- Do not use Java for resource handling (for example, creating and managing thread pools).
- Throw meaningful exceptions and catch these properly for use in Apigee fault responses
See Python Script policy.
- There are many valid use cases for using proxy chaining, where you use a service callout in
one API proxy to call another API proxy. If you use proxy chaining, be sure to avoid "infinite
loop" recursive callouts back into the same API proxy.
If you're connecting between proxies that are in the same organization and environment, be sure to see Chaining API proxies together for more on implementing a local connection that avoids unnecessary network overhead.
- Build a ServiceCallout request message using the AssignMessage policy, and populate the request object in a message variable. (This includes setting the request payload, path, and method.)
- The URL that is configured within the policy requires the protocol specification; that is, the protocol portion of the URL (https://, for example) cannot be specified by a variable. Also, you must use separate variables for the domain portion of the URL and for the rest of the URL.
- Store the response object for a ServiceCallout in a separate message variable. You can then parse the message variable and keep the original message payload intact for use by other policies.
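A sketch combining these ServiceCallout guidelines, assuming an AssignMessage policy has already built the request into a message variable named myRequest; the target.host and target.path variables and policy name are illustrative:

```xml
<!-- Sketch: literal protocol, separate variables for the domain and
     the rest of the URL, and the response kept in its own variable. -->
<ServiceCallout name="ServiceCallout.GetQuota">
  <Request variable="myRequest"/>
  <Response>calloutResponse</Response>
  <HTTPTargetConnection>
    <URL>https://{target.host}{target.path}</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```

Because the response lands in calloutResponse, the original request and response payloads remain intact for the policies that follow.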
- For better performance, look up apps by uuid instead of app name.
See Access Entity policy.
- Use a common syslog policy across bundles and within the same bundle. This will keep a consistent logging format.
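One possible shape for such a shared syslog policy; the host, port, and message format are illustrative assumptions:

```xml
<!-- Sketch of a reusable syslog logging policy kept in source control
     and shared across bundles. Host and format are illustrative. -->
<MessageLogging name="MessageLogging.Syslog">
  <Syslog>
    <Message>[{organization.name}] [{apiproxy.name}] [{request.verb} {request.uri}] [{response.status.code}]</Message>
    <Host>logs.example.com</Host>
    <Port>514</Port>
  </Syslog>
</MessageLogging>
```

Keeping the Message template identical everywhere is what makes the resulting log stream consistently parseable.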
Cloud customers are not required to check individual components of Apigee Edge (Routers, Message Processors, and so on). Apigee's Global Operations team thoroughly monitors all of the components, along with API health checks, using health check requests provided by the customer.
Analytics can provide non-critical API monitoring, because error percentages are measured.
See Analytics dashboards.
The Trace tool in the Apigee Edge management UI is useful for debugging runtime API issues, during development or production operation of an API.
See Using the Trace tool.
- Use IP address restriction policies to limit access to your test environment. Whitelist the IP addresses of your development machines or environments. See Access Control policy.
- Always apply content protection policies (JSON and/or XML) to API proxies that are deployed to production. See JSON Threat Protection policy.
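A JSON Threat Protection sketch follows; the limits shown are illustrative and should be tuned to your actual payloads:

```xml
<!-- Sketch: bound the structure of inbound JSON payloads.
     All limit values are illustrative. -->
<JSONThreatProtection name="JSONThreatProtection.Default">
  <Source>request</Source>
  <ContainerDepth>10</ContainerDepth>
  <ObjectEntryCount>50</ObjectEntryCount>
  <ArrayElementCount>50</ArrayElementCount>
  <StringValueLength>500</StringValueLength>
</JSONThreatProtection>
```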
- See the following topics for more security best practices: