Each caching policy you configure can use one of two cache types: a shared cache that is included with each environment and available to your applications, or an environment cache that you create yourself.
- Shared cache -- By default, your proxies have access to a shared cache included with each environment you create. The shared cache works well for basic use cases.
You can work with the shared cache only through caching policies; it can't be managed with the management API. To have a caching policy use the shared cache, simply omit the policy's `<CacheResource>` element.
- Environment cache -- When you want to configure cache properties with values you choose, create an environment-scoped cache. For more about creating a cache, see Creating and editing an environment cache.
When you create an environment cache, you configure its default properties. There is no limit to the number of caches you can create. To have a caching policy use an environment cache, specify the cache name in the policy's `<CacheResource>` element.
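To make the distinction concrete, here are two illustrative LookupCache fragments (the policy names, cache name, and key fragment are placeholders, not values from this document). The first omits `<CacheResource>`, so it uses the shared cache; the second names an environment cache:

```xml
<!-- Uses the shared cache: no <CacheResource> element. -->
<LookupCache name="LC-SharedCache">
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <AssignTo>cached.value</AssignTo>
</LookupCache>

<!-- Uses an environment cache named "myEnvCache". -->
<LookupCache name="LC-EnvCache">
  <CacheResource>myEnvCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <AssignTo>cached.value</AssignTo>
</LookupCache>
```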
The maximum size of each cached object is 512 KB.
Both the shared and environment caches are built on a two-level system made up of an in-memory level and a persistent level. Policies interact with both levels as a combined framework -- the relationship between the levels is managed by the system.
- Level 1 is an in-memory cache for fast access. Each message processing node has its own in-memory cache (implemented from Ehcache) for the fastest response to requests.
- On each node, a certain percentage of memory is reserved for use by the cache.
- As the memory limit is reached, Apigee Edge removes cache entries from memory (though they are still kept in the persistent data store) to ensure that memory remains available for other processes.
- Entries are removed in the order of time since last access, with the oldest entries removed first.
- These caches are also limited by the number of entries in the cache.
- Level 2 is a persistent cache beneath the in-memory cache. All message processing nodes share a cache data store (implemented on Cassandra) for persisting cache entries.
- Cache entries persist here as they're removed from in-memory caches, such as when in-memory limits are reached.
- Because the persistent cache is shared across message processors, cache entries are available regardless of which node receives a request for the cached data.
- This cache is limited in that only entries of a certain size may be cached.
- There is no limit on the number of cache entries. The entries are expired in the persistent cache only on the basis of expiration settings.
The following describes how Apigee Edge handles cache entries as your caching policies do their work.
- When a policy writes to the cache (such as with the PopulateCache policy):
- The Apigee Edge system puts the entry in both the in-memory cache and the persistent cache.
- The system sends all other message processing nodes a request to create a cache entry with the same key, cache name, and value, so that every node holds the same fresh data.
- When a policy reads from the cache (such as with the LookupCache policy):
- The system looks first in the in-memory cache for the entry. If multiple message processors request the same uncached data at the same time, each retrieves it from the source independently.
- If there's no corresponding in-memory entry, the system looks for the entry in the persistent cache.
- If the entry isn't in the persistent cache, fresh data is retrieved from the backend source, then cached.
- When a policy invalidates a cache entry (such as with the InvalidateCache policy):
- The message processing node that receives the request attempts to broadcast a request to invalidate the entry on all other nodes (including routers and management servers).
- If the broadcast event works, each receiving message processor will remove the cached value, then retrieve data from the source the next time it is requested.
- If the broadcast doesn't succeed, the invalidated value remains in the other in-memory caches, and those message processors serve stale data until the entry's time to live expires. After expiration, they retrieve a fresh copy of the value.
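As a sketch of how these three operations appear in a proxy, the following policy fragments write, read, and invalidate the same entry. The policy names, cache name, key fragment, and flow variable are illustrative placeholders:

```xml
<!-- Write: store the value of a flow variable under a computed key. -->
<PopulateCache name="PC-StoreToken">
  <CacheResource>myEnvCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <Source>oauth.token</Source>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
</PopulateCache>

<!-- Read: look up the same key; on a miss, the flow can fetch from the backend. -->
<LookupCache name="LC-ReadToken">
  <CacheResource>myEnvCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
  <AssignTo>oauth.token</AssignTo>
</LookupCache>

<!-- Invalidate: remove the entry; the event is broadcast to the other nodes. -->
<InvalidateCache name="IC-RemoveToken">
  <CacheResource>myEnvCache</CacheResource>
  <CacheKey>
    <KeyFragment ref="request.queryparam.client_id"/>
  </CacheKey>
</InvalidateCache>
```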
Through configuration, you can manage some aspects of the cache.
- In-memory (L1) cache. Memory limits for your cache are not configurable -- limits are set by Apigee for each message processor that hosts caches for multiple customers.
In a hosted environment, where in-memory caches for all customer deployments are hosted across multiple shared message processors, each processor features an Apigee-configurable memory percentage threshold to ensure that caching does not consume all of the application's memory. As the threshold is crossed for a given message processor, cache entries are evicted from memory on a least-recently-used basis. Entries evicted from memory remain in the persistent level of the cache.
- Persistent (L2) cache. There are no limits on the number of entries in the cache. Entries evicted from the in-memory cache remain in the persistent cache in keeping with configurable time-to-live settings.
Individual message processors retrieve data by reading from the data store L2 cache. If data is not in the cache, it's retrieved through a service callout, target endpoint response, and so on.
The following table lists settings you can use to optimize cache performance. You can specify values for these settings when you create a new environment cache, as described in Creating and editing an environment cache.
| Setting | Description | Notes |
| --- | --- | --- |
| Skip if element size in KB exceeds | If an entry exceeds the specified size, it is skipped (not cached). | This helps prevent caching overly large entries. The maximum size for a cached object is 512 KB. |
| Expiration | Specifies the time to live for cache entries. | |
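Expiration can also be set per entry at the policy level. As an illustrative fragment (assuming the standard `<ExpirySettings>` element of the caching policies, with placeholder values), the three expiry styles look like this:

```xml
<ExpirySettings>
  <!-- Use exactly one of the following. -->
  <TimeoutInSec>300</TimeoutInSec>              <!-- expires N seconds after the entry is written -->
  <!-- <TimeOfDay>14:30:00</TimeOfDay> -->      <!-- expires at a specific time of day -->
  <!-- <ExpiryDate>12-31-2025</ExpiryDate> -->  <!-- expires on a specific date -->
</ExpirySettings>
```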