
Cache internals

This topic describes the workings of the cache beneath policies such as the PopulateCache, LookupCache, InvalidateCache, and ResponseCache policies.

Shared and environment caches

Each caching policy you configure can use one of two cache types: an included shared cache that your applications have access to and an environment cache that you create.

  • Shared cache -- By default, your proxies have access to a shared cache that is included with each environment you create. The shared cache works well for basic use cases.

    While you can't configure properties of the cache itself, you can configure policies that access the cache so that they override some cache properties, such as cache entry expiration (time to live). You can work with the shared cache only by using caching policies. It can't be managed using the management API. You can have a caching policy use the shared cache by simply omitting the policy's <CacheResource> element.
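    For example, a LookupCache policy can use the shared cache simply by leaving out the <CacheResource> element, as in the following sketch (the policy name, key fragment, and variable name are illustrative):

    ```xml
    <LookupCache name="Lookup-Cached-Response">
        <!-- No <CacheResource> element, so the policy uses the shared cache. -->
        <CacheKey>
            <KeyFragment ref="request.uri" />
        </CacheKey>
        <Scope>Exclusive</Scope>
        <!-- Variable that receives the cached value on a hit. -->
        <AssignTo>cachedresponse</AssignTo>
    </LookupCache>
    ```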

  • Environment cache -- When you want to configure cache properties with values you choose, you can create an environment-scoped cache. For more about creating a cache, see Creating and editing an environment cache.

    When you create an environment cache, you configure its default properties. There is no limit to the number of caches you can create. You can have a caching policy use the environment cache by specifying the cache name in the policy's <CacheResource> element.
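    For example, a PopulateCache policy can target an environment cache by naming it in <CacheResource>, as in the following sketch (the cache name "mycache" and the other values are illustrative):

    ```xml
    <PopulateCache name="Populate-User-Data">
        <!-- "mycache" must match an environment cache you created. -->
        <CacheResource>mycache</CacheResource>
        <CacheKey>
            <KeyFragment ref="request.queryparam.id" />
        </CacheKey>
        <!-- Overrides the cache's default expiration for entries this policy writes. -->
        <ExpirySet>
            <TimeoutInSec>600</TimeoutInSec>
        </ExpirySet>
        <!-- Variable whose value is written to the cache. -->
        <Source>response.content</Source>
    </PopulateCache>
    ```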

The maximum size of each cached object is 512 KB.

In-memory and persistent cache levels

Both the shared and environment caches are built on a two-level system made up of an in-memory level and a persistent level. Policies interact with both levels as a combined framework -- the relationship between the levels is managed by the system.

  • Level 1 is an in-memory cache for fast access. Each message processing node has its own in-memory cache (implemented with Ehcache) for the fastest response to requests.
    • On each node, a certain percentage of memory is reserved for use by the cache.
    • As the memory limit is reached, Apigee Edge removes cache entries from memory (though they are still kept in the data store) to ensure that memory remains available for other processes.
    • Entries are removed in the order of time since last access, with the oldest entries removed first.
    • Each in-memory cache is also limited by the number of entries it can hold.
  • Level 2 is a persistent cache beneath the in-memory cache. All message processing nodes share a cache data store (implemented on Cassandra) for persisting cache entries.
    • Cache entries persist here as they're removed from in-memory caches, such as when in-memory limits are reached.
    • Because the persistent cache is shared across message processors, cache entries are available regardless of which node receives a request for the cached data.
    • This cache limits only the size of individual entries: an entry larger than the maximum cached-object size is not cached.
    • There is no limit on the number of cache entries. Entries are removed from the persistent cache only according to their expiration settings.
For HIPAA (Health Insurance Portability and Accountability Act) and Payment Card Industry (PCI) organizations, caching is in-memory only.
You might also be interested in Apigee Edge Caching In Detail, on the Apigee community.

How policies use the cache

The following describes how Apigee Edge handles cache entries as your caching policies do their work.

  1. When a policy writes to the cache (such as with the PopulateCache policy):
    1. The Apigee Edge system puts the entry in both the in-memory cache and the persistent cache.
    2. The system sends all other message processors a request to create a cache entry with the same key, cache name, and value. This keeps each node's in-memory cache consistent with the newly written data.
  2. When a policy reads from the cache (such as with the LookupCache policy):
    1. The system looks first in the in-memory cache for the entry.
    2. If there's no corresponding in-memory entry, the system looks for the entry in the persistent cache.
    3. If the entry isn't in the persistent cache either, fresh data is retrieved from the backend source and then cached. If multiple message processors request the same uncached data at the same time, each retrieves it from the source.
  3. When a policy invalidates a cache entry (such as with the InvalidateCache policy):
    1. The message processing node receiving the request sends a request to invalidate the entry on all other nodes (including routers and management servers).
    2. If one message processor invalidates a cache entry, the system attempts to broadcast the invalidation event to all other message processors.
      • If the broadcast succeeds, each receiving message processor removes the cached value, then retrieves fresh data from the source the next time it's requested.
      • If the broadcast doesn't succeed, the stale cache value remains in the other in-memory caches. Those message processors serve stale data until the entry's time to live expires, after which they retrieve a new copy of the value.
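The invalidation flow above is typically triggered by an InvalidateCache policy, as in the following sketch (the policy name, cache name, and key fragment are illustrative):

```xml
<InvalidateCache name="Invalidate-User-Data">
    <!-- Names the environment cache holding the entry to invalidate. -->
    <CacheResource>mycache</CacheResource>
    <CacheKey>
        <KeyFragment ref="request.queryparam.id" />
    </CacheKey>
    <PurgeChildEntries>false</PurgeChildEntries>
</InvalidateCache>
```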

Managing cache limits

Through configuration, you can manage some aspects of the cache.

The in-memory overall maximum is limited by system resources, and is not configurable. The overall capacity of the persistent cache is effectively unlimited, though the maximum size for each cached object is 512 KB.
  • In-memory (L1) cache. Memory limits for your cache are not configurable -- limits are set by Apigee for each message processor that hosts caches for multiple customers.

    In a hosted environment*, where in-memory caches for all customer deployments are hosted across multiple shared message processors, each processor features an Apigee-configurable memory percentage threshold to ensure that caching does not consume all of the application's memory. As the threshold is crossed for a given message processor, cache entries are evicted from memory on a least-recently-used basis. Entries evicted from memory remain in the persistent level of the cache.

  • Persistent (L2) cache. There are no limits on the number of entries in the cache. Entries evicted from the in-memory cache remain in the persistent cache in keeping with configurable time-to-live settings.

    Individual message processors retrieve data by reading from the data store L2 cache. If data is not in the cache, it's retrieved through a service callout, target endpoint response, and so on.

* In Edge for Private Cloud, you have finer-grained control over memory used for caching, including the maximum amount available. Although the settings can be changed, it's typically not necessary to alter the default configuration.
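As a sketch of the read-through pattern described above, a proxy flow can invoke the backend fetch only when the lookup misses both cache levels, by conditioning on the cachehit variable that a LookupCache policy sets (the step and policy names are illustrative):

```xml
<!-- In the proxy's request flow -->
<Step>
    <Name>Lookup-Cached-Response</Name>
</Step>
<Step>
    <!-- Runs only when the in-memory (L1) and persistent (L2) lookups both miss. -->
    <Condition>lookupcache.Lookup-Cached-Response.cachehit == false</Condition>
    <Name>Call-Backend-Service</Name>
</Step>
```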

Configurable optimizations

The following table lists settings you can use to optimize cache performance. You can specify values for these settings when you create a new environment cache, as described in Creating and editing an environment cache.

Note that you can also override cache-level settings in policy configuration. For example, you can configure the cache to skip caching items larger than 512 KB, then configure a Populate Cache policy or Response Cache policy to skip caching at 256 KB.

  • Maximum elements in memory -- Deprecated. Specifies the number of in-memory cache entries after which entries are evicted from memory (though they remain in the persistent level). This setting will be removed in a future release. Can be overridden in policy configuration: Yes.
  • Minimum size in kilobytes -- Specifies the entry size above which an entry is compressed before being added to the cache. Compression helps reduce the space that cached entries consume. Can be overridden in policy configuration: Yes.
  • Skip if element size in KB exceeds -- Specifies the entry size above which an entry will not be cached, which helps prevent caching overly large entries. The maximum size for a cached object is 512 KB. Can be overridden in policy configuration: Yes.
  • Expiration -- Specifies the time to live for cache entries. Can be overridden in policy configuration: Yes.

 
