
Concurrent Rate Limit policy

 

What

Throttles inbound connections from your API proxies running on Apigee Edge to your backend services.

Need help deciding which rate limiting policy to use? See Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies.

Where

This policy has specific placement requirements. It must be attached in both the Request and Response flows of the Target Endpoint. In addition, it is recommended that you place it in a DefaultFaultRule; this ensures that, in the event of an error, the rate limit counters are maintained correctly. A validation error will be thrown at deployment time if the policy is attached to any flows in the Proxy Endpoint. For details, see Usage notes.


Sample

<ConcurrentRatelimit name="ConnectionThrottler" >
   <AllowConnections count="200" ttl="5" />
   <Distributed>true</Distributed>
   <StrictOnTtl>false</StrictOnTtl>
   <TargetIdentifier name="MyTargetEndpoint"  ref="header/qparam/flow variables" /> 
</ConcurrentRatelimit>

Element reference

The element reference describes the elements and attributes of the ConcurrentRatelimit policy.

<ConcurrentRatelimit async="false" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">
   <DisplayName>Concurrent Rate Limit 1</DisplayName>
   <AllowConnections count="200" ttl="5"/>
   <Distributed>true</Distributed>
   <StrictOnTtl>false</StrictOnTtl>
   <TargetIdentifier name="default"></TargetIdentifier> 
</ConcurrentRatelimit>

<ConcurrentRatelimit> attributes

<ConcurrentRatelimit async="false" continueOnError="false" enabled="true" name="Concurrent-Rate-Limit-1">

The following attributes are common to all policy parent elements.

name

The internal name of the policy. Characters you can use in the name are restricted to: A-Z0-9._\-$ %. However, the Edge management UI enforces additional restrictions, such as automatically removing characters that are not alphanumeric.

Optionally, use the <DisplayName> element to label the policy in the management UI proxy editor with a different, natural-language name.

Default: N/A
Presence: Required

continueOnError

Set to false to return an error when a policy fails. This is expected behavior for most policies.

Set to true to have flow execution continue even after a policy fails.

Default: false
Presence: Optional

enabled

Set to true to enforce the policy.

Set to false to "turn off" the policy. The policy will not be enforced even if it remains attached to a flow.

Default: true
Presence: Optional

async

This attribute is deprecated.

Default: false
Presence: Deprecated

<DisplayName> element

Use in addition to the name attribute to label the policy in the management UI proxy editor with a different, natural-language name.

<DisplayName>Policy Display Name</DisplayName>
Default:

N/A

If you omit this element, the value of the policy's name attribute is used.

Presence: Optional
Type: String

 

<AllowConnections> element

Provides the number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time.

<AllowConnections count="200" ttl="5"/>
Default: N/A
Presence: Optional
Type: N/A

Attributes

count

Specifies the number of concurrent connections between Apigee Edge and a backend service that are allowed at any given time.

Default: N/A
Presence: Optional

ttl

Include to automatically decrement the counter after the number of seconds specified. This can help to clean up any connections that were not decremented properly in the response path.

Default: N/A
Presence: Optional

<Distributed> element

Specify whether counter values are shared across instances of Apigee Edge's server infrastructure.

<Distributed>true</Distributed>
Default: false
Presence: Optional
Type: Boolean

<StrictOnTtl> element

Set to true to honor the <AllowConnections> ttl attribute setting regardless of backend server throughput. Consider setting this property to true for high throughput or low latency backend services.

<StrictOnTtl>false</StrictOnTtl>
Default: false
Presence: Optional
Type: Boolean

<TargetIdentifier> element

Provides the name of the TargetEndpoint to which the throttling should be applied.

<TargetIdentifier name="default"></TargetIdentifier>
Default: N/A
Presence: Optional
Type: N/A

Attributes

name

Specifies the name of the TargetEndpoint to which the throttling should be applied.

Default: N/A
Presence: Optional

ref

A reference to a header, query parameter, or flow variable that contains the target identifier, as shown in the Sample above.

Default: N/A
Presence: Optional
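For illustration, a sketch of a policy that resolves the target identifier from a flow variable rather than a static name. The header name backend-id is hypothetical; any header, query parameter, or flow variable reference can be used:

```xml
<ConcurrentRatelimit name="ConnectionThrottler">
   <AllowConnections count="200" ttl="5"/>
   <Distributed>true</Distributed>
   <StrictOnTtl>false</StrictOnTtl>
   <!-- Resolve the target identifier at runtime from a request header (hypothetical) -->
   <TargetIdentifier name="MyTargetEndpoint" ref="request.header.backend-id"/>
</ConcurrentRatelimit>
```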

Flow variables

A set of predefined flow variables are populated each time the policy executes:

  • concurrentratelimit.{policy_name}.allowed.count
  • concurrentratelimit.{policy_name}.used.count
  • concurrentratelimit.{policy_name}.available.count
  • concurrentratelimit.{policy_name}.identifier
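These variables can be read like any other flow variable. As an illustrative sketch, an AssignMessage policy could surface the remaining connection capacity in a response header (the policy name AM-ReportCapacity and the header name are hypothetical; the rate limit policy is assumed to be named ConnectionThrottler):

```xml
<AssignMessage name="AM-ReportCapacity">
   <Set>
      <Headers>
         <!-- Read the available-connection count populated by the ConnectionThrottler policy -->
         <Header name="X-Connections-Available">{concurrentratelimit.ConnectionThrottler.available.count}</Header>
      </Headers>
   </Set>
   <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
   <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>
```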

Error codes

This section describes the error messages and flow variables that are set when this policy triggers an error. This information is important to know if you are developing fault rules for a proxy. To learn more, see What you need to know about policy errors and Fault handling.

Error code prefix

policies.concurrentratelimit

Runtime errors

These errors can occur when the policy executes.

Error name HTTP status Occurs when
ConcurrentRatelimtViolation 503 ConcurrentRatelimit connection exceeded. Connection limit : {0}

Note: The Error code shown on the right is correct, although it contains a misspelling ("limt"). Please be sure to use the code exactly as shown here when creating fault rules to trap this error. 

Deployment errors

Error name Occurs when
InvalidCountValue ConcurrentRatelimit invalid count value specified.
ConcurrentRatelimitStepAttachmentNotAllowedAtProxyEndpoint Concurrent Ratelimit policy {0} attachment is not allowed at proxy request/response/fault paths. This policy must be placed on the Target Endpoint.
ConcurrentRatelimitStepAttachmentMissingAtTargetEndpoint Concurrent Ratelimit policy {0} attachment is missing at target request/response/fault paths. This policy must be placed in the Target Request Preflow, Target Response Postflow, and DefaultFaultRule.
InvalidTTLForMessageTimeOut ConcurrentRatelimit invalid ttl value specified for message timeout. It must be a positive integer.

 

Fault variables

These variables are set when this policy triggers an error. For more information, see What you need to know about policy errors.

Variables set Where Example
[prefix].[policy_name].failed The fault variable [prefix] is concurrentratelimit. [policy_name] is the name of the policy that threw the error. concurrentratelimit.CRL-RateLimitPolicy.failed = true
fault.[error_name] [error_name] is the specific error name to check for, as listed in the table above. fault.name Matches "ConcurrentRatelimtViolation"

Note: The Error code shown in the example is correct, although it contains a misspelling ("limt"). Please be sure to use the code exactly as shown here when creating fault rules to trap this error. 

Example error response

If the rate limit is exceeded, the policy returns only an HTTP status 503 to the client.

Example fault rule

<FaultRules>
    <FaultRule name="Quota Errors">
        <Step>
            <Name>JavaScript-1</Name>
            <Condition>(fault.name Matches "ConcurrentRatelimtViolation") </Condition>
        </Step>
        <Condition>concurrentratelimit.CRL-RateLimitPolicy.failed=true</Condition>
    </FaultRule>
</FaultRules>
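Because the policy returns only a bare 503 by default, a fault rule step can build a friendlier response for the client. A minimal sketch using an AssignMessage policy; the policy name AM-RateLimitExceeded and the payload are illustrative:

```xml
<!-- Fault rule that runs the response-building policy when this error fires -->
<FaultRule name="Concurrent Rate Limit Errors">
    <Step>
        <Name>AM-RateLimitExceeded</Name>
        <Condition>(fault.name Matches "ConcurrentRatelimtViolation")</Condition>
    </Step>
</FaultRule>

<!-- Hypothetical AssignMessage policy that sets a JSON error body on the 503 -->
<AssignMessage name="AM-RateLimitExceeded">
   <Set>
      <Payload contentType="application/json">{"error": "Too many concurrent requests. Please retry later."}</Payload>
      <StatusCode>503</StatusCode>
      <ReasonPhrase>Service Unavailable</ReasonPhrase>
   </Set>
   <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
   <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>
```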

Schemas

See our GitHub repository samples for the most recent schemas.

Usage notes

In distributed environments, app traffic may be managed by many replicated API proxies. While each API proxy might be handling just a few connections, collectively, a set of API proxies, running in redundant message processors across multiple datacenters, all of which point to the same backend service, might swamp the capacity of that backend service. Use this policy to limit the aggregate traffic to a manageable number of connections.

When the number of requests being processed by Apigee Edge exceeds the connection limit configured in the policy, Apigee Edge will return the HTTP response code 503: Service Unavailable.

Policy attachment details

The ConcurrentRatelimit policy must be attached as a Step to three Flows on a TargetEndpoint: request, response, and DefaultFaultRule. (A validation error will be thrown at deployment time if the policy is attached to any other Flows, including any ProxyEndpoint Flows.) See also this Apigee Community post.

For example, to attach a ConcurrentRatelimit policy called ConnectionThrottler to a TargetEndpoint called MyTargetEndpoint, create the following TargetEndpoint configuration:

<TargetEndpoint name="MyTargetEndpoint">
  <DefaultFaultRule name="DefaultFaultRule">
    <Step><Name>ConnectionThrottler</Name></Step>
    <AlwaysEnforce>true</AlwaysEnforce>
  </DefaultFaultRule>
  <PostFlow name="PostFlow">
    <Response>
      <Step><Name>ConnectionThrottler</Name></Step>
    </Response>
  </PostFlow>
  <PreFlow name="PreFlow">
    <Request>
      <Step><Name>ConnectionThrottler</Name></Step>
    </Request>
  </PreFlow>
  <HTTPTargetConnection>
    <URL>http://api.mybackend.service.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>

Counters are reset when the API proxy is redeployed.

Related topics

Quota policy

Spike Arrest policy

Comparing Quota, Spike Arrest, and Concurrent Rate Limit Policies

