This guide demonstrates how to configure local rate limiting for HTTP requests destined to a target host that is part of an OSM managed service mesh.
Prerequisites
- Kubernetes cluster running Kubernetes v1.22.9 or greater.
- Have OSM installed.
- Have `kubectl` available to interact with the API server.
- Have the `osm` CLI available for managing the service mesh.
- OSM version >= v1.2.0.
Demo
The following demo shows a client sending HTTP requests to the `fortio` service. We will see the impact of applying local HTTP rate limiting policies targeting the `fortio` service to control the throughput of requests destined to the service backend.
- For simplicity, enable permissive traffic policy mode so that explicit SMI traffic access policies are not required for application connectivity within the mesh.
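Permissive mode can be toggled on the MeshConfig resource; a minimal sketch, assuming OSM's control plane is installed in the `osm-system` namespace with the default `osm-mesh-config` name:

```shell
# Enable permissive traffic policy mode on the MeshConfig
# (namespace and resource name are assumptions; adjust to your OSM install).
kubectl patch meshconfig osm-mesh-config -n osm-system \
  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' \
  --type=merge
```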
- Deploy the `fortio` HTTP service in the `demo` namespace after enrolling its namespace to the mesh. The `fortio` HTTP service runs on port `8080`. Confirm the `fortio` service pod is up and running.
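The enrollment and deployment might look like the following; the manifest filename and pod label are illustrative placeholders, not the exact names from the OSM samples:

```shell
# Enroll the demo namespace in the mesh, then deploy the fortio service.
kubectl create namespace demo
osm namespace add demo

# fortio.yaml is a placeholder for the fortio Deployment/Service manifest
# (serving HTTP on port 8080) used by this demo.
kubectl apply -n demo -f fortio.yaml

# Confirm the fortio pod is up and running (label is illustrative).
kubectl get pods -n demo -l app=fortio
```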
- Deploy the `fortio-client` app in the `demo` namespace. We will use this client to send HTTP requests to the `fortio` HTTP service deployed previously. Confirm the `fortio-client` pod is up and running.
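Deploying the client follows the same pattern; again, the manifest filename and label are illustrative:

```shell
# fortio-client.yaml is a placeholder for the client Deployment manifest.
kubectl apply -n demo -f fortio-client.yaml

# Confirm the fortio-client pod is up and running (label is illustrative).
kubectl get pods -n demo -l app=fortio-client
```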
- Confirm the `fortio-client` app is able to successfully make HTTP requests to the `fortio` HTTP service on port `8080`. We call the `fortio` service with 3 concurrent connections (`-c 3`) and send 10 requests (`-n 10`). All the HTTP requests from the `fortio-client` pod succeeded:

  Code 200 : 10 (100.0 %)
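The call above can be issued from inside the client pod; a sketch, assuming a `fortio-client` Deployment with a container named `fortio-client`:

```shell
# Send 10 HTTP requests over 3 concurrent connections to the fortio service.
# (Deployment and container names are assumptions for this sketch.)
kubectl exec -n demo deploy/fortio-client -c fortio-client -- \
  fortio load -c 3 -n 10 http://fortio.demo.svc.cluster.local:8080
```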
- Next, apply a local rate limiting policy to rate limit HTTP requests at the virtual host level to 3 requests per minute. Confirm no HTTP requests have been rate limited yet by examining the stats on the `fortio` backend pod.
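A sketch of such a virtual-host level policy, assuming the `policy.openservicemesh.io/v1alpha1` `UpstreamTrafficSetting` API with a local HTTP rate limit section (field names are assumptions; check the OSM API reference for your version):

```shell
# Apply a local rate limit of 3 requests/minute at the virtual host level.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: http-rate-limit
  namespace: demo
spec:
  host: fortio.demo.svc.cluster.local
  rateLimit:
    local:
      http:
        requests: 3
        unit: minute
EOF
```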
- Confirm HTTP requests are rate limited. Only 3 out of 10 HTTP requests succeeded, while the remaining 7 requests were rate limited with a `429 (Too Many Requests)` response as per the rate limiting policy:

  Code 200 : 3 (30.0 %)
  Code 429 : 7 (70.0 %)

  Examine the stats to further confirm this.
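Re-running the load test and checking the backend's Envoy stats might look like this; the `$FORTIO_POD` variable and the stat name are assumptions (the stat is based on Envoy's local rate limit filter naming):

```shell
# Re-run the load test; with a 3 requests/minute limit, expect mostly 429s.
kubectl exec -n demo deploy/fortio-client -c fortio-client -- \
  fortio load -c 3 -n 10 http://fortio.demo.svc.cluster.local:8080

# Inspect rate-limit stats on the fortio backend pod.
osm proxy get stats "$FORTIO_POD" -n demo | grep http_local_rate_limit
```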
- Next, let's update our rate limiting policy to allow a burst of requests. Bursts allow a given number of requests over the baseline rate of 3 requests per minute defined by our rate limiting policy.
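Adding a burst allowance might look like the following; the `burst` field name is an assumption about the OSM policy API:

```shell
# Update the policy to allow a burst of 10 requests over the baseline rate.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: http-rate-limit
  namespace: demo
spec:
  host: fortio.demo.svc.cluster.local
  rateLimit:
    local:
      http:
        requests: 3
        unit: minute
        burst: 10
EOF
```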
- Confirm the burst capability allows a burst of requests within a small window of time. All HTTP requests succeeded as we allowed a burst of 10 requests with our rate limiting policy:

  Code 200 : 10 (100.0 %)

  Further, examine the stats to confirm the burst allows additional requests to go through. The number of requests rate limited hasn't increased since our previous rate limit test before we configured the burst setting.
- Next, let's configure the rate limiting policy for a specific HTTP route allowed on the upstream service.

  Note: Since we are using permissive traffic policy mode in the demo, an HTTP route with a wildcard path regex `.*` is allowed on the upstream backend, so we will configure a rate limiting policy for this route. However, when using SMI policies in the mesh, paths corresponding to matching allowed SMI HTTP routing rules can be configured.
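A per-route policy might be expressed as follows; the `httpRoutes` schema is an assumption about the `UpstreamTrafficSetting` API (verify against the OSM API reference for your version):

```shell
# Rate limit the wildcard HTTP route on the fortio backend to 3 requests/minute.
kubectl apply -f - <<EOF
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: http-rate-limit
  namespace: demo
spec:
  host: fortio.demo.svc.cluster.local
  httpRoutes:
    - path: .*
      rateLimit:
        local:
          requests: 3
          unit: minute
EOF
```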
- Confirm HTTP requests are rate limited at a per-route level. Only 3 out of 10 HTTP requests succeeded, while the remaining 7 requests were rate limited as per the rate limiting policy:

  Code 200 : 3 (30.0 %)
  Code 429 : 7 (70.0 %)

  Examine the stats to further confirm this. 7 additional requests have been rate limited after configuring HTTP route level rate limiting since our previous test, indicated by the total of 14 HTTP requests rate limited in the stats.
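The stats check can be sketched as below; the pod label and the grep pattern are illustrative assumptions:

```shell
# Look up the fortio backend pod and query its Envoy rate-limit stats;
# after this test the cumulative rate-limited count should read 14.
FORTIO_POD=$(kubectl get pods -n demo -l app=fortio \
  -o jsonpath='{.items[0].metadata.name}')
osm proxy get stats "$FORTIO_POD" -n demo | grep rate_limit
```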