API Gateway recently launched regional endpoints, a deceptively simple feature with important implications:
- lower latency for clients located in the same AWS region (e.g. running on EC2 or in Lambda)
- ability to manage your own CloudFront distribution or WAF for your API
- ability to manage DNS routing for your custom domain name
In my opinion, the biggest win here is the ability to integrate Route53 DNS routing with your REST APIs. If you replicate your APIs to multiple regions (using OpenAPI import, for example), you can take advantage of powerful Route53 features such as latency-based routing, regional failover, and blue-green deployments.
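As a sketch of the DNS side, here is what a latency-based Route53 change batch for two regional copies of the same API might look like. This assumes boto3; the domain name, hosted zone IDs, and regional target domains are hypothetical placeholders — in practice you would use the `regionalDomainName` and `regionalHostedZoneId` returned by API Gateway for each region's custom domain.

```python
import json

# Hypothetical values — substitute your own hosted zone and custom domain.
HOSTED_ZONE_ID = "ZEXAMPLE12345"
API_DOMAIN = "api.example.com"

def latency_record(region, target_domain, target_zone_id):
    """Build one latency-based alias record pointing at a regional endpoint."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": API_DOMAIN,
            "Type": "A",
            "SetIdentifier": region,  # must be unique across the record set
            "Region": region,         # presence of Region enables latency routing
            "AliasTarget": {
                "DNSName": target_domain,        # regionalDomainName from API Gateway
                "HostedZoneId": target_zone_id,  # regionalHostedZoneId from API Gateway
                "EvaluateTargetHealth": True,
            },
        },
    }

change_batch = {
    "Changes": [
        latency_record("us-east-1", "d-aaaa.execute-api.us-east-1.amazonaws.com", "ZREGION1EXAMPLE"),
        latency_record("eu-west-1", "d-bbbb.execute-api.eu-west-1.amazonaws.com", "ZREGION2EXAMPLE"),
    ]
}

# With boto3, this batch would be applied as:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId=HOSTED_ZONE_ID, ChangeBatch=change_batch)
print(json.dumps(change_batch, indent=2))
```

Route53 then answers each client's DNS query with the region that has the lowest measured latency to that client; swapping `Region` for `Failover` or weighted fields gives you the failover and blue-green variants.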
There are distinct advantages for both options. Here’s my personal take on when to use each:
When to use regional endpoints:
- your clients are predominantly located in the same AWS region (e.g. running on EC2 or in Lambda)
- you want to manage your own CloudFront distribution and use CloudFront features such as custom routing rules, edge caching, WAF, Lambda@Edge, etc.
- you want to take advantage of DNS routing for your custom domain name
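For the regional case, the custom domain setup is a single API call. The sketch below assumes boto3; the domain name and certificate ARN are placeholders. One detail worth knowing: a regional custom domain requires its ACM certificate to live in the same region as the API, whereas an edge-optimized domain requires the certificate in us-east-1.

```python
# Request parameters for apigateway.create_domain_name (regional endpoint).
# All identifiers below are hypothetical placeholders.
params = {
    "domainName": "api.example.com",
    "regionalCertificateArn": (
        "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"
    ),
    "endpointConfiguration": {"types": ["REGIONAL"]},
}

# With boto3 this would be applied as:
#   apigw = boto3.client("apigateway")
#   resp = apigw.create_domain_name(**params)
# resp["regionalDomainName"] is the target you alias to from Route53.
print(params["domainName"], params["endpointConfiguration"]["types"])
```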
When to use edge-optimized endpoints:
- you have geographically distributed clients
- you don’t want to pay for and manage your own CloudFront distribution
A note on latency benchmarking:
A common pattern I’ve seen is for developers to conduct performance tests against API Gateway with traffic originating from a single AWS region, or worse, from a single development machine. These tests will likely favor regional endpoints. However, keep in mind that if you have geographically distributed clients, such synthetic tests will not represent the real client experience. The best way to truly measure this is to track client-side latency metrics from your actual API clients.
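A minimal sketch of client-side measurement: wrap each API call with a timer, collect the samples, and report percentiles rather than averages (tail latency is what users actually feel). The `percentile` helper and the synthetic samples below are illustrative; in a real client the samples would come from timed calls to your API.

```python
import math
import random
import time

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (p in 0..100)."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(0, k)]

# In a real client you would time each request, e.g.:
#   start = time.perf_counter()
#   call_api()  # hypothetical API call
#   latencies.append((time.perf_counter() - start) * 1000)
# Synthetic samples keep this sketch self-contained.
random.seed(0)
latencies = [random.gauss(120, 30) for _ in range(1000)]

print(f"p50={percentile(latencies, 50):.1f}ms p99={percentile(latencies, 99):.1f}ms")
```

Shipping these per-client percentiles (tagged by client geography) back to your metrics system is what lets you compare regional vs edge-optimized endpoints on real traffic.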
Congrats to the API Gateway team on a very important release.