
Making API calls at least 5x faster

The TCP/DNS latency problem

Are you making thousands of back-end requests per hour and running into latency or timeout issues?

Most back-end scripts and SDKs use short-lived HTTP/1.1 requests to the RPC node. Because most Linux / Docker / K8S setups do not cache external DNS responses, this slows down each API request by adding:

  • extra DNS resolutions for A and AAAA records
  • extra TCP handshake

Using HTTP/2 or HTTP/1.1 connection pooling eliminates these extra steps and lets you easily make thousands of requests per second. Custom REST clients can usually be modified to use connection pooling with little effort.
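The effect can be sketched with Python's standard library alone: a persistent http.client.HTTPConnection pays the DNS lookup and TCP handshake once and then reuses the same socket for every call. This is a minimal illustration, not production code; the throwaway local server merely stands in for an RPC node.

```python
# HTTP/1.1 keep-alive / connection reuse using only the standard library.
# The local server below stands in for an RPC node; the client opens ONE
# TCP connection and reuses it, so DNS + handshake are paid only once.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class FakeRpcHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # required for keep-alive

    def do_GET(self):
        body = b'{"status":"ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), FakeRpcHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One persistent connection, many requests over the same socket.
conn = http.client.HTTPConnection("127.0.0.1", port)
sockets = []
for _ in range(3):
    conn.request("GET", "/health")
    resp = conn.getresponse()
    data = resp.read()  # drain the body before reusing the connection
    sockets.append(id(conn.sock))

print(data, "reused one socket:", len(set(sockets)) == 1)
conn.close()
server.shutdown()
```

A pooled SDK or a local reverse proxy (below) does the same thing transparently for every request.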

Local API proxy

Consider running a local reverse proxy to speed up back-end clients that use the Algorand SDK.

The proxy speeds things up by:

  • Resolving DNS only once for thousands of API calls
  • Keeping TCP connections alive and reusing them

The proxy also increases availability by:

  • Proactively monitoring Cloudflare and non-Cloudflare Nodely endpoints
  • Falling back to the closest non-Cloudflare region in case of a global Cloudflare outage

The result: a huge speedup, less CPU usage, faster responses, and no DNS timeouts.

Caddy reverse proxy using Docker

Caddy config file

Create a file named Caddyfile with the following contents:

# rev 1.0 @ 2025-08-23 - initial release
(log) {
    log {
        output stdout
        format json
    }
}
(nodely-rev) {
    health_uri /health
    health_interval 5s
    health_timeout 2s
    health_status 200
    lb_try_duration 1000ms
    lb_retries 3
    lb_retry_match {
        method "GET" "HEAD" "OPTIONS" "POST"
    }
    lb_policy first
    lb_try_interval 50ms
    transport http {
        dial_timeout 1500ms
        response_header_timeout 5500ms
        keepalive 30s
        max_conns_per_host 50
    }
}
# Node API endpoint (http://localhost:8801)
:8801 {
    bind 127.0.0.1
    import log
    reverse_proxy /* {
        to https://mainnet-api.4160.nodely.io https://mainnet-api.algonode.network
        header_up Host mainnet-api.4160.nodely.io
        import nodely-rev
    }
}
# Indexer API endpoint (http://localhost:8802)
:8802 {
    bind 127.0.0.1
    import log
    reverse_proxy /* {
        to https://mainnet-idx.4160.nodely.io https://mainnet-idx.algonode.network
        header_up Host mainnet-idx.4160.nodely.io
        import nodely-rev
    }
}
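The same snippets work for other networks; only the upstream hostnames change. A testnet block might look like the sketch below (the testnet hostnames here are assumed to follow the mainnet naming pattern — verify them against Nodely's endpoint list before use):

# Testnet node API endpoint (http://localhost:8803) - hostnames assumed, verify first
:8803 {
    bind 127.0.0.1
    import log
    reverse_proxy /* {
        to https://testnet-api.4160.nodely.io https://testnet-api.algonode.network
        header_up Host testnet-api.4160.nodely.io
        import nodely-rev
    }
}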

Run the rev-proxy using Docker compose

Create a docker-compose.yaml file with the following contents:

services:
  nodelyproxy:
    container_name: nodelyproxy
    image: caddy
    volumes:
      - type: bind
        source: ./Caddyfile
        target: /etc/caddy/Caddyfile
    restart: always
    deploy:
      resources:
        limits:
          memory: "1G"
    environment:
      - 'GOMEMLIMIT=512MiB'
      - 'GOMAXPROCS=2'
    network_mode: host

You can modify the compose file to run in a separate Docker network and map the ports differently. Running docker compose up -d will start the rev-proxy on ports 8801 and 8802.

Using the rev-proxy

You can then point your client code to use

  • http://localhost:8801 for Algod (node) API
  • http://localhost:8802 for Indexer API

curl http://localhost:8802/health

Hints:

  • You can remove the "bind 127.0.0.1" directive to make the proxy listen on your public/external IP address
  • You can include this service as a sidecar in your K8S deployments or docker-compose setups

Docker compose sidecar example

services:
  nodelyproxy:
    container_name: nodelyproxy
    image: caddy
    volumes:
      - type: bind
        source: ./Caddyfile
        target: /etc/caddy/Caddyfile
    restart: always
    deploy:
      resources:
        limits:
          memory: "1G"
    environment:
      - 'GOMEMLIMIT=512MiB'
      - 'GOMAXPROCS=2'
    networks:
      - example_network
  client:
    image: curlimages/curl:latest
    container_name: client_curl
    networks:
      - example_network
    depends_on:
      - nodelyproxy
    command: >
      sh -c "
      while true; do
        echo '--- Making indexer request at: ' && date &&
        curl -s http://nodelyproxy:8802/health &&
        echo ''
        sleep 1
      done
      "

networks:
  example_network: