AlgoNode FAQ
Can our project just use the free endpoints?
Are the endpoints really free? (Fair Usage Policy)
Sure they are, as long as you are happy with our Fair Usage Policy:
- 50 requests per source IP address (per session)
- 5GB worth of responses per project per day (starting 1st of Jan 2024)
- 1000 requests per second per project
- a soft limit of 1 million requests per month for no-attribution projects (see below)
- a hard limit of 10 million requests per month per project.
The soft limit does not apply if you put "Powered by AlgoNode.io" in the footer of your dApp. Soft & hard limits do not apply to our commercial packages.
Limits, limits, limits
We reserve the right to return error code 429 or even blacklist back-end requests if they are too heavy.
How to avoid getting my server-based or serverless solution blacklisted?
- do not query the same immutable data multiple times - cache the results and only ask for fresh data (use the `min-round` param)
- do not make the identical query multiple times a second - wait for a new round
- do not make hundreds of identical requests from multiple servers/IP addresses in the same round
- use exponential backoff - wait twice as long as last time before making a new request if the last one returned an error (404, 429, 501-503); see the sketch after this list
- use a rate limiter on your end - get yourself a rate-limiting HTTP client or library and be nice to your free API provider
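As a minimal sketch of the backoff advice (assuming `curl` and a POSIX shell; the URL is a placeholder for whichever endpoint you use):

```sh
#!/bin/sh
# Retry with exponential backoff: double the wait after every retryable error.
URL="https://mainnet-api.example.com/v2/status"   # placeholder endpoint
delay=1
while :; do
  code=$(curl -s -o /tmp/resp.json -w '%{http_code}' "$URL")
  case "$code" in
    404|429|501|502|503)
      [ "$delay" -gt 64 ] && { echo "giving up" >&2; exit 1; }
      sleep "$delay"
      delay=$((delay * 2))   # wait twice as long as last time
      ;;
    *)
      break                  # success (or a non-retryable error): stop retrying
      ;;
  esac
done
cat /tmp/resp.json
```

A rate-limiting HTTP client in your SDK of choice achieves the same thing with less ceremony.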
Does backend traffic count towards the soft/hard limits?
Nope, not at the moment. We route backend traffic towards more powerful regions. This increases latency for backend requests but makes room for low-latency frontend traffic.
Do we need to contact AlgoNode before we go live?
Nope. While not strictly needed, it pays off to give us a heads-up. We'll gladly help you (free of charge) make a successful transition to the mainnet. We can monitor the public dashboard during the launch so that you can focus on your own backend. If you are a huge success and our limits negatively impact the experience, we can issue you temporary tokens that boost the limits tenfold.
What kind of projects use the free endpoints?
We have users that are:
- NFT teams
- DeFi
- GameFi
- Asset/NFT management
- R&D individuals
AlgoNode helps projects of all sizes - from 0 total users to thousands of daily active users.
There are paid options - do we need them?
Projects of all sizes use our free endpoints. The primary goal of AlgoNode is to help teams run their own API infrastructure.
Until that happens, teams decide on the paid packages when:
- they need a guarantee (SLA) and a real phone number to call in case something breaks.
- they would like to hide the API call logs from the public dashboard
- they need dedicated resources that have known performance characteristics
- they need endpoints located physically next to their back-end
But mostly because AlgoNode has helped them become profitable and now they can afford it 🎉
Nothing is free really…
Free endpoints exist for the following main reasons:
- The AlgoNode team has been together since 2006 but is very new to the blockchain world. We want to learn it fast and help the community in the process.
- We've decided to focus exclusively on Algorand, but we're not waiting for it to become #1. We help dev teams to make sure it happens fast - in 50ms or faster ;)
- We need a place to test our crazy ideas (API patches, infra config, DB backends)
We plan for a return on this investment - without changing the rules.
Custom endpoints?
Are there endpoints with extra functionality?
There are, but only for commercial customers. We also charge them an arm and a leg so they think twice before ordering. Our team hates vendor lock-in, and one way to prevent it is to focus on the vanilla API. Our API might be faster and less resource-intensive, but all the patches and designs are made public. Web 3.0 is about decentralization - dApps should be able to switch between API providers and private nodes. Custom endpoints break that and hinder decentralization.
But we need custom endpoints
Custom endpoints are 128 USD/mo each. Still interested? Drop us an email
Any examples of custom endpoints?
- Endpoints that are deprecated elsewhere
- Streaming endpoints to avoid polling (new blocks, filtered TXNs)
- Analytical endpoints
- FTLBlock™ endpoints (for arbitrage bots)
Who are your investors?
Who is funding all this?
We have no investors. The team pays all the bills.
Are you going to disappear soon?
Current infrastructure is secured with long-term contracts, so no danger here. AlgoNode got a developer award from the Algorand Foundation for our contributions, but the business model does not depend on grants. Any extra funding that we might get will just speed up the deployment of our crazy ideas and result in more open-source tools.
Algorand FAQ
Relay/Archive/Catchup/Participation node
What is a Catchup node?
Algorand allows everyone to run a node that is in sync with the blockchain and has full account state data but not the full history - just the most recent 1000 (or 320) blocks.
When you run a node with the default config, it will start building the account state from block ZERO and then delete the history, keeping only the recent blocks.
There is an operation called catchup that tells the node to download a recent snapshot of the accounts. This allows you to skip the 4-week full sync :)
To do a fast catchup on a mainnet node just issue this command:
goal node catchup $(curl -s https://algorand-catchpoints.s3.us-east-2.amazonaws.com/channel/mainnet/latest.catchpoint)
The snapshot is just a hash of the state at a particular block; the actual data will come from a random relay node.
A catchup node is great for getting the latest block data and posting new transactions to the network. Access to the full block history requires an archival node. Searching by transaction or using advanced filters requires running an indexer.
Catchup nodes are very light on CPU and need only ~12GB of disk space (as of Apr 2022).
What is a participation node?
A catchup node does not participate in the voting process. For that, one needs to generate participation keys for an account that holds some amount of Algo.
Once the keys are onlined on a catchup node, it becomes a participation node and is given a chance to vote proportionally to the amount of Algo in the account.
Participation keys need to be renewed after a time - so this is not maintenance-free (see the sketch below).
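A rough sketch of the key ceremony (the address and round range are placeholders; double-check the current flags with `goal account addpartkey -h`):

```sh
# Generate a participation key valid for a window of rounds (placeholder values)
goal account addpartkey -a YOUR_ADDRESS --roundFirstValid 20000000 --roundLastValid 23000000

# Send the "online" transaction so the account starts voting with that key
goal account changeonlinestatus --address YOUR_ADDRESS --online=true
```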
A participation node can even be run on a Raspberry Pi 4. Not sure this will hold for the 10k TXN/s upgrade.
Here are some links on the subject:
- https://www.reddit.com/r/AlgorandOfficial/comments/p9dv17/guide_algorand_participation_node_using_a/
- https://betterprogramming.pub/running-an-algorand-node-in-the-cloud-a3e320f4e864
- https://mcgilldevtech.com/2021/05/run-an-algorand-participation-node/
- https://algod.network/algorand-generate-participation-key-64d57c566e67
You can monitor your participation with https://app.metrika.co/
What is an Archival node?
When you set `Archival: true` in `config.json`, you get an archival node (after 2-4 weeks of syncing). This mode just does not delete old blocks.
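A minimal `config.json` for this (merge the key into your existing config rather than overwriting the file):

```json
{
  "Archival": true
}
```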
You cannot fast-track the process with a "full catchup" - no such thing exists. If you try doing a catchup on an archival node, it will skip downloading the full history.
The syncing process is VERY I/O intensive. A fast SSD or NVMe disk is required. But even with an SSD, the node might never sync if:
- your SSD is connected via USB instead of SATA or PCIe
- your SSD/NVMe has no heat sink - it overheats and slows down
- you are running on a virtual machine that adds I/O latency (KVM, a VM on a NAS, cloud without accelerated I/O)
- you are running on a "cloud volume" that has I/O limits or high latency
Run `iostat -x 1` and `iostat -x 30` to check whether your disk is above 80% utilization and thus slowing down the sync. Also, `ioping /dev/yourdevice` should report 50 to 200 microseconds (0.05 to 0.2 milliseconds) for the full sync to work.
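For example (the device name is a placeholder - find yours with `lsblk`):

```sh
# Extended disk stats every 30s; a %util column near 100% means the disk is the bottleneck
iostat -x 30

# Raw access latency of the disk holding the node data; expect 0.05-0.2 ms
ioping /dev/nvme0n1
```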
It also takes ~1TB of disk space for the full archive. See this handy site for real-time space requirements.
You can interrupt and resume the sync process at any time - no worries.
You can confirm that the node is in sync by running `goal node status | grep Sync`. Your node is synced if the sync time says 0.0s.
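Once synced, the output should look roughly like this (exact wording may vary between goal versions):

```sh
$ goal node status | grep Sync
Sync Time: 0.0s
```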
An archival algod node only provides the Node API, not the Indexer API. If you need the Indexer API, you need BOTH: an archival node and an indexer with a PostgreSQL server running close to one another (even on the same machine).
But I cannot wait 4 weeks, this is an emergency!
In that case you can download an untrusted snapshot from our archive.
Just read the readme file and install the PIXZ utility first.
But I only have a VM or a slower SSD!
Same deal as above. This might work for you, as the node will only need to sync the most recent day of blocks, which should (hopefully) take less than a day.
Just read the readme file and install the PIXZ utility first.
What is a Relay node?
Relay nodes are just relays - they pass blocks to and from other nodes. They do not participate in the consensus but are vital to the exchange of blocks. Catchup and Archival nodes get new blocks from Relay nodes. There is a default, permissioned list of relay nodes that are known to every other node. Running a relay node does not put you on the public list - one needs to apply for a spot by contacting The Algorand Foundation.
Catchup/Archival nodes connect to at least 4 random relay nodes. Your archival node syncing process might suffer if you connect to nodes that are far away.
If you want to make sure that you connect only to relays that are close to you, you can set the `DNSBootstrapID` parameter in the node's `config.json` to one of the values from the table below.
Connecting to the closest relays will not make your node see new blocks faster - it affects only the syncing process.
This list is not endorsed by the Foundation.
The list is managed by AlgoNode and is based on sampling of relay response times from all Vultr datacenters.
The returned list of relays is a subset of the official relay list maintained by The Algorand Foundation. Whenever they decommission a node, it will automatically stop resolving in the AlgoNode-managed list, as the entries are just pointers to the official entries.
Sampling script available here
Alternative relay catalog
Pick DNSBootstrapID that is closest to you.
| Country | City | DNSBootstrapID |
|---|---|---|
| AU | Melbourne | melbourne.au.r6a.algorand-mainnet.algonode.global. |
| CA | Toronto | toronto.ca.r6a.algorand-mainnet.algonode.global. |
| DE | Frankfurt | de.r6a.algorand-mainnet.algonode.global. |
| FR | Paris | fr.r6a.algorand-mainnet.algonode.global. |
| GB | London | gb.r6a.algorand-mainnet.algonode.global. |
| IN | Mumbai | in.r6a.algorand-mainnet.algonode.global. |
| JP | Tokyo | jp.r6a.algorand-mainnet.algonode.global. |
| KR | Seoul | kr.r6a.algorand-mainnet.algonode.global. |
| MX | Mexico | mx.r6a.algorand-mainnet.algonode.global. |
| NL | Amsterdam | nl.r6a.algorand-mainnet.algonode.global. |
| PL | Warsaw | pl.r6a.algorand-mainnet.algonode.global. |
| SE | Stockholm | se.r6a.algorand-mainnet.algonode.global. |
| SG | Singapore | sg.r6a.algorand-mainnet.algonode.global. |
| US | Atlanta | atlanta.us.r6a.algorand-mainnet.algonode.global. |
| US | Chicago | chicago.us.r6a.algorand-mainnet.algonode.global. |
| US | Dallas | dallas.us.r6a.algorand-mainnet.algonode.global. |
| US | Honolulu | honolulu.us.r6a.algorand-mainnet.algonode.global. |
| US | Los Angeles | losangeles.us.r6a.algorand-mainnet.algonode.global. |
| US | Miami | miami.us.r6a.algorand-mainnet.algonode.global. |
| US | New Jersey | newjersey.us.r6a.algorand-mainnet.algonode.global. |
| US | Seattle | seattle.us.r6a.algorand-mainnet.algonode.global. |
Example:
{
"DNSBootstrapID": "gb.r6a.algorand-mainnet.algonode.global."
}
You might also want a larger pool than 6 relays. You can substitute r6a with r8a or r12a to use the 8 or 12 relays closest to you.
| DNS part | Relays |
|---|---|
| r6a | 6 |
| r8a | 8 |
| r12a | 12 |
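For example, to use the 12 closest relays via the London entry:

```json
{
  "DNSBootstrapID": "gb.r12a.algorand-mainnet.algonode.global."
}
```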
Node syncing issues
Read the section on archival nodes above or ask on our Discord channel.
Indexer
Do I need the Indexer?
The Node API is best for accessing the state of a single object - an account, an app, a block, or a recently posted transaction.
Most dApps do need the Indexer API, as it provides endpoints with search and filtering capabilities that return multiple matching accounts, transactions, etc.
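For example (the host is a placeholder for any standard Algorand indexer endpoint, and `YOUR_ADDRESS` is whatever account you care about):

```sh
# Search the 10 most recent transactions involving an address; min-round lets you
# skip data you have already cached (see the Fair Usage advice above)
curl -s "https://mainnet-idx.example.com/v2/transactions?address=YOUR_ADDRESS&min-round=30000000&limit=10"
```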
The indexer requires a full archival node close by and an even faster disk to fully sync.
Here are some indexer-related links:
Hmm, the Internet seems to be missing an indexer setup how-to.
We think we need to create one, plus develop a virtual node that would allow fast sync from the algonode.io free endpoints without the need for a local archival node.
Do I get paid for running ….. node?
Nope, you just get this warm fuzzy feeling when you participate in the decentralization.