Ankr’s RPC Performance Advantage: Load Balancing
Kevin Dwyer
April 6, 2023
8 min read
Ankr’s Web3 API users are often curious about the inner workings of our infrastructure and how we serve RPC requests to 30+ blockchains. This article explains one of the performance advantages users benefit from: Ankr’s unique approach to node load balancing. We’ll look at what it is, how it works, and how it’s different.
What Is a Load Balancer?
Load balancers are networking tools used to evenly distribute (balance) incoming network traffic among multiple servers, or in Ankr’s case, blockchain nodes. They are very useful in optimizing resource utilization, increasing application availability, and improving overall system performance. Ankr’s load balancers take incoming RPC requests and direct them to the best-suited node that can serve the request in the least amount of time possible.
Ankr’s Load Balancing Architecture for Our Web3 API Service
One of Ankr’s defining advantages is that we don’t rely on a single centralized RPC gateway or load balancer. Instead, Ankr employs an entire network of geo-distributed load balancers for enhanced performance. This approach shortens the distance a request has to travel to reach a load balancer, since one will always be close to the client; in other words, it reduces overall latency by decreasing transmission times. For example, even if data traveled in a straight line at the speed of light for its entire journey (improbable), a request from a client in Newcastle, Australia, would reach a load balancer in Sydney approximately 112x faster than one in Virginia in the US, simply because Sydney is roughly 112 times closer.
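To put rough numbers on that intuition, here is a back-of-the-envelope calculation of the physical lower bound on one-way latency. The distances are approximate great-circle figures chosen only to illustrate the point.

```typescript
// Back-of-the-envelope latency floor, assuming a straight-line path at the
// speed of light in a vacuum (~300,000 km/s). Distances are approximate
// great-circle figures used purely for illustration.

const SPEED_OF_LIGHT_KM_PER_MS = 300; // 300,000 km/s = 300 km per millisecond

function oneWayLatencyMs(distanceKm: number): number {
  return distanceKm / SPEED_OF_LIGHT_KM_PER_MS;
}

const toSydneyKm = 140;     // Newcastle, AU -> Sydney (approx.)
const toVirginiaKm = 15700; // Newcastle, AU -> Virginia, US (approx.)

console.log(oneWayLatencyMs(toSydneyKm).toFixed(2));   // ~0.47 ms
console.log(oneWayLatencyMs(toVirginiaKm).toFixed(2)); // ~52.33 ms
console.log((toVirginiaKm / toSydneyKm).toFixed(0));   // ~112x farther
```

Real-world latency is always higher (fiber paths are indirect and light travels more slowly in glass), but the distance ratio is what makes geo-distribution matter.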
Ankr’s load balancing methods:
- Directs RPC requests to nodes close to users that are synced to the current blockheight
- Directs requests requiring archive functionality to archive nodes
- Delivers lower-latency results
- Powers faster blockchain interactions
Load Balancing To Provide Global Quality and Performance
When a client initiates an RPC request, the request is first sent to the load balancer, which then routes the request to the appropriate node based on the load balancing algorithm and node availability. In this case, the load balancer is activated at the beginning of RPC request processing.
As Ankr has nodes and load balancers running in 40+ data centers across more global regions than any other provider, we have the unique ability to serve RPC requests closer to their point of origin. The idea behind geo-based routing is to minimize network latency and improve the overall performance of the system by reducing the time it takes for data to travel between the client and the server.
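As a conceptual illustration of geo-based routing (not a description of Ankr’s internal implementation), the sketch below picks the load-balancer region with the smallest great-circle distance to the client, which usually corresponds to the lowest network latency. The region names and coordinates are placeholder assumptions.

```typescript
// Illustrative geo-routing sketch: choose the nearest region by great-circle
// distance. Region names and coordinates are placeholders, not Ankr's map.

interface Region { name: string; lat: number; lon: number }

function haversineKm(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function nearestRegion(clientLat: number, clientLon: number, regions: Region[]): Region {
  // Pick the region whose data center is closest to the client.
  return regions.reduce((best, r) =>
    haversineKm(clientLat, clientLon, r.lat, r.lon) <
    haversineKm(clientLat, clientLon, best.lat, best.lon)
      ? r
      : best
  );
}

// A client near Manila would be routed to a Southeast Asian region, not a US one.
const regions: Region[] = [
  { name: "singapore", lat: 1.35, lon: 103.82 },
  { name: "frankfurt", lat: 50.11, lon: 8.68 },
  { name: "us-east", lat: 38.9, lon: -77.0 },
];
console.log(nearestRegion(14.6, 120.98, regions).name); // -> singapore
```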
An Example of RPC Request Routing With Blockchain Interactions
To understand the concept better, we can take a look at an example of how a common scenario might work using Ankr’s RPC infrastructure:
Transaction Initiation: A person in the Philippines named Nicole wants to make a trade on a decentralized exchange based in Manila. For this example, we will assume it’s an Ethereum DEX. Nicole finds a pool on the DEX that she wants to use to trade her ETH for ANKR tokens (ERC-20). The transaction details look favorable, so she submits the trade order and confirms it with her MetaMask wallet. This will initiate a JSON-RPC request on the backend using the eth_sendTransaction request method. If the DEX is using one of Ankr’s Ethereum RPC endpoints in their code, such as https://rpc.ankr.com/eth, the request will go through Ankr’s load balancer in the following process to find a node to serve it quickly:
- Node Health: As the load balancer receives the request, it immediately begins analyzing which node is best suited to serve it. The load balancer scores nodes on a scale of 1-10 based on whether they are synced with the blockchain or behind in blocks. If a node is behind by a critical margin, or does not respond to blockheight queries, the load balancer will not assign it any RPC traffic.
- Geographic Location: The load balancer selects the quickest node to respond to a request. To do so, the load balancer instance nearest to the user regularly sends a standard request to each node and measures the response times to find the fastest one. Usually, this node will be the closest in geographic proximity to a load balancer instance. Because of this, the user always receives the fastest response without the protocol ever needing to know their exact geographic location, which keeps users private and safe. Learn more about Ankr’s Privacy Policy and our dedication to the highest standard of Personal Information protection.
- Workload: As its name suggests, the load balancer manages traffic to ensure it is spread (balanced) evenly across all active nodes so that none are overwhelmed with requests. To help share the load, Ankr is beginning to expand the list of independent node providers serving traffic alongside Ankr-run nodes. A simplified sketch of this selection logic follows below.
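The sketch below pulls these three criteria together in simplified form. The sync-score threshold, the load penalty, and the field names are illustrative assumptions; Ankr’s actual weighting is internal.

```typescript
// Simplified sketch of the selection criteria described above: sync health,
// measured response time, and current load. The threshold, load penalty, and
// field names are illustrative assumptions, not Ankr's internal values.

interface NodeCandidate {
  url: string;            // RPC endpoint of the node
  syncScore: number;      // 1-10, where 10 means fully synced with the chain head
  probeLatencyMs: number; // latest standard-probe response time (Infinity if unresponsive)
  inFlight: number;       // requests the node is currently serving
}

function selectNode(candidates: NodeCandidate[]): NodeCandidate | null {
  // Exclude nodes that are critically behind or not answering blockheight probes.
  const eligible = candidates.filter(
    (n) => n.syncScore >= 8 && Number.isFinite(n.probeLatencyMs)
  );
  if (eligible.length === 0) return null;
  // Prefer the fastest responder, lightly penalizing nodes already under load.
  const cost = (n: NodeCandidate) => n.probeLatencyMs + n.inFlight * 5;
  return eligible.reduce((best, n) => (cost(n) < cost(best) ? n : best));
}
```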
This same process will occur for the other blockchain interactions involved in Nicole’s DEX trade below:
Verification: The DEX smart contract will then need to make several RPC requests to verify that Nicole and the other party have sufficient funds to complete the trade. Specifically, it would need to use the eth_getBalance method to check the balance of each user's Ethereum address, and the eth_call method to simulate the execution of the trade and ensure that it will succeed. Once the verification is complete, the smart contract would execute the trade internally without the need for additional RPC requests.
Confirmation & Settlement: The trade confirmation would be recorded on the blockchain automatically, without the need for additional RPC requests. Once the trade is confirmed, the DEX's smart contract would need to make an RPC request to transfer the ANKR tokens received from the trade to Nicole’s wallet. This would likely use the eth_sendTransaction method to create a transaction that transfers the ANKR from the smart contract to Nicole's address.
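For readers curious what these calls look like on the wire, here is a minimal sketch of JSON-RPC requests sent to the public endpoint from the example. The helper function, placeholder addresses, and calldata are illustrative assumptions; in practice the wallet and the DEX frontend construct these payloads.

```typescript
// Illustrative JSON-RPC payloads for the read-only calls named above, sent
// over HTTPS to Ankr's public Ethereum endpoint. Addresses and calldata are
// placeholders only.

const RPC_URL = "https://rpc.ankr.com/eth";

async function rpc(method: string, params: unknown[]): Promise<unknown> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

async function main() {
  // Read an address's ETH balance (hex-encoded wei) at the latest block.
  const balance = await rpc("eth_getBalance", [
    "0x0000000000000000000000000000000000000000", // placeholder address
    "latest",
  ]);

  // Simulate a contract call (e.g. the swap) without broadcasting a transaction.
  const simulated = await rpc("eth_call", [
    { to: "0x0000000000000000000000000000000000000000", data: "0x" }, // placeholders
    "latest",
  ]);

  console.log({ balance, simulated });
}

main().catch(console.error);
```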
Failover Ensures Requests Are Served Rapidly Regardless of Errors
In the example above, if a request reaches a node that is experiencing an outage or an error at any point, the load balancer automatically re-routes the request to another node.
The exact failover response depends on the error. Depending on which class the error falls under, a request will either be sent to another node, or it will be rejected if it contains incomplete information (for example, an unknown transaction hash or a block that doesn’t exist). In the first case, if a node isn’t performing well, the request is automatically sent to another node. Likewise, if a node answers incorrectly, the protocol moves to a second node, and if the second also answers incorrectly, it quickly moves to a third.
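Conceptually, the retry path can be sketched as a loop over candidate nodes, as below. This is an illustration of the failover idea rather than Ankr’s internal code; the validity check and error handling are assumptions.

```typescript
// Conceptual failover loop: try the best-ranked node first and fall through to
// the next candidates if a node errors out or returns an invalid response.
// Requests that fail because of incomplete information in the request itself
// (e.g. an unknown transaction hash) would not be retried.

async function serveWithFailover(
  nodes: string[],                        // candidate node URLs, best first
  body: string,                           // raw JSON-RPC request body
  isValid: (response: unknown) => boolean // per-method sanity check
): Promise<unknown> {
  let lastError: unknown = null;
  for (const url of nodes) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body,
      });
      const parsed = await res.json();
      if (res.ok && isValid(parsed)) return parsed; // healthy answer: done
      lastError = new Error(`invalid response from ${url}`);
    } catch (err) {
      lastError = err; // network error or timeout: move on to the next node
    }
  }
  throw lastError ?? new Error("all candidate nodes failed");
}
```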
Ankr Network’s monitoring system observes the performance of all nodes with very high regularity. If something goes wrong with a node, it will be disconnected from the load balancer, and it will no longer be regarded as a candidate to serve users’ requests until it has been verified to be working via monitoring.
Enhancing RPC Performance With Cloud Partnerships
In addition to our vast bare-metal infrastructure, Ankr has formed partnerships with cloud providers like Microsoft Azure and Tencent Cloud to expand the reach of our RPC services and decrease latency across the board. As load balancers can route traffic to both cloud and bare-metal servers, they can leverage the unique benefits of each with even more locations closer to users. Our cloud provider partnerships ensure we can scale our available node resources to provide greater speed and reliability while continuing to build out bare-metal infrastructure in independent data centers worldwide.
Increasing Speeds To Power Better Web3 Experiences
Combined with Ankr’s vast geographic distribution of the blockchain nodes themselves, we are reducing request processing times (the time a request takes to travel from the load balancer to the node and back), making the node-to-load-balancer coupling a powerful solution for increasing infrastructure effectiveness and reducing querying latency. All of this is designed to provide users with the same predictable, top-quality service regardless of their location.
From a performance perspective, we can improve the speed of our RPC Service infrastructure in three core areas:
- High-performance nodes: Ankr’s monitoring system checks node performance very frequently. If anything goes wrong with a node, it is disconnected from the load balancer; this isn’t instantaneous, but it happens quickly, and the load balancer will no longer regard that node as a candidate for serving user requests.
- Load balancing: The load balancing algorithm uses a scoring system to determine the best possible node to serve an RPC request at any given time. A load balancer selects the quickest node to answer; to do so, each load balancer instance regularly sends a standard request to each node and measures the response times to find the fastest one. As a result, each load balancer instance knows the quickest nodes to serve requests for each of the blockchains.
- Info caching: Ankr’s system caches node responses for common queries to provide the fastest possible answers. The information is stored according to response type, so it is ready for rapid delivery to clients, serving as an additional way to reduce request processing times. A minimal sketch of the idea follows below.
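As a minimal sketch of the caching idea (Ankr’s actual cache keys and TTLs are not public), identical requests arriving within a short window can be answered from memory instead of hitting a node:

```typescript
// Minimal response-caching sketch: key by method + params and keep entries for
// a short TTL so cached data stays close to the chain head. Illustrative only.

type CacheEntry = { value: unknown; expiresAt: number };
const cache = new Map<string, CacheEntry>();

async function cachedRpc(
  method: string,
  params: unknown[],
  fetchFromNode: () => Promise<unknown>, // fallback that queries an actual node
  ttlMs = 2000                           // short TTL, an illustrative assumption
): Promise<unknown> {
  const key = `${method}:${JSON.stringify(params)}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await fetchFromNode();                     // cache miss
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```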
As we’ve already covered load balancing here, we’ll dive into our global node infrastructure, info caching, and other aspects of performance that help us deliver the fastest speeds for our customers in posts coming soon – follow Ankr on Twitter to ensure you see them!
Join the Conversation on Ankr’s Channels
Twitter | Telegram Announcements | Telegram English Chat | Help Desk | Discord | YouTube | LinkedIn | Instagram | Ankr Staking