Load balancer behavior for horizontally scaled services

I’d like to deploy two instances of a service that performs best when requests from the same client are handled by the same instance.

How do Render’s load balancers distribute traffic? Is it based strictly on the load the instances are currently under, or do inbound requests have an affinity for instances that have previously handled requests from that client?

It’s entirely random; we don’t (yet) have sticky sessions.
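
For anyone landing here later, here’s a minimal sketch (in Python, purely illustrative — the instance names and hashing scheme are hypothetical, not Render’s actual routing code) of the difference between the random distribution described above and the sticky-session behavior the question asks about:

```python
import hashlib
import random

# Hypothetical instance names, just for illustration.
INSTANCES = ["instance-a", "instance-b"]

def pick_random(client_id: str) -> str:
    """Random distribution: every request is routed independently,
    so the same client can land on either instance."""
    return random.choice(INSTANCES)

def pick_sticky(client_id: str) -> str:
    """Sticky sessions: hash a stable client identifier (e.g. a cookie
    or client IP) so the same client always maps to the same instance."""
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

if __name__ == "__main__":
    for _ in range(3):
        print("random:", pick_random("client-123"))  # may differ per request
    for _ in range(3):
        print("sticky:", pick_sticky("client-123"))  # always the same instance
```

Until sticky sessions are available, the practical takeaway is to design the service so any instance can serve any client, for example by keeping per-client state in a shared store rather than in instance memory.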

Thanks for clarifying, John.
