Hi,
I am attempting to run the API Gateway Kong as a Web Service with a custom Dockerfile. It appears to deploy, and requests to my service’s domain get a genuine Kong response (basically the ‘no routes found’ message), but any route that I actually define returns a 502 and an EOF response.
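For context, the Dockerfile is basically the standard DB-less Kong setup; the sketch below is just an illustration of that shape (the image tag, config path, and env vars are the usual Kong defaults, not necessarily my exact files):
# Sketch of a minimal DB-less Kong image; tag and paths are illustrative
FROM kong:2.7.1
# Bake the declarative config into the image
COPY kong.yml /usr/local/kong/declarative/kong.yml
# Run Kong without a database, reading services/routes from the YAML file
ENV KONG_DATABASE=off
ENV KONG_DECLARATIVE_CONFIG=/usr/local/kong/declarative/kong.yml
ENV KONG_PROXY_LISTEN=0.0.0.0:8000
ENV KONG_PROXY_ACCESS_LOG=/dev/stdout
ENV KONG_PROXY_ERROR_LOG=/dev/stderr
EXPOSE 8000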
Locally, with the same configuration, I can access my routes correctly, but not on Render. I notice that Render appears to be using Envoy, and I’m wondering if there is any known conflict between Kong & Envoy that may be stopping Kong from serving defined routes (e.g., my service.onrender.com/demo).
Another weird thing is I actually got 1 request to be routed correctly out of 100+ tries.
edit: Ok, I’m using http://httpbin.org/anything as my upstream service and if I use http instead of https, I get about 1 correct response out of about 20 attempts.
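For reference, my kong.yml is roughly along these lines (simplified; the service and route names are placeholders, and I may well have the paths wrong, which is part of what I’m trying to rule out):
_format_version: "2.1"
_transform: true
services:
  - name: httpbin
    url: http://httpbin.org/anything
    routes:
      - name: demo
        paths:
          - /demo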
edit 2: A pattern seems to emerge. Occasionally I get a response like this: read tcp 127.0.0.1:51532->127.0.0.1:8000: read: connection reset by peer. If I reissue the request soon after getting that, it is routed correctly.
edit 3: Not sure the reset-by-peer has anything to do with it. On failures I only see Envoy headers, like x-envoy-upstream-service-time: 729, but on successful requests I see both Envoy and Kong headers:
via: kong/2.7.1
x-envoy-upstream-service-time: 551
x-kong-proxy-latency: 334
x-kong-upstream-latency: 146
I can see the successful requests in the Kong logs in my web service dashboard too. I do not see any log entries for the unsuccessful requests.
Does anyone know if the Envoy router that Render is using blocks certain types of requests, or needs some configuration, if we are setting up our own API gateway?
Another thing I noticed is that the response headers are different when I reach my Kong instance vs. when I don’t.
On the times the request fails to reach Kong I see this header:
cf-cache-status: MISS
On the times the request succeeds in reaching Kong I see this header:
cf-cache-status: BYPASS
Is there a cache setting I can change so that it bypasses the cache 100% of the time?
I’m still struggling with this issue as well. It looks like there was a large update last week; maybe Render is now using Kong at the edge? Is it stripping all URIs from requests?
Hello @nathangilbert,
I was able to replicate the behaviour you were seeing and fixed it by changing two things:
- Kong is using more memory than the 512 MB in the Starter plan, causing it to return 502 errors most of the time. After upgrading to the Starter Plus plan, I was able to consistently get 404 errors.
- The way the Kong YAML is set up, the paths you define are the paths you want to direct to specific services. I used the following YAML; it worked for me and I was able to consistently get 200s:
_format_version: "2.1"
_transform: true
services:
  - name: user-service
    url: http://test-flask-pzxp:5000
    routes:
      - name: user-routes
        paths:
          - /
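With paths set to /, every request path matches this route and is proxied to user-service, so a quick check along these lines (the hostname is a placeholder for your own Kong service) should come back from the Flask upstream:
curl -i https://your-kong-service.onrender.com/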
Can you try both of those things and see if it works?
Thanks! After bumping up to “Starter Plus”, my Kong logs started looking a lot better and, as you said, the experience became more deterministic. I think I do have a problem with my Kong configs as well, which I can work out now that it’s behaving more logically. Thanks for your time – this helped me break through this issue.