#compatibility with twilight http proxy
16 messages · Page 1 of 1 (latest)
@viral quarry @limber escarp @wheat frost
(proxy-discussions via dms are generally not very nice)
heh sorry
no worries, didn't think it'd turn into as much of a back and forth as it's looking like
but yeah, removing rate limit headers can actually do more harm than good for some libs when you're proxying
a few libs i tested in the past just straight up crash when they get a 429 with no rate limit info from discord at all. as they consider it an impossibility
by proxying and returning all headers (except some http2 headers, since similarly returning http2 headers on a separate, potentially http/1, connection can really confuse libs)
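A minimal sketch of that pass-through idea, assuming headers are plain (name, value) pairs; the strip list and function name here are illustrative, not taken from the actual proxy:

```rust
// Hypothetical sketch: forward everything Discord returned except
// connection-specific headers that make no sense on a (possibly
// HTTP/1.1) downstream connection. All x-ratelimit-* headers pass
// through untouched so downstream libs see real rate limit info.
fn forward_headers(upstream: &[(String, String)]) -> Vec<(String, String)> {
    // Assumed strip list of hop-by-hop headers; illustrative only.
    const STRIP: &[&str] = &["connection", "keep-alive", "transfer-encoding", "upgrade"];
    upstream
        .iter()
        .filter(|(name, _)| !STRIP.contains(&name.to_ascii_lowercase().as_str()))
        .cloned()
        .collect()
}

fn main() {
    let upstream = vec![
        ("x-ratelimit-remaining".to_string(), "0".to_string()),
        ("x-ratelimit-reset-after".to_string(), "2.5".to_string()),
        ("transfer-encoding".to_string(), "chunked".to_string()),
    ];
    let out = forward_headers(&upstream);
    assert!(out.iter().any(|(n, _)| n == "x-ratelimit-remaining"));
    assert!(!out.iter().any(|(n, _)| n == "transfer-encoding"));
}
```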
this also allows downstream clients to be in full control and decide whether they want to handle rate limits themselves or not. an example use case would be aborting and giving the user an error when you know the rate limit is extremely long (like the 24h one on roles, just to name one)
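That downstream-control use case could look roughly like this: a client behind the proxy reads the forwarded rate limit headers itself and aborts with an error instead of queueing when the wait would be very long (like the ~24h role bucket mentioned). The threshold and function name are illustrative assumptions:

```rust
// Assumed abort threshold; a real client would make this configurable.
const MAX_WAIT_SECS: f64 = 60.0;

/// Ok(()) means "go ahead and wait"; Err(secs) means "abort, the wait is too long".
/// Discord's x-ratelimit-reset-after value is seconds as a decimal string.
fn should_abort(reset_after_header: &str) -> Result<(), f64> {
    match reset_after_header.parse::<f64>() {
        Ok(secs) if secs > MAX_WAIT_SECS => Err(secs),
        _ => Ok(()),
    }
}

fn main() {
    assert!(should_abort("2.5").is_ok());
    assert!(should_abort("86400").is_err()); // the ~24h case: surface an error instead
}
```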
Doesn't http-proxy "wait" until you can do the request without hitting a 429? Or does it just reject?
it waits, however i do plan on adding a max timeout config to the upstream twilight rate limit lib (used by the proxy as well as the twilight http lib to interact with discord) to reject with an error if the timeout would be too long
the http proxy could catch that error then and assemble discord-like rate limit headers to return
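A sketch of that "assemble discord-like headers" idea: when the rate limiter rejects because the wait exceeds the configured max timeout, the proxy could answer with a synthetic 429 carrying headers libs already know how to parse. The header names mirror Discord's documented rate limit headers; the function itself is a hypothetical illustration, not proxy code:

```rust
// Hypothetical: build a 429 response the way Discord itself would,
// so downstream libs that hard-require rate limit info don't crash.
fn synthesize_429(retry_after_secs: f64) -> (u16, Vec<(String, String)>) {
    (
        429,
        vec![
            // Whole seconds, rounded up, as in the Retry-After header.
            ("retry-after".to_string(), format!("{}", retry_after_secs.ceil() as u64)),
            ("x-ratelimit-reset-after".to_string(), format!("{retry_after_secs}")),
            ("x-ratelimit-remaining".to_string(), "0".to_string()),
        ],
    )
}

fn main() {
    let (status, headers) = synthesize_429(86400.0);
    assert_eq!(status, 429);
    assert!(headers.iter().any(|(n, v)| n == "retry-after" && v == "86400"));
}
```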
but that's kinda why i'm looking at some popular libs like d.js to see what is and isn't compatible, to make sure it's actually working for all of them
originally the main goal was just the twilight http crate itself. but it turns out to be useful to people far outside that
So here's my thoughts:
1: we're switching to /rest for v14, which will eventually have the ability to share rate limits across clients somehow, so maybe that solves the use case you're trying to fill
2: regardless of whether it solves the use case, with the switch to /rest, as long as you emulate the request structure, you'll be able to swap out the entire module very easily by just doing client.rest = new whateverModule()
3: we should be able to add an option to /rest that disables built-in rate limiting very easily. my question here is: should it still queue requests, or should the proxy be handling queueing too?