Proxy: Rate Limiting / Timeout #46
Reference: DGNum/uptime-kuma-api#46
Hello 👋
I am currently working on a CLI for uptime-kuma using this wrapper.
But I am having some timeout problems with it.
For example, I tried implementing the
kuma monitor ls
command to display hostname, monitor id, monitor name, and status. But I got timeout errors 9 times out of 10, because I need to perform 1 request to get all monitors, plus 1 request per monitor to get its heartbeat status. I have about 5 monitors in my Uptime Kuma dev instance, so that's 6 requests, and that already seems too many for the API.
Am I missing something? How can I solve this problem, because it's really frustrating?
By the way, I already tried with a higher timeout, for example 30 s, and I still have the issue.
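For reference, the listing logic described above might be sketched like this. It assumes the `uptime-kuma-api` wrapper with placeholder URL and credentials; the row-building step is kept pure so it can be checked without a live instance, and the `fetch_and_print` helper name is hypothetical:

```python
def monitor_rows(monitors, beats_by_id):
    """Build (id, name, host, status) rows from already-fetched data."""
    rows = []
    for m in monitors:
        beats = beats_by_id.get(m["id"], [])
        status = beats[-1]["status"] if beats else None
        rows.append((m["id"], m["name"], m.get("hostname") or m.get("url"), status))
    return rows


def fetch_and_print(url, username, password):
    """Requires a live Uptime Kuma instance; not called here."""
    from uptime_kuma_api import UptimeKumaApi

    api = UptimeKumaApi(url)
    api.login(username, password)
    try:
        monitors = api.get_monitors()  # 1 request for all monitors
        beats_by_id = {                # + 1 request per monitor
            m["id"]: api.get_monitor_beats(m["id"], 1) for m in monitors
        }
        for row in monitor_rows(monitors, beats_by_id):
            print(*row)
    finally:
        api.disconnect()
```

This is exactly the 1 + N request pattern the post describes: one `get_monitors` call, then one `get_monitor_beats` call per monitor.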
Timeouts should not occur. Rate limiting by uptime kuma is possible and can be seen in the uptime kuma logs. As far as I know, it is not the number of requests that matters, but the number of logins. You can prevent this by logging in only once and then making the requests.
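A minimal sketch of that advice: create one `UptimeKumaApi` session, call `login()` once, and route every request through it, instead of constructing and logging in a new client per request. The helper name is illustrative, not part of the wrapper's API:

```python
def fetch_all_beats(api, hours=1):
    """Fetch heartbeats for every monitor over one authenticated session.

    `api` is an already-logged-in UptimeKumaApi instance; since no new
    login happens here, only the initial login counts toward any
    login-based rate limit.
    """
    return {
        m["id"]: api.get_monitor_beats(m["id"], hours)
        for m in api.get_monitors()
    }
```

Hypothetical usage: `api = UptimeKumaApi(url)`, `api.login(user, pw)`, then `beats = fetch_all_beats(api)`, reusing the same `api` object for any further calls.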
Comparing earlier behavior with now, I have actually observed this.
But for example I have this :
And with the following code, the only remaining crash was on
api.get_monitor_beats(monitor["id"], 1)
: sometimes it takes 0.1 s to complete, and sometimes it times out. Why does it sometimes take 0.1 s and sometimes more than 10 s? The response time varies greatly, and I don't really understand why.
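One way to pin down the variance is to time each call individually. This is a hypothetical helper, not part of the wrapper:

```python
import time


def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```

For example, `beats, elapsed = timed(api.get_monitor_beats, monitor["id"], 1)`; logging `elapsed` for every call would show whether the slow requests cluster or are spread randomly.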
Nothing seems wrong in the
logs
. Perhaps the requests are not reaching the server, because in the server.log above, the following line is missing; it appears when everything is working properly:
I cannot reproduce this problem. With a local Uptime Kuma instance I made 1000
get_monitor_beats
requests with different monitor ids and time periods. All of them took less than 0.02 s. It would be interesting to know which requests are really sent and whether they arrive, and the same applies to the responses. Can you investigate this further?
I'll investigate further during the week, and get back to you.
This could be due to my network configuration, although I have my doubts. During your tests, did you connect directly to the IP address on the same local network? No DNS?
Okay, so I tried a few things.
I tried running a series of tests to add all kinds of monitors.
I ran these tests with 4 different configurations:
In conclusion, the limitation is surely not due to Uptime Kuma, but to my reverse proxy configuration. I propose to leave this issue open while I solve the problem on my side, to help people who might run into the same problem.
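One common culprit worth checking in a reverse proxy setup: the wrapper talks to Uptime Kuma over Socket.IO (WebSocket), so a proxy that does not forward the `Upgrade`/`Connection` headers can leave the client falling back to long polling or hanging until timeout. A sketch of the relevant nginx directives (upstream address and port are placeholders):

```nginx
location / {
    proxy_pass http://127.0.0.1:3001;
    # Required so Socket.IO / WebSocket upgrades pass through the proxy
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

The equivalent upgrade handling exists for other proxies (Caddy handles WebSocket upgrades automatically in v2, Traefik and HAProxy need it enabled), so the same check applies there.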
I'm seeing the same issue behind caddy. Will investigate as well.
Not entirely sure if this is the same issue, but I was getting random "UptimeKumaException: monitor does not exist" errors without a proxy in the mix (just calling directly to the port exposed by Docker). Thanks @lucasheld - your comment above about rate limiting being based on the number of logins was the hint I needed. I've changed the logic so the existing API session is re-used and the errors have gone away. (My use case is adding more complex alerting logic external to Kuma e.g. a group with an allowable percentage of monitors being down).
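The external check described above (alert only when more than an allowed fraction of a group is down) might be sketched like this. The status value `0` for "down" and the heartbeat field names are assumptions about the payload, and the function names are hypothetical:

```python
def group_down_fraction(beats_by_id):
    """Fraction of monitors whose latest heartbeat reports down (status 0)."""
    if not beats_by_id:
        return 0.0
    down = sum(
        1 for beats in beats_by_id.values()
        if beats and beats[-1]["status"] == 0
    )
    return down / len(beats_by_id)


def should_alert(beats_by_id, max_down_fraction=0.25):
    """Alert only when more than the allowed fraction of the group is down."""
    return group_down_fraction(beats_by_id) > max_down_fraction
```

Feeding it heartbeats fetched over a single reused session (as suggested earlier in the thread) keeps the login count at one, regardless of how often the check runs.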