Proxy: Rate Limiting / Timeout #46

Open
opened 2023-08-10 13:35:12 +02:00 by Zerka30 · 9 comments
Zerka30 commented 2023-08-10 13:35:12 +02:00 (Migrated from github.com)

Hello 👋

I am currently working on a CLI for uptime-kuma using this wrapper.

But I have some timeout problems using this.

For example, I tried to implement the kuma monitor ls command, which displays the hostname, monitor id, monitor name and status.

But I get timeout errors 9 times out of 10, because I need to perform 1 request to get all monitors + 1 request per monitor to get its heartbeat status.

I have about 5 monitors in my Uptime Kuma dev instance. So I've made 6 requests and that already seems too many for the API.

Am I missing something? How can I solve this problem, because it's really frustrating?

Zerka30 commented 2023-08-10 13:38:18 +02:00 (Migrated from github.com)

By the way, I already tried with a higher timeout, for example 30s, and I still have the issue.

lucasheld commented 2023-08-10 13:44:08 +02:00 (Migrated from github.com)

Timeouts should not occur. Rate limiting by uptime kuma is possible and can be seen in the uptime kuma logs. As far as I know, it is not the number of requests that matters, but the number of logins. You can prevent this by logging in only once and then making the requests.

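For illustration, a minimal sketch of that approach (assuming a reachable instance URL and placeholder credentials), logging in once and then reusing the same authenticated session for every request:

```
from uptime_kuma_api import UptimeKumaApi

# Placeholder URL and credentials; replace with your own instance and account.
api = UptimeKumaApi("http://localhost:3001")
api.login("admin", "secret")  # a single login, so the login rate limit is hit only once

# All further calls reuse the already authenticated session.
monitors = api.get_monitors()
for monitor in monitors:
    beats = api.get_monitor_beats(monitor["id"], 1)
    print(monitor["name"], beats[-1]["status"].name)

api.disconnect()
```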
Zerka30 commented 2023-08-10 14:31:37 +02:00 (Migrated from github.com)

Comparing my earlier attempts with now, I have indeed observed this.

But, for example, I have this code:

import os

from tabulate import tabulate
from uptime_kuma_api import UptimeKumaApi

import config  # local module holding UPTIME_KUMA_URL, UPTIME_KUMA_USERNAME, UPTIME_KUMA_PASSWORD


def ls_monitors(args):
    """
    List all monitors.

    Args:
        args: The arguments passed to the ls command.
    """
    # Connection to Uptime Kuma
    api = UptimeKumaApi(config.UPTIME_KUMA_URL, timeout=0.5)

    # Retry to login, until it works
    success = False
    while not success:
        try:
            api.login(config.UPTIME_KUMA_USERNAME, config.UPTIME_KUMA_PASSWORD)
            success = True
        except Exception:
            success = False

    monitors = []

    try:
        monitors = api.get_monitors()
    except Exception as e:
        api.disconnect()
        print("Error listing monitors:", str(e))
        os._exit(1)

    monitors_data_table = []
    for monitor in monitors:
        try:
            monitor_beats = api.get_monitor_beats(monitor["id"], 1)
            # print(monitor_beats)
        except Exception as e:
            api.disconnect()
            print("Error listing monitors:", str(e))
            os._exit(1)

        if monitor["active"] or args.all:
            data_row = [
                monitor["name"],
                monitor_beats[-1]["status"].name.capitalize(),
            ]

            if not args.short:
                if not args.no_id:
                    data_row.insert(0, monitor["id"])

                if not args.no_type:
                    data_row.append(monitor["type"].name)

                if not args.no_url:
                    # Show the monitor's url or hostname, depending on which field is set
                    data_row.append(monitor.get("url") or monitor.get("hostname"))

                if not args.no_description:
                    data_row.append(monitor["description"])

            monitors_data_table.append(data_row)
            # print(monitors_data_table)
    headers = ["NAME", "STATUS"]

    if not args.short:
        if not args.no_id:
            headers.insert(0, "ID")
        if not args.no_type:
            headers.append("TYPE")

        if not args.no_url:
            headers.append("HOSTNAME")

        if not args.no_description:
            headers.append("DESCRIPTION")

    print(
        tabulate(
            monitors_data_table,
            headers=headers,
            tablefmt="plain",
        )
    )

    api.disconnect()

And with the code above, the only remaining crash is on api.get_monitor_beats(monitor["id"], 1): sometimes it takes 0.1s to complete, and sometimes it times out. Why does it sometimes take 0.1s and sometimes more than 10s?

The response time varies greatly, and I don't really understand why.

Zerka30 commented 2023-08-10 14:38:36 +02:00 (Migrated from github.com)

Nothing seems wrong in the logs:

2023-08-10T14:37:19+02:00 [AUTH] INFO: Login by username + password. IP=X.X.X.X
2023-08-10T14:37:19+02:00 [RATE-LIMIT] INFO: remaining requests: 19
2023-08-10T14:37:19+02:00 [AUTH] INFO: Successfully logged in user KumaCompanion. IP=X.X.X.X
2023-08-10T14:37:21+02:00 [AUTH] INFO: Login by username + password. IP=X.X.X.X
2023-08-10T14:37:21+02:00 [RATE-LIMIT] INFO: remaining requests: 18.594
2023-08-10T14:37:21+02:00 [AUTH] INFO: Successfully logged in user KumaCompanion. IP=X.X.X.X
2023-08-10T14:37:21+02:00 [MONITOR] INFO: Get Monitor Beats: 12 User ID: 1
2023-08-10T14:37:21+02:00 [MONITOR] INFO: Get Monitor Beats: 14 User ID: 1

Perhaps the requests are not reaching the server, because lines like the following are missing from the server.log above; they appear when everything works properly:

2023-08-10T14:36:22+02:00 [MONITOR] INFO: Get Monitor Beats: 15 User ID: 1
2023-08-10T14:36:22+02:00 [MONITOR] INFO: Get Monitor Beats: 16 User ID: 1
2023-08-10T14:36:22+02:00 [MONITOR] INFO: Get Monitor Beats: 17 User ID: 1
lucasheld commented 2023-08-12 18:43:50 +02:00 (Migrated from github.com)

I cannot reproduce this problem. With a local uptime kuma instance I made 1000 get_monitor_beats requests with different monitor ids and time periods. All of them took less than 0.02 s.
It would be interesting to know which requests are really sent and whether they arrive. The same applies to the responses. Can you investigate this further?

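One way to narrow this down (a rough sketch, assuming the same uptime-kuma-api calls as in the snippet above and placeholder connection details) is to time each get_monitor_beats call on the client side and log which ones fail, to see whether the slow calls correlate with specific monitors or simply never reach the server:

```
import time

from uptime_kuma_api import UptimeKumaApi

# Placeholder URL and credentials; a generous timeout so slow responses are measured rather than cut off.
api = UptimeKumaApi("http://localhost:3001", timeout=30)
api.login("admin", "secret")

for monitor in api.get_monitors():
    start = time.monotonic()
    try:
        api.get_monitor_beats(monitor["id"], 1)
        print(f"monitor {monitor['id']}: ok after {time.monotonic() - start:.3f} s")
    except Exception as exc:
        print(f"monitor {monitor['id']}: failed after {time.monotonic() - start:.3f} s ({exc})")

api.disconnect()
```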
Zerka30 commented 2023-08-16 10:36:38 +02:00 (Migrated from github.com)

I'll investigate further during the week, and get back to you.

This could be due to my network configuration, although I have my doubts. During your tests, did you connect directly to the IP address on the same local network? No DNS?

Zerka30 commented 2023-08-18 12:01:06 +02:00 (Migrated from github.com)

Okay, so I tried a few things.

I tried running a series of tests to add all kinds of monitors.

I ran these tests with 4 different configurations:

  • A local instance of Uptime Kuma: I had 8 successes out of 8 and 6s of run time
  • An instance in the same local network: I had 8 successes out of 8 and 8s of run time
  • An instance on the Internet linked to a domain name: I had 8 successes out of 8 and 6s of run time
  • An instance on the Internet behind a reverse proxy (HAProxy): I had 8 successes out of 8 (with 8 additional attempts due to crashes) and 1min34s of run time.

In conclusion, the limitation is most likely not due to Uptime Kuma but to my reverse proxy configuration. I propose leaving this issue open while I solve the problem on my side, in order to help people who might encounter the same problem as me.

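For anyone hitting the same symptom behind a reverse proxy: the wrapper keeps a long-lived Socket.IO/WebSocket connection open, and short client/server/tunnel timeouts on the proxy can drop it mid-session. A hypothetical sketch of the HAProxy timeouts worth checking (backend name and address are placeholders, not taken from this thread):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Upgraded WebSocket connections fall under the tunnel timeout;
    # keep it long enough for an idle Socket.IO session.
    timeout tunnel  1h

backend uptime_kuma                      # placeholder backend name
    server kuma 192.0.2.10:3001 check    # placeholder address and port
```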
ScrumpyJack commented 2023-10-09 12:09:36 +02:00 (Migrated from github.com)

I'm seeing the same issue behind caddy. Will investigate as well.

emelarnz commented 2024-01-03 21:54:20 +01:00 (Migrated from github.com)

Not entirely sure if this is the same issue, but I was getting random "UptimeKumaException: monitor does not exist" errors without a proxy in the mix (just calling directly to the port exposed by Docker). Thanks @lucasheld - your comment above about rate limiting being based on the number of logins was the hint I needed. I've changed the logic so the existing API session is re-used and the errors have gone away. (My use case is adding more complex alerting logic external to Kuma e.g. a group with an allowable percentage of monitors being down).
