Thanks a lot for your inputs. In the meantime I disabled zram and added a 2 GiB swapfile.
What I still don't understand: how can the system run into memory pressure while top shows ~2 GiB in buff/cache? I'd expect the kernel to reclaim some of that and be fine again - after all, 2 GiB is about 50% of the 4 GiB of physical RAM.
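For what it's worth, not all of buff/cache is freeable: tmpfs/shmem pages and dirty pages are counted there but cannot simply be dropped. A quick way to see how much of it is actually reclaimable, using nothing but /proc (no extra tools assumed):

```shell
#!/bin/sh
# Show which parts of "buff/cache" are actually reclaimable.
# MemAvailable is the kernel's own estimate of memory usable
# without swapping; Shmem (tmpfs) is counted as cache but is
# NOT freeable, and Dirty pages must be written out first.
grep -E '^(MemTotal|MemAvailable|Buffers|Cached|Shmem|SReclaimable|SUnreclaim|Dirty):' /proc/meminfo
```

If MemAvailable is much smaller than free + buff/cache suggests, that gap is exactly the cache the kernel cannot give back.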
Hi,
I have a small server (a Raspberry Pi 4, in fact), and for the past couple of weeks it has repeatedly hung after a few days until I reboot it (after months of uptime without any problem - but I changed a few things in the meantime, so load may now be higher than before).
At least since installing watchdog it now reboots automatically.
Here is some top output from just before a reboot:
load avg 20 18 12 (so: much higher than normal, e.g. 1.1 1.2 1.5 or so)
MiB Mem: 3835 total, 618 free, 1194 used, 2264 buff/cache
MiB Swap: 1024 total, 600 free, 430 used, 2640 avail
Processes with highest CPU usage:
kswapd0 with 80%
java (openhab) 41%
pg_dump 18%
Processes with highest Mem usage:
java (i.e. openhab) with 626678 virt, 22%
postgres 338824 virt
postgre 338800 virt
From my understanding there is enough memory available - swap usage shouldn't even be necessary, since buff/cache is about 50% of physical memory. Is this correct? But then: why does it go on to hang, and why this extreme load?
The only swap device available is zram; there is no swap partition and no swap file. The system runs on btrfs.
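One caveat if you do add a swapfile later: on btrfs the file must have copy-on-write disabled (and no compression), otherwise swapon refuses it. A sketch of one common way to set that up - the path and the 2G size are just examples, and recent btrfs-progs also offer `btrfs filesystem mkswapfile` to do all of this in one step:

```shell
#!/bin/sh
# Create a 2 GiB swapfile on btrfs. The file must be NOCOW,
# otherwise swapon fails with "Invalid argument".
truncate -s 0 /swapfile      # create the file empty first ...
chattr +C /swapfile          # ... so +C (NOCOW) can still be applied
fallocate -l 2G /swapfile    # allocate the space without holes
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
grep /swapfile /proc/swaps   # confirm it is active
```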
B.M. wrote:
Hi,
I have a small server (Raspberry Pi 4 in fact) and since a couple of weeks it repeatedly hangs after some days until I reboot it (after months of uptime without any problem - but I changed a few things in the meantime,
so maybe load is now higher than before).
At least after installing watchdog it reboots now automatically.
Here some top output just before reboot:
load avg 20 18 12 (so: much higher than normal, e.g. 1.1 1.2 1.5 or so)
MiB Mem: 3835 total, 618 free, 1194 used, 2264 buff/cache
MiB Swap: 1024 total, 600 free, 430 used, 2640 avail
Processes with highest CPU usage:
kswapd0 with 80%
Swapping once is fine. Swapping continuously is bad.
java (openhab) 41%
pg_dump 18%
pg_dump should not be running continuously; if it is running too
long, you need a better way of backing up PG. Replication to
another server is usually very efficient.
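If full replication is overkill for a Pi, even just demoting the dump's priority helps keep kswapd and openhab responsive while it runs. A minimal sketch - "mydb" and the output path are placeholders, not from the original post:

```shell
#!/bin/sh
# Run the dump at the lowest CPU and I/O priority so it does not
# compete with openhab/postgres for the Pi's limited resources.
# ionice -c 3 = "idle" I/O class (util-linux).
nice -n 19 ionice -c 3 \
    pg_dump -Fc mydb > /var/backups/mydb.dump
```

The custom format (-Fc) is also compressed, so less data hits the SD card than with a plain SQL dump.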
Processes with highest Mem usage:
java (i.e. openhab) with 626678 virt, 22%
postgres 338824 virt
postgre 338800 virt
From my understanding, there is enough memory available, even swap usage wouldn't be necessary, since buff/cache is about 50% of physical memory. Is this correct? But then: why is it going to hang afterwards, or why this extreme load?
The only swap device available is zram, no swap partition, no swap file. The system runs on btrfs
Ooch. Your system is likely thrashing between using RAM for the
applications (java, PG, pg_dump) and using RAM to swap pages out
from RAM to compressed RAM (zram does not buy you as much as you
think).
The problem, then, is that you don't have enough RAM and you don't
have enough I/O speed to solve the RAM issue temporarily, so it
becomes permanent.
Try disabling swap entirely.
If that doesn't work, you need a machine with more RAM, or you
need to be using less RAM.
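To confirm whether the box really is thrashing before the next hang, you can watch the kernel's cumulative swap-in/swap-out page counters; if they climb steadily between samples, this diagnosis fits. A minimal check using only /proc:

```shell
#!/bin/sh
# Sample the cumulative swap-in/swap-out page counters twice.
# A large delta over the interval means active, continuous
# swapping (thrashing), not just a one-off swap-out.
grep -E '^pswp(in|out) ' /proc/vmstat
sleep 5
grep -E '^pswp(in|out) ' /proc/vmstat
```

If procps is installed, `vmstat 5` shows the same thing live in its si/so columns.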
-dsr-