Expected behaviour of GNU parallel --memfree <size> and --memsuspend <size> when size much bigger than RAM
1 vote · 1 answer · 57 views
While experimenting with GNU parallel I found that the following cases all hang with decreasing CPU usage on a Fedora 41 VM with 8GB RAM. Is this expected behaviour?
```
parallel --halt now,fail=1 --timeout 2s --memfree 30G echo ::: a b c
parallel --halt now,fail=1 --timeout 2s --memsuspend 30G echo ::: a b c
parallel --timeout 2s --memsuspend 30G echo ::: a b c
parallel --timeout 2s --memfree 30G echo ::: a b c
```
I'd have expected at least the first or second command to actually time out and exit with error code 3. This [strace log](https://paste.centos.org/view/5d24131f) shows the process is basically spinning, continuously re-reading /proc/meminfo via an awk subprocess (the `memfreescript`). That is in line with the documented behaviour, even though polling every second seems pretty wasteful.
**Why does it allow --memfree and --memsuspend values much greater than physical RAM?**
Could someone also clarify this section of the manual for --memfree? Does it mean the youngest *running* job would be killed?
> If the jobs take up very different amount of RAM, GNU parallel will only start as many as there is memory
> for. If less than size bytes are free, no more jobs will be started. If less than 50% size bytes are free,
> the youngest job will be killed (as per --term-seq), and put back on the queue to be run later.
The `kill_youngster_if_not_enough_mem` code seems relevant, but I don't quite grasp how it fits into the full GNU parallel codebase.
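As I read the quoted paragraph, the decision logic could be sketched like this (my own simplified shell sketch, assuming "youngest" means the most recently started running job; this is not GNU parallel's actual code):

```shell
#!/bin/sh
# Simplified sketch of the --memfree decision loop (assumed semantics).
memfree_bytes() {
  # Approximation of the memfreescript's free-memory estimate.
  awk '/^(MemFree|Buffers|Cached|SwapCached):/ { sum += $2 }
       END { print sum * 1024 }' /proc/meminfo
}

limit=$((30 * 1024 * 1024 * 1024))   # --memfree 30G
free=$(memfree_bytes)

if [ "$free" -lt "$limit" ]; then
  echo "below size: start no new jobs"
fi
if [ "$free" -lt $((limit / 2)) ]; then
  echo "below 50% of size: kill youngest running job and requeue it"
fi
```

If size is far above physical RAM, both conditions are always true, which would explain the hang: any job that does start is immediately killed and requeued, forever.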
```
$ parallel --version
GNU parallel 20241222
$ uname -a
Linux host 6.11.4-301.fc41.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Oct 20 15:02:33 UTC 2024 x86_64 GNU/Linux
```
Asked by Somniar
(113 rep)
May 31, 2025, 05:47 PM
Last activity: May 31, 2025, 09:36 PM