Which process (kernel thread) is doing the actual compression for zswap?
1 vote · 1 answer · 997 views
I can actually imagine two locations:
1. In kernel space, on behalf of the process whose RAM is being swapped in/out
2. In [kswapd0]
However, digging into the kswapd source (mm/vmscan.c, init/main.c), I found that kswapd is single-threaded: it is started as a single kernel thread. (Except on NUMA systems, where every memory node has its own kswapd. But most ordinary PCs are not NUMA systems.)
However, now we have a problem. We can assume that the disk is far slower than the memory, which is why we do not need a multi-threaded kswapd to handle the disk I/O. But that is no longer the case if we also utilize the internal zswap layer: particularly with stronger compression algorithms (deflate), the CPU can, and likely will, be the bottleneck.
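As a side note, the compressor zswap uses can be checked, and on reasonably recent kernels changed at runtime, through sysfs; the sketch below assumes the chosen algorithm (lz4 is only an example) is built into the kernel or available as a module:

```
# show the compression algorithm zswap is currently configured with
cat /sys/module/zswap/parameters/compressor

# switch to a lighter algorithm (example: lz4) to trade ratio for CPU time
echo lz4 | sudo tee /sys/module/zswap/parameters/compressor
```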
But kswapd is single-threaded.
Is that true?
Is a multi-threaded kswapd planned? Is it really needed?
----
P.s. I found this thread on the Linux kernel mailing list. It is about a rejected patch suggestion which could have enabled a multi-threaded kswapd on non-NUMA systems. They talk about everything except this zswap problem, so maybe it is unrelated.
P.s.2. Context:
1. I have a highly RAM-overcommitted Linux system (processes are using far more RAM than is physically available).
2. The number of simultaneously running processes is far lower than the number of CPU cores.
3. I am using zswap intensively.
4. In this environment, it would be highly useful to use *all the available CPU cores for compressing/decompressing memory pages*. My current best estimate is that the page compression/decompression is done by [kswapd0], which is a single kernel thread (see the sketch after this list). I am investigating options to utilize all CPU cores for compression/decompression. Essentially, it would be a way to turn the left-over CPU capacity into compensation for the lack of physical memory.
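A rough way to test that estimate is to watch the CPU time consumed by [kswapd0] while the system is actively swapping; the sketch below assumes the sysstat package is installed for pidstat (plain top works as well):

```
# per-second CPU usage of kswapd0; if it sits near 100% of one core while
# swap traffic is high, the single compression thread is the bottleneck
pidstat -p "$(pgrep -x kswapd0)" 1

# swap-in/out page counters, to confirm swapping is actually happening
grep -E '^pswp(in|out)' /proc/vmstat
```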
Asked by peterh (10459 rep) on Apr 2, 2019, 04:18 PM
Last activity: Oct 17, 2020, 12:57 PM