Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
2
answers
374
views
How to use vm.overcommit_memory=1 without the system hanging?
I am using `vm.overcommit_memory=1` on my Linux system, which has been helpful in allowing me to start multiple applications that otherwise wouldn't even start with the default value of 0. However, sometimes my system just freezes, and it seems the OOM killer is unable to do anything to prevent this situation. I have some swap, which also got consumed. I've also noticed some instances where, with the system unresponsive, even the magic SysRq keys don't work. Sorry, no logs are available at this time to include here.
In general, is there any configuration or tunable that can get the OOM killer to kill the highest memory-consuming process(es) immediately, without ever letting the system go unresponsive, when using `vm.overcommit_memory=1`?
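For reference, a minimal sketch of the sysctl knobs usually discussed around this problem; the values are illustrative assumptions, not recommendations:
```
# /etc/sysctl.d/90-oom.conf -- illustrative values only
vm.overcommit_memory = 1          # keep overcommit enabled, as in the question

# Make the OOM killer target the task that triggered the failing allocation
# instead of scanning for the "best" victim (trade-off: may kill the wrong job)
vm.oom_kill_allocating_task = 1

# Keep a reserve of free pages so the kernel can still make progress
# (and SysRq handling stays usable) under memory pressure
vm.min_free_kbytes = 65536
```
Apply with `sysctl --system`. A user-space OOM daemon such as `earlyoom` or `systemd-oomd`, which kills processes before the kernel starts thrashing, is another commonly suggested approach.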
eagle007
(3 rep)
Nov 27, 2024, 10:10 PM
• Last activity: Jul 10, 2025, 10:16 AM
0
votes
1
answers
399
views
NFS v4.2 tuning
https://www.youtube.com/watch?v=JXASmxGrHvY
at 5:30 the statement is made
> if you get NFS tuned just right it is incredibly fast for **ultra small file transfers**...
at 6:05
> I've heard of 4.0GB/sec using a sequential read... but you have to have all the infrastructure tuned just right.
What, where, and how do I tune NFS v4.2 in **RHEL-8.10 or later** to achieve what is claimed above? Is this claim still true, given that this YouTube video seems to have been made 3 years ago?
Is there any good *NFS tuning* documentation as it pertains to NFS v4.2 in RHEL 8/9 or equivalent today?
*v4.2 is the latest version of NFS, correct? Is there any proposed newer version of NFS on the horizon?*
**Are there any better settings than the defaults in `/etc/nfs.conf` and `/etc/nfsmount.conf`?**
**If I can place a bounty on this I will --> what is the max transfer speed in GB/sec that should be achievable, in RHEL-8.10 or later, over 100 Gb/s InfiniBand, on NFS v4.2 (assuming RDMA?) with all the "tuned" options??** The only *tuning* I am aware of is putting `rdma` into effect over InfiniBand; if someone else knows better/more, let me know.
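For reference, a hedged sketch of the RDMA piece mentioned above, assuming the InfiniBand/RDMA stack (`rdma-core`) is already working on both ends; the server name and export path are placeholders, and the `nfs.conf` option names should be checked against `man 5 nfs.conf` for your nfs-utils version:
```
# Server side: enable the NFS-over-RDMA listener in /etc/nfs.conf
#   [nfsd]
#   rdma=y
#   rdma-port=20049
systemctl restart nfs-server

# Client side: mount NFS v4.2 over RDMA (20049 is the conventional NFS/RDMA port)
mount -t nfs -o vers=4.2,proto=rdma,port=20049 server:/export /mnt/export
```
Beyond that, the `nfs(5)`, `nfs.conf(5)` and `nfsmount.conf(5)` man pages plus the kernel's `Documentation/admin-guide/nfs/` tree are the closest thing to current upstream tuning documentation.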
ron
(8647 rep)
Dec 17, 2024, 05:25 PM
• Last activity: Dec 17, 2024, 08:38 PM
16
votes
2
answers
19346
views
How to change the length of time-slices used by the Linux CPU scheduler?
Is it possible to increase the length of time-slices, which the Linux CPU scheduler allows a process to run for? How could I do this?
## Background knowledge
This question asks how to reduce how frequently the kernel will force a switch between different processes running on the same CPU. This is the kernel feature described as "pre-emptive multi-tasking". This feature is generally good, because it stops an individual process hogging the CPU and making the system completely non-responsive. However switching between processes [has a cost](https://stackoverflow.com/questions/21887797/what-is-the-overhead-of-a-context-switch/37428530#37428530) , therefore there is a tradeoff.
If you have one process which uses all the CPU time it can get, and another process which interacts with the user, then switching more frequently can reduce delayed responses.
If you have two processes which use all the CPU time they can get, then switching less frequently can allow them to get more work done in the same time.
## Motivation
I am posting this based on my initial reaction to the question https://unix.stackexchange.com/questions/466374/how-to-change-linux-context-switch-frequency/
I do not personally want to change the timeslice. However I vaguely remember this being a thing, with the `CONFIG_HZ` build-time option. So I want to know what the current situation is. Is the CPU scheduler time-slice still based on `CONFIG_HZ`?
Also, in practice build-time tuning is very limiting. For Linux distributions, it is much more practical if they can ship a single kernel per CPU architecture and allow configuring it at runtime, or at least at boot time. If tuning the time-slice is still relevant, is there a new method which does not lock it down at build time?
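For context, a hedged sketch of where these knobs live on current kernels; with CFS the scheduling period is derived from runtime tunables rather than directly from `CONFIG_HZ`, and their location has moved over time (the debugfs paths assume `CONFIG_SCHED_DEBUG` and root access):
```
# Build-time tick frequency of the running kernel
grep CONFIG_HZ= /boot/config-$(uname -r)

# CFS tunables: exposed as sysctls up to roughly kernel 5.12 ...
sysctl kernel.sched_latency_ns kernel.sched_min_granularity_ns 2>/dev/null

# ... and moved to debugfs on newer kernels
cat /sys/kernel/debug/sched/latency_ns \
    /sys/kernel/debug/sched/min_granularity_ns 2>/dev/null
```
On kernels 6.6+ with the EEVDF scheduler these files are replaced by `/sys/kernel/debug/sched/base_slice_ns`.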
sourcejedi
(53222 rep)
Sep 4, 2018, 09:19 AM
• Last activity: Oct 26, 2024, 02:40 PM
1
votes
0
answers
80
views
sysctl parameters seem not to be working on a VM
I am trying to tune my server's performance. To do so, I wanted to test some sysctl parameters such as `net.core.somaxconn`, `net.ipv4.tcp_max_syn_backlog` and `net.core.netdev_max_backlog`.
The following was my setup:
- A 4 vCPU server running an nginx Docker container as a test web server, serving a static page.
- One 16 vCPU, one 8 vCPU and one 4 vCPU machine as my load generators against the nginx server, using the `locust` framework.
With the default sysctl parameters on the web-server VM, I could get about 15k RPS. After changing `net.core.somaxconn`, `net.ipv4.tcp_max_syn_backlog` and `net.core.netdev_max_backlog` to either higher or lower values, the results didn't change. I even set them to 1 and still got 15k RPS. Then I ran `netstat -s | grep LISTEN` to see if some packets were dropped because of full SYN or accept queues, and I expected something like `123456 SYNs to LISTEN sockets dropped` in the output. But it didn't output anything.
I tested some other sysctl parameters. For instance, I set `net.ipv4.tcp_keepalive_time`, `net.ipv4.tcp_keepalive_probes` and `net.ipv4.tcp_keepalive_intvl` all to 0. But still, after running `netstat -a | grep -i wait`, I could see a lot of connections in the `TIME_WAIT` state. Is there something wrong? Can you please help me with that?
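For reference, a hedged sketch of how such backlog changes are usually applied and verified; the values are illustrative, and note that `somaxconn` only matters if the listener also passes a comparable `backlog` to `listen()`:
```
# Apply the queue-related settings (illustrative values)
sysctl -w net.core.somaxconn=4096 \
          net.ipv4.tcp_max_syn_backlog=4096 \
          net.core.netdev_max_backlog=5000

# Effective accept-queue limit per listener:
# for LISTEN sockets, Send-Q is the configured backlog, Recv-Q the current queue depth
ss -ltn

# Listen-queue overflows/drops straight from the SNMP counters
nstat -az TcpExtListenOverflows TcpExtListenDrops
```
Also note that the `tcp_keepalive_*` settings only affect idle established connections; `TIME_WAIT` is a normal post-close state and is not controlled by them.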
Ssaf
(11 rep)
Nov 10, 2023, 03:10 PM
0
votes
1
answers
991
views
How to minimize filesystem overhead
I have an application that uses a lot of space as essentially cache data. The more cache available the better the application performs. We're talking hundreds to thousands of TB. The application can regenerate the data on-the-fly if blocks go bad, so my primary goal is to maximize the size available on my filesystem for cache data, and intensely minimize the filesystem overhead.
I'm willing to sacrifice all reliability and flexibility as well as "general-purposeness" requirements. On top of that, I know exactly how many files of cache data I will have on any given volume because the application writes cache files with a fixed size. I'd like to be able to overwrite a file with a new one if a block goes bad occasionally, so it might be good to have a *few* spare inodes lying around, it is also feasible to reformat the entire volume if needed. The files are all stored 1 directory deep in the filesystem. Directory names can be capped at a single letter, for example, and I don't need the directory either (all files could just as well be stored top-level on the root of the volume). Once the cache data is written, the files will only ever be read and the volume can be mounted read-only. The cache is valid for a *long* time (years).
So, given that I know the exact, fixed, file size and have no reliability, checksum, journaling, etc. concerns, what filesystem should I use and how should I go about tuning it to eliminate as much overhead as possible?
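As a hedged starting point rather than a definitive answer, an ext4 format tuned for a known, fixed population of large files might look like the sketch below; the inode count, file-count assumption and device name are placeholders:
```
# Assumptions: ~10,000 fixed-size cache files per volume, read-mostly,
# no journaling/reliability requirements; /dev/sdX1 is a placeholder device.
#   -m 0              no reserved blocks for root
#   -N 16384          just enough inodes (files + directories + slack)
#   -O ^has_journal   drop the journal; ^resize_inode drops the online-resize reserve
#   -T largefile4     mke2fs usage profile: roughly one inode per 4 MiB
mkfs.ext4 -m 0 -N 16384 -O ^has_journal,^resize_inode -T largefile4 /dev/sdX1

# Mount read-only once populated, skipping atime updates entirely
mount -o ro,noatime /dev/sdX1 /cache
```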
David Cowden
(138 rep)
Nov 21, 2021, 05:51 PM
• Last activity: Nov 21, 2021, 06:54 PM
2
votes
1
answers
474
views
Where to get offline documentation/descriptions of individual sysctl kernel tunable parameters?
The `NOTES` section of `$ man 5 sysctl.conf` states: `The description of individual parameters can be found in the kernel documentation.`
But is there a way for me to find this kernel documentation offline? Is it a package that I'd need to install?
For example, I came across the `kernel.panic` parameter, which on my system is set to 0 by default. Looking it up online [here](https://www.kernel.org/doc/Documentation/sysctl/kernel.txt), it's described as:
panic:
The value in this file represents the number of seconds the kernel
waits before rebooting on a panic. When you use the software watchdog,
the recommended setting is 60.
But there is no reasonable way I would have guessed that the `0` there referred to 0 seconds until auto-reboot without searching it up online.
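For reference, a hedged sketch of the usual ways to get that documentation offline; package names vary by distribution:
```
# RHEL / CentOS / Fedora: the kernel-doc package ships the plain-text kernel docs
yum install kernel-doc
less /usr/share/doc/kernel-doc-*/Documentation/sysctl/kernel.txt

# Debian / Ubuntu: the linux-doc package provides the same tree
apt install linux-doc

# Or keep a shallow checkout of the kernel source and read the sysctl docs directly
git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
less linux/Documentation/admin-guide/sysctl/kernel.rst
```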
Jethro Cao
(135 rep)
Mar 28, 2021, 12:42 AM
• Last activity: Mar 29, 2021, 04:11 PM
1
votes
1
answers
7286
views
What's the best dirty_background_ratio and dirty_ratio for my usage?
So I'm playing with `dirty_background_ratio` and `dirty_ratio`, hoping to find the right parameters with your professional help.
For now I'm using:
vm.dirty_background_ratio = 20
vm.dirty_ratio = 60
The main usage is torrenting: files are downloaded through a torrent client and then seeded, possibly many downloads at once, which is why I want to use RAM caching and am trying to work out the correct values.
Maybe you could suggest the right values?
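For reference, a hedged sketch of the absolute-size variants of these knobs, which are often easier to reason about than percentages on machines with a lot of RAM; the byte values below are illustrative, not recommendations:
```
# /etc/sysctl.d/90-writeback.conf -- illustrative values only
# Setting the *_bytes forms automatically overrides the corresponding *_ratio forms.
vm.dirty_background_bytes = 268435456   # start background writeback at 256 MiB dirty
vm.dirty_bytes = 1073741824             # block writers once 1 GiB is dirty
```
Apply with `sysctl --system`; `grep -E 'Dirty|Writeback' /proc/meminfo` shows how much dirty data actually accumulates under the torrent workload.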
Viktor
(217 rep)
Apr 13, 2020, 04:24 AM
• Last activity: Apr 13, 2020, 01:05 PM
1
votes
1
answers
882
views
How to find max limit of /proc/sys/fs/file-max
I am running Jenkins with lots of jobs which require lots of open files, so I have increased the `file-max` limit to 3 million. It still hits 3 million sometimes, so I am wondering how far I can go. Can I set `/proc/sys/fs/file-max` to 10 million?
How do I know what the hard limit of `file-max` is?
I am running CentOS 7.7 (3.10.x kernel).
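For reference, a hedged sketch of how to inspect current usage against the limit; on a 64-bit kernel `fs.file-max` can be raised well beyond 10 million, the practical bound being memory rather than a fixed cap:
```
# allocated handles, allocated-but-unused handles, and the current maximum
cat /proc/sys/fs/file-nr

# raise the limit persistently (illustrative value)
echo 'fs.file-max = 10000000' > /etc/sysctl.d/90-file-max.conf
sysctl --system

# per-process and per-user limits (ulimit / limits.conf) still apply on top of this
ulimit -n
```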
Satish
(1672 rep)
Nov 11, 2019, 02:28 AM
• Last activity: Nov 18, 2019, 10:25 AM
3
votes
0
answers
898
views
tune2fs for f2fs
Is there a tool for editing (and viewing) the file system features for f2fs just like tune2fs for ext234?
For instance you can get summarized info about the corresponding extX partition:
# tune2fs -l /dev/path/to/disk
To enable checksums on an existing ext4 filesystem:
# tune2fs -O metadata_csum /dev/path/to/disk
user252842
Jul 3, 2019, 09:43 AM
• Last activity: Jul 3, 2019, 11:00 AM
3
votes
0
answers
2733
views
Tuning NFS: Identifying source of getattr requests
We run some Linux NFS clients against a NetApp NFS storage system. Currently, over 50% of the requests hitting the NetApp are getattr requests, using up precious CPU cycles.
Using the netapp-top.pl script we identified the hosts causing a high number of NFS operations.
On the Linux NFS clients we used nmon to identify the type of NFS operations (using the 'N' option).
Using the nocto and actimeo mount options on dedicated per-client shares, we were already able to reduce the number of NFS getattr requests.
But we have one Linux NFS client where we are not able to identify the mount causing the getattr requests.
How can we identify the mountpoint and possibly the process causing those getattr NFS operations?
Is there a way to identify the getattr requests running against the NetApp on the NetApp itself - like a top list of files being accessed using getattr?
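For reference, a hedged sketch of the client-side tools usually used to break NFS traffic down per mount (both come with nfs-utils; `/mnt/the_share` is a placeholder):
```
# Aggregate per-operation counters for this client, GETATTR included
nfsstat -c

# Per-mount breakdown
mountstats /mnt/the_share      # detailed per-operation counts for one mount point
nfsiostat 5                    # ops/s and latency per mounted share, every 5 seconds

# Raw data behind both tools: pairs each GETATTR line with its mount
grep -E 'device|GETATTR' /proc/self/mountstats
```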
Thanks
Rainer
Rainer Stumbaum
(31 rep)
Dec 5, 2014, 09:33 AM
• Last activity: Nov 15, 2018, 10:55 PM
1
votes
0
answers
85
views
Is there any way to increase the spawn rate of MinSpareServers in Apache prefork?
My server faces huge traffic only for a short duration every day, and I don't want to waste resources by running a high number of MaxSpareServers throughout the day.
So, I would like to know: is there any way to increase the spawn rate exponentially in steps starting from 100, i.e. 100, 200, 400, 800, ...?
My configuration: *(screenshot attached in the original post)*
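For context, a hedged sketch of the relevant prefork directives; stock Apache already ramps its spawn rate exponentially on its own (1, 2, 4, ... up to 32 children per second) whenever MinSpareServers is not met, so the practical lever is the spare-server window itself. The values below are placeholders:
```
# mpm_prefork configuration -- placeholder values, tune to the traffic pattern
<IfModule mpm_prefork_module>
    StartServers             10
    MinSpareServers          25      # raise this ahead of the daily burst
    MaxSpareServers         100
    ServerLimit             400
    MaxRequestWorkers       400
    MaxConnectionsPerChild    0
</IfModule>
```
If the burst window is predictable, reloading a higher-MinSpareServers configuration from cron shortly before it (and reverting afterwards) avoids paying for spare servers all day.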
Siva
(9242 rep)
Apr 11, 2018, 12:04 PM
• Last activity: Apr 11, 2018, 12:18 PM
4
votes
1
answers
2277
views
CentOS 5 - hdparm - how to set DMA mode
I'm running CentOS 5 with a PATA hard drive. I've used *hdparm* to tune the hard disk for better performance, but there are 2 settings that don't work:
hdparm -M 254 /dev/hda
gives the error
HDIO_DRIVE_CMD:ACOUSTIC failed: Input/output error
and
hdparm -d1 /dev/hda
gives the error
HDIO_SET_DMA failed: Operation not permitted
What do I need to check to set these? It's already old hardware so anything I can do to squeeze out more performance would be helpful.
Thanks.
By request, here is the output of hdparm -iI /dev/hda and cat /proc/ide/hda/settings
DMA and acoustic settings do exist but I just can't set them successfully. Here is the output:
[root@hptest ~]# hdparm -iI /dev/hda
/dev/hda:
Model=ST3500320AS, FwRev=SD15, SerialNo=9QM6WHGY
Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
BuffType=unknown, BuffSize=0kB, MaxMultSect=16, MultSect=off
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
AdvancedPM=no WriteCache=enabled
Drive conforms to: unknown: ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
* signifies the current active mode
ATA device, with non-removable media
Model Number: ST3500320AS
Serial Number: 9QM6WHGY
Firmware Revision: SD15
Transport: Serial
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 65535
heads 16 1
sectors/track 63 63
--
CHS current addressable sectors: 4128705
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 976773168
device size with M = 1024*1024: 476940 MBytes
device size with M = 1000*1000: 500107 MBytes (500 GB)
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Recommended acoustic management value: 254, current value: 0
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* 64-bit World wide name
* Write-Read-Verify feature set
* WRITE_UNCORRECTABLE command
* {READ,WRITE}_DMA_EXT_GPL commands
* SATA-I signaling speed (1.5Gb/s)
* SATA-II signaling speed (3.0Gb/s)
* Native Command Queueing (NCQ)
* Phy event counters
* Software settings preservation
Security:
Master password revision code = 65534
supported
not enabled
not locked
not frozen
not expired: security count
supported: enhanced erase
102min for SECURITY ERASE UNIT. 102min for ENHANCED SECURITY ERASE UNIT.
Checksum: correct
[root@hptest ~]# cat /proc/ide/hda/settings
name value min max mode
---- ----- --- --- ----
acoustic 0 0 254 rw
address 1 0 2 rw
bios_cyl 60801 0 65535 rw
bios_head 255 0 255 rw
bios_sect 63 0 63 rw
bswap 0 0 1 r
current_speed 0 0 70 rw
failures 0 0 65535 rw
init_speed 0 0 70 rw
io_32bit 0 0 3 rw
keepsettings 0 0 1 rw
lun 0 0 7 rw
max_failures 1 0 65535 rw
multcount 0 0 16 rw
nice1 1 0 1 rw
nowerr 0 0 1 rw
number 0 0 3 rw
pio_mode write-only 0 255 w
unmaskirq 0 0 1 rw
using_dma 0 0 1 rw
wcache 1 0 1 rw
[root@hptest ~]#
Tensigh
(341 rep)
Apr 24, 2014, 04:48 AM
• Last activity: Nov 10, 2017, 05:07 AM
2
votes
1
answers
1233
views
CentOS 7 - When THP is disabled is it safe to ignore defrag setting?
I need to disable THP (Transparent Huge Pages). Many tutorials on the web advise setting the value `never` (`0` for the last one) for the options below:
- /sys/kernel/mm/transparent_hugepage/enabled
- /sys/kernel/mm/transparent_hugepage/defrag
- /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
My question is: since THP is going to be disabled, is it important to disable the defrag options as well? Can I consider the last two options unimportant in this case? I couldn't find any docs confirming this.
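For reference, a hedged sketch of the usual ways to apply this persistently on CentOS 7 (paths as in the question):
```
# One-off, at runtime
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0     > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

# Persistent alternative: disable THP entirely on the kernel command line,
# e.g. in /etc/default/grub, then regenerate the grub config:
#   GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
grub2-mkconfig -o /boot/grub2/grub.cfg
```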
waste
(151 rep)
Jun 13, 2017, 05:28 AM
• Last activity: Jun 21, 2017, 12:00 PM
2
votes
1
answers
685
views
slow/frozen ext4 // task sync blocked on big mostly write only server
We have several 90 TB servers (Areca RAID-6, partitioned into 10 ext4 partitions).
The application is basically a ring buffer; continuously writing data and deleting old data. As such, each partition is always 100% full (15GB per partition held as headroom).
Now we're seeing the writing application segfault because (I suppose) it cannot write to disk fast enough.
The app segfaults happen about the same time as this error (of which there are several):
Nov 26 11:33:10 localhost kernel: INFO: task sync:30312 blocked for more than 120 seconds.
Nov 26 11:33:10 localhost kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 26 11:33:10 localhost kernel: sync D f63a4ec0 0 30312 6161 0x00000080
Nov 26 11:33:10 localhost kernel: f571fe9c 00000086 d9b29930 f63a4ec0 d9b29930 c18d5ec0 c18d5ec0 cca87419
Nov 26 11:33:10 localhost kernel: 0005c511 c18d5ec0 c18d5ec0 cca7d888 0005c511 c18d5ec0 f63b2ec0 e3abe130
Nov 26 11:33:10 localhost kernel: c107d121 00000001 00000046 00000000 d9b29d52 d9b29930 f3544d00 f3c62800
Nov 26 11:33:10 localhost kernel: Call Trace:
Nov 26 11:33:10 localhost kernel: [] ? try_to_wake_up+0x1d1/0x230
Nov 26 11:33:10 localhost kernel: [] ? wake_up_process+0x1f/0x40
Nov 26 11:33:10 localhost kernel: [] ? wake_up_worker+0x1e/0x30
Nov 26 11:33:10 localhost kernel: [] ? insert_work+0x58/0x90
Nov 26 11:33:10 localhost kernel: [] schedule+0x23/0x60
Nov 26 11:33:10 localhost kernel: [] schedule_timeout+0x155/0x1d0
Nov 26 11:33:10 localhost kernel: [] ? __switch_to+0xee/0x370
Nov 26 11:33:10 localhost kernel: [] ? __queue_delayed_work+0x91/0x150
Nov 26 11:33:10 localhost kernel: [] wait_for_completion+0x71/0xc0
Nov 26 11:33:10 localhost kernel: [] ? try_to_wake_up+0x230/0x230
Nov 26 11:33:10 localhost kernel: [] sync_inodes_sb+0x7c/0xb0
Nov 26 11:33:10 localhost kernel: [] sync_inodes_one_sb+0x15/0x20
Nov 26 11:33:10 localhost kernel: [] iterate_supers+0xa8/0xb0
Nov 26 11:33:10 localhost kernel: [] ? fdatawrite_one_bdev+0x20/0x20
Nov 26 11:33:10 localhost kernel: [] sys_sync+0x31/0x80
Nov 26 11:33:10 localhost kernel: [] sysenter_do_call+0x12/0x12
fstab mounts the partitions as `ext4 noauto,rw,users,exec 0 0`.
The system is 32-bit CentOS 6.6 with a 3.10.80-1 kernel.
**Question:** Is this some kind of disk corruption problem or is there something I need to tune in Linux or the filesystem to fix this? The application needs to run 24x7, forever...
Danny
(653 rep)
Nov 28, 2016, 06:09 PM
• Last activity: Nov 29, 2016, 01:24 AM
2
votes
1
answers
2307
views
How can I create my own custom progress bar in Conky?
I started using Conky a few days ago, and I want to create my own configuration. I have added some colors and cool ASCII art and learned the basics.
However, I don't like the default progress bars that come with Conky, and I would like to create something like a string of 50 '#' signs or 'rectangles' (the 219th character in the extended ASCII table), with the first 20 green, the following 20 yellow and the last 10 red.
I'd like to implement it as a `fs_bar`: green when there's plenty of free space, yellow when it's half full, and red when I should free some files, but showing all three colours in the last two cases. I'm attaching an image with a pretty similar result.
I am running AwesomeWM on Arch Linux, and my Conky version is 1.10.5.
xvlaze
(289 rep)
Nov 26, 2016, 06:35 PM
• Last activity: Nov 27, 2016, 05:55 PM
4
votes
0
answers
500
views
systemd slices even with low CPUShare heavily affect system responsiveness
I've created a custom slice (so now I have 4 slices: user, system, machine, important) and assigned a huge number to `CPUShares`. The system felt really unresponsive under high load in this slice, which seems logical considering the enormous `CPUShares` value.
However, later I set `CPUShares` to a really tiny value (64, compared to the default 4096 for `user.slice` and `system.slice`). And to be honest the system also felt quite unresponsive; maybe not as much, but it was still really annoying. So as a result, the CPU load from `important.slice` was tiny compared to the other slices (around 11%), but everything felt terribly unresponsive.
What I mean by unresponsive is that the same app running in `user.slice`, despite using much more CPU, affected other processes in `user.slice` significantly less than the same process running in `important.slice`. For example:
Running the Blender renderer in `user.slice` on all 8 cores under 100% load didn't make the system feel unresponsive at all. The _user experience_ remained really good and the PC remained capable of performing other tasks.
Running the Blender renderer in `important.slice` with low `CPUShares`, while utilizing only 11% CPU, made the whole system run painfully slow; even the tty was lagging.
Of course `CPUAccounting` is enabled everywhere.
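For reference, a hedged sketch of how such a slice is typically defined and used; `important.slice` and the Blender workload come from the question, the transient unit name and scene file are placeholders, and on cgroup-v2 systems `CPUWeight=` replaces the legacy `CPUShares=`:
```
# /etc/systemd/system/important.slice  (legacy CPUShares= shown, as in the question)
#   [Slice]
#   CPUAccounting=yes
#   CPUShares=64

# Reload units and launch a process inside the slice
systemctl daemon-reload
systemd-run --slice=important.slice --unit=blender-render \
    blender -b scene.blend -a
```
Note that CPU shares/weights only arbitrate CPU time; they do not limit I/O or memory pressure, which is often what actually makes a desktop feel unresponsive (`IOWeight=` and `MemoryMax=` are the corresponding knobs).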
Lapsio
(1363 rep)
Aug 18, 2016, 05:13 PM
• Last activity: Aug 27, 2016, 12:08 AM
1
votes
1
answers
281
views
Fast filesystem for virtual backup environments
I know a filesystem supporters' war is around the corner and there are plenty of fine-tuning guides for each filesystem in the wild, so please stick to the specifications: I'm asking what you would suggest to go FAST (R/W) with LARGE files, while reliability is not a priority.
Real-world scenario: copying large VM images (2+ TB) from backup target storage to local (SATA) storage.
Already tried: ZFS (on Linux) on the backup target, but the pools get fragmented and I/O (both R/W) slows down heavily after a few months of work.
Next in line for testing: XFS over LVM for the target storage; it seems like the obvious choice... or not? Any good advice about that? Thank you!
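For reference, a hedged sketch of an XFS setup biased toward large sequential files; the stripe geometry and device names are placeholders that should match the underlying RAID layout, if any:
```
# Placeholder geometry: 64 KiB stripe unit, 8 data disks
mkfs.xfs -d su=64k,sw=8 -L vmbackup /dev/vg_backup/lv_backup

# Mount options commonly used for big streaming files
mount -o noatime,inode64,largeio,logbufs=8 /dev/vg_backup/lv_backup /backup
```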
realpclaudio
(565 rep)
Apr 11, 2016, 11:52 AM
• Last activity: Apr 19, 2016, 03:08 PM
3
votes
1
answers
2885
views
Does dm-multipath schedule I/O?
I have a multipath device I'm interested in:
[root@xxx dm-7]# multipath -ll mpathf
mpathf (3600601609f013300227e5b5b3790e411) dm-7 DGC,VRAID
size=3.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| |- 7:0:1:5 sdl 8:176 active ready running
| `- 8:0:1:5 sdx 65:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 7:0:0:5 sdf 8:80 active ready running
`- 8:0:0:5 sdr 65:16 active ready running
So it looks like the block devices backing this path are `sdf`, `sdr`, `sdl`, and `sdx`. Just taking `sdf` as an example, I've set its I/O scheduler to `noop`:
[root@xxx dm-7]# cat /sys/block/sdf/queue/scheduler
[noop] anticipatory deadline cfq
The `mpathf` device maps to `/dev/dm-7` as the actual block device. I've just noticed that this has an I/O scheduler as well:
[root@xxx dm-7]# cat /sys/block/dm-7/queue/scheduler
noop anticipatory deadline [cfq]
**Question:** which one takes precedence? The scheduler on the multipath device or on the device it ends up relaying the I/O through?
I'm of course assuming that IOPs aren't scheduled twice (once for the mpath device and another for the individual block device the I/O is redirected into).
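For reference, a hedged sketch of how to inspect both layers (device names as in the question); it does not by itself answer which scheduler wins, but it makes the relationship visible:
```
# The dm device lists the underlying path devices it submits I/O to
ls /sys/block/dm-7/slaves/          # expect: sdf sdl sdr sdx

# Scheduler of the multipath device itself...
cat /sys/block/dm-7/queue/scheduler

# ...and of each underlying path device
for d in sdf sdr sdl sdx; do
    cat /sys/block/$d/queue/scheduler
done

# Either layer can be changed the same way, e.g.:
echo noop > /sys/block/dm-7/queue/scheduler
```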
Bratchley
(17244 rep)
Jan 27, 2015, 10:26 PM
• Last activity: Mar 2, 2015, 01:19 AM
-1
votes
1
answers
1833
views
Make TCP reconnect automatically and quickly
In my Debian Linux system, `apt-get dist-upgrade` sometimes stalls with a `[Waiting for headers]` message.
To make it pass faster I press Ctrl+C and start the command again.
How can I tune the system to avoid stalling, and to disconnect and reconnect automatically when a stalled connection is detected?
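For reference, a hedged sketch of the two layers usually tuned for this, APT's own timeout/retry settings and the kernel's TCP retransmission limit; the values are illustrative:
```
# /etc/apt/apt.conf.d/99timeouts -- give up on a stalled connection sooner and retry
#   Acquire::http::Timeout "20";
#   Acquire::Retries "3";

# Optional, system-wide: declare dead TCP connections dead faster
# (tcp_retries2 defaults to 15, i.e. roughly 15-30 minutes of retransmissions)
sysctl -w net.ipv4.tcp_retries2=8
```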
porton
(2226 rep)
Dec 23, 2014, 08:20 PM
• Last activity: Dec 24, 2014, 07:16 AM
2
votes
1
answers
1034
views
Memory Usage in Linux
How do I set up a limit on memory usage for a process?
Something similar to the open files limit in `/etc/security/limits.conf`:
ubuntu soft nofile 4096
ubuntu hard nofile 8192
E.g. when I launch a Python script doing a raw `eval` of JSON data from a 1.1 GB file, Python takes the whole of RAM while creating objects for each `dict` and `list` in `json.txt`. It hung my machine for 20-30 minutes. Thereafter:
# python read_data.py
Killed
The Ubuntu system is very stable today: it recovered from the hang; swap went up to 8 GB of usage, RAM emptied completely, and the script was gone.
I'm trying to find out: is this limit, which killed my script, configurable? Can I tune my system in such a way that every process which takes more than 70% of the current RAM size would just be killed or stopped, or something similar?
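For reference, a hedged sketch of the standard per-process mechanisms; the sizes are placeholders (limits.conf values are in KiB), and the `systemd-run` variant assumes a reasonably recent systemd:
```
# /etc/security/limits.conf -- cap virtual address space per process (KiB)
#   ubuntu soft as 4194304     # ~4 GiB soft limit
#   ubuntu hard as 6291456     # ~6 GiB hard limit

# Same idea for a single shell session
ulimit -v 4194304
python read_data.py            # now fails with MemoryError instead of hanging the box

# Or cap actual RAM with cgroups via systemd, letting the OOM killer act early
systemd-run --user --scope -p MemoryMax=4G python read_data.py
```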
youblind
(23 rep)
Aug 29, 2014, 11:08 AM
• Last activity: Aug 29, 2014, 11:41 AM
Showing page 1 of 20 total questions