I'm using FreeBSD 12.2 and fio 3.24 with the posixaio ioengine, testing NVMe drives. During the initial part of our testing, we hit the unit under test with a queue depth of 32 and numjobs of 4 for 3 hours (randwrite with a mix of block sizes). Usually about two-thirds of the way through, I notice the 4 processes go, one by one, from state aiospn at roughly 5-10% CPU to state CPUnnn at 100% CPU. The vfs.aio values are listed below.
The question is: which is the guilty party, FreeBSD or fio? My guess is that one of them isn't handling a dropped I/O request well.
vfs.aio.max_buf_aio: 8192
vfs.aio.max_aio_queue_per_proc: 65536
vfs.aio.max_aio_per_proc: 8192
vfs.aio.aiod_lifetime: 30000
vfs.aio.num_unmapped_aio: 0
vfs.aio.num_buf_aio: 0
vfs.aio.num_queue_count: 0
vfs.aio.max_aio_queue: 65536
vfs.aio.target_aio_procs: 4
vfs.aio.num_aio_procs: 4
vfs.aio.max_aio_procs: 32
vfs.aio.unsafe_warningcnt: 1
vfs.aio.enable_unsafe: 0
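For reference, a minimal fio job file approximating the workload described above might look like the sketch below. The device node, the exact block-size split, and the use of direct I/O are assumptions on my part; the original post only states QD 32, numjobs 4, 3 hours, and "a mix of blocksizes".

[global]
ioengine=posixaio
; direct I/O is an assumption; the post does not say whether it was used
direct=1
rw=randwrite
iodepth=32
numjobs=4
time_based=1
; 3 hours, expressed in seconds
runtime=10800
; hypothetical split; the actual mix of block sizes was not specified
bssplit=4k/50:8k/30:64k/20
group_reporting

[nvme-ut]
; assumed device node; nvd0 is a typical FreeBSD NVMe block device
filename=/dev/nvd0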
Asked by jim feldman (31 rep), Feb 3, 2021, 09:29 PM