Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
2
answers
97
views
swap on zfs blocks system
Over the past few years I have had two laptops exhibiting the same behaviour.
One was fixed by re-installation.
The other is currently broken.
The issue is that as soon as the laptop starts swapping, it feels as if it freezes when you are sitting at it.
Remotely you can see that it does not freeze completely.
Both run swap on a LUKS-encrypted partition with ZFS on top.
The new system has an additional 8 GB of swap that works as expected; the freezing starts once those 8 GB are used up.
~~~
$ cat /proc/swaps
Filename Type Size Used Priority
/dev/zd16 partition 31457276 3816 -3
/dev/dm-1 partition 8388604 494612 -2
~~~
`top` says:
~~~
top - 15:48:30 up 22 min, 4 users, load average: 24.41, 8.42, 4.94
Tasks: 436 total, 13 running, 352 sleeping, 71 stopped, 0 zombie
%Cpu(s): 0.0 us,100.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem : 62.2 total, 0.4 free, 61.9 used, 0.1 buff/cache
GiB Swap: 38.0 total, 29.7 free, 8.3 used. 0.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
22433 root 0 -20 0 0 0 R 82.6 0.0 0:17.96 zvol
21398 root 0 -20 0 0 0 R 80.2 0.0 0:18.93 z_wr_int
21395 root 1 -19 0 0 0 R 65.3 0.0 0:13.66 z_wr_iss
22420 root 0 -20 0 0 0 R 60.2 0.0 0:20.14 zvol
445 root 1 -19 0 0 0 R 56.2 0.0 0:25.17 z_wr_iss
21396 root 1 -19 0 0 0 R 45.1 0.0 0:13.17 z_wr_iss
1720 root 20 0 0 0 0 R 1.0 0.0 0:00.21 txg_sync
19644 root 20 0 0 0 0 D 0.8 0.0 0:04.56 kworker/u8:3+even+
19788 root 20 0 0 0 0 R 0.8 0.0 0:05.43 kworker/u8:6+flus+
~~~
`dmesg --follow` says:
~~~
[ 1476.236032] INFO: task vdev_autotrim:502 blocked for more than 122 seconds.
[ 1476.236052] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1476.236057] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1476.236060] task:vdev_autotrim state:D stack:0 pid:502 tgid:502 ppid:2 flags:0x00004000
[ 1476.236068] Call Trace:
[ 1476.236073]
[ 1476.236080] __schedule+0x27c/0x6b0
[ 1476.236093] schedule+0x33/0x110
[ 1476.236100] cv_wait_common+0x102/0x140 [spl]
[ 1476.236220] ? __pfx_autoremove_wake_function+0x10/0x10
[ 1476.236230] __cv_wait+0x15/0x30 [spl]
[ 1476.236263] vdev_autotrim_wait_kick+0x4d/0xb0 [zfs]
[ 1476.237396] vdev_autotrim_thread+0x704/0x7b0 [zfs]
[ 1476.238429] ? __pfx_vdev_autotrim_thread+0x10/0x10 [zfs]
[ 1476.239451] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[ 1476.239503] thread_generic_wrapper+0x5f/0x70 [spl]
[ 1476.239540] kthread+0xf2/0x120
[ 1476.239548] ? __pfx_kthread+0x10/0x10
[ 1476.239553] ret_from_fork+0x47/0x70
[ 1476.239560] ? __pfx_kthread+0x10/0x10
[ 1476.239564] ret_from_fork_asm+0x1b/0x30
[ 1476.239573]
[ 1599.259263] INFO: task zvol:385 blocked for more than 123 seconds.
[ 1599.259282] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.259286] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.259290] task:zvol state:D stack:0 pid:385 tgid:385 ppid:2 flags:0x00004000
[ 1599.259299] Call Trace:
[ 1599.259304]
[ 1599.259310] __schedule+0x27c/0x6b0
[ 1599.259323] schedule+0x33/0x110
[ 1599.259327] schedule_preempt_disabled+0x15/0x30
[ 1599.259332] rwsem_down_read_slowpath+0x284/0x4d0
[ 1599.259343] ? __list_add+0x17/0x40 [zfs]
[ 1599.260419] down_read+0x48/0xd0
[ 1599.260433] dbuf_dirty+0x4b9/0x730 [zfs]
[ 1599.261429] dmu_buf_will_dirty_impl+0xd2/0x1b0 [zfs]
[ 1599.262416] dmu_buf_will_dirty+0x16/0x30 [zfs]
[ 1599.263402] dmu_write_uio_dnode+0x92/0x170 [zfs]
[ 1599.264394] zvol_write+0x250/0x400 [zfs]
[ 1599.265386] zvol_write_task+0x12/0x30 [zfs]
[ 1599.266378] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.266427] ? __pfx_default_wake_function+0x10/0x10
[ 1599.266440] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.267440] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.267488] kthread+0xf2/0x120
[ 1599.267496] ? __pfx_kthread+0x10/0x10
[ 1599.267500] ret_from_fork+0x47/0x70
[ 1599.267506] ? __pfx_kthread+0x10/0x10
[ 1599.267510] ret_from_fork_asm+0x1b/0x30
[ 1599.267517]
[ 1599.267529] INFO: task txg_sync:492 blocked for more than 123 seconds.
[ 1599.267537] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.267542] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.267545] task:txg_sync state:D stack:0 pid:492 tgid:492 ppid:2 flags:0x00004000
[ 1599.267553] Call Trace:
[ 1599.267555]
[ 1599.267558] __schedule+0x27c/0x6b0
[ 1599.267564] ? default_wake_function+0x1a/0x40
[ 1599.267572] schedule+0x33/0x110
[ 1599.267577] taskq_wait+0x9c/0xd0 [spl]
[ 1599.267608] ? __pfx_autoremove_wake_function+0x10/0x10
[ 1599.267615] dmu_objset_sync+0x30d/0x4b0 [zfs]
[ 1599.268611] dsl_dataset_sync+0x5e/0x200 [zfs]
[ 1599.269610] dsl_pool_sync+0x9b/0x410 [zfs]
[ 1599.270697] spa_sync_iterate_to_convergence+0xde/0x220 [zfs]
[ 1599.271737] spa_sync+0x321/0x620 [zfs]
[ 1599.272767] txg_sync_thread+0x1e7/0x250 [zfs]
[ 1599.273791] ? __pfx_txg_sync_thread+0x10/0x10 [zfs]
[ 1599.274846] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[ 1599.274895] thread_generic_wrapper+0x5f/0x70 [spl]
[ 1599.274941] kthread+0xf2/0x120
[ 1599.274949] ? __pfx_kthread+0x10/0x10
[ 1599.274955] ret_from_fork+0x47/0x70
[ 1599.274960] ? __pfx_kthread+0x10/0x10
[ 1599.274965] ret_from_fork_asm+0x1b/0x30
[ 1599.274973]
[ 1599.274979] INFO: task vdev_autotrim:502 blocked for more than 246 seconds.
[ 1599.274986] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.274992] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.274995] task:vdev_autotrim state:D stack:0 pid:502 tgid:502 ppid:2 flags:0x00004000
[ 1599.275003] Call Trace:
[ 1599.275007]
[ 1599.275010] __schedule+0x27c/0x6b0
[ 1599.275018] schedule+0x33/0x110
[ 1599.275024] cv_wait_common+0x102/0x140 [spl]
[ 1599.275055] ? __pfx_autoremove_wake_function+0x10/0x10
[ 1599.275064] __cv_wait+0x15/0x30 [spl]
[ 1599.275094] vdev_autotrim_wait_kick+0x4d/0xb0 [zfs]
[ 1599.276136] vdev_autotrim_thread+0x704/0x7b0 [zfs]
[ 1599.277156] ? __pfx_vdev_autotrim_thread+0x10/0x10 [zfs]
[ 1599.278171] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[ 1599.278224] thread_generic_wrapper+0x5f/0x70 [spl]
[ 1599.278260] kthread+0xf2/0x120
[ 1599.278268] ? __pfx_kthread+0x10/0x10
[ 1599.278273] ret_from_fork+0x47/0x70
[ 1599.278279] ? __pfx_kthread+0x10/0x10
[ 1599.278283] ret_from_fork_asm+0x1b/0x30
[ 1599.278292]
[ 1599.278312] INFO: task vdev_autotrim:1730 blocked for more than 123 seconds.
[ 1599.278319] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.278324] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.278327] task:vdev_autotrim state:D stack:0 pid:1730 tgid:1730 ppid:2 flags:0x00004000
[ 1599.278335] Call Trace:
[ 1599.278338]
[ 1599.278341] __schedule+0x27c/0x6b0
[ 1599.278350] schedule+0x33/0x110
[ 1599.278355] cv_wait_common+0x102/0x140 [spl]
[ 1599.278387] ? __pfx_autoremove_wake_function+0x10/0x10
[ 1599.278394] __cv_wait+0x15/0x30 [spl]
[ 1599.278425] vdev_autotrim_wait_kick+0x4d/0xb0 [zfs]
[ 1599.279467] vdev_autotrim_thread+0x44e/0x7b0 [zfs]
[ 1599.280510] ? __pfx_vdev_autotrim_thread+0x10/0x10 [zfs]
[ 1599.281544] ? __pfx_thread_generic_wrapper+0x10/0x10 [spl]
[ 1599.281595] thread_generic_wrapper+0x5f/0x70 [spl]
[ 1599.281632] kthread+0xf2/0x120
[ 1599.281640] ? __pfx_kthread+0x10/0x10
[ 1599.281645] ret_from_fork+0x47/0x70
[ 1599.281650] ? __pfx_kthread+0x10/0x10
[ 1599.281655] ret_from_fork_asm+0x1b/0x30
[ 1599.281663]
[ 1599.281707] INFO: task zvol:4168 blocked for more than 123 seconds.
[ 1599.281715] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.281719] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.281722] task:zvol state:D stack:0 pid:4168 tgid:4168 ppid:2 flags:0x00004000
[ 1599.281731] Call Trace:
[ 1599.281733]
[ 1599.281737] __schedule+0x27c/0x6b0
[ 1599.281746] schedule+0x33/0x110
[ 1599.281750] schedule_preempt_disabled+0x15/0x30
[ 1599.281755] rwsem_down_read_slowpath+0x284/0x4d0
[ 1599.281763] down_read+0x48/0xd0
[ 1599.281770] dmu_tx_check_ioerr+0x3f/0x100 [zfs]
[ 1599.282813] dmu_tx_count_write+0xe3/0x1d0 [zfs]
[ 1599.283850] dmu_tx_hold_write_by_dnode+0x3a/0x60 [zfs]
[ 1599.284880] zvol_write+0x225/0x400 [zfs]
[ 1599.285892] zvol_write_task+0x12/0x30 [zfs]
[ 1599.286921] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.286976] ? __pfx_default_wake_function+0x10/0x10
[ 1599.286988] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.287997] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.288047] kthread+0xf2/0x120
[ 1599.288055] ? __pfx_kthread+0x10/0x10
[ 1599.288060] ret_from_fork+0x47/0x70
[ 1599.288066] ? __pfx_kthread+0x10/0x10
[ 1599.288071] ret_from_fork_asm+0x1b/0x30
[ 1599.288079]
[ 1599.288131] INFO: task zvol:22411 blocked for more than 123 seconds.
[ 1599.288139] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.288143] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.288146] task:zvol state:D stack:0 pid:22411 tgid:22411 ppid:2 flags:0x00004000
[ 1599.288154] Call Trace:
[ 1599.288157]
[ 1599.288160] __schedule+0x27c/0x6b0
[ 1599.288168] schedule+0x33/0x110
[ 1599.288173] schedule_preempt_disabled+0x15/0x30
[ 1599.288178] rwsem_down_write_slowpath+0x27e/0x550
[ 1599.288186] down_write+0x5c/0x80
[ 1599.288193] dnode_new_blkid+0xf7/0x180 [zfs]
[ 1599.289207] dbuf_dirty+0x54f/0x730 [zfs]
[ 1599.290202] dmu_buf_will_dirty_impl+0xd2/0x1b0 [zfs]
[ 1599.291195] dmu_buf_will_dirty+0x16/0x30 [zfs]
[ 1599.292202] dmu_write_uio_dnode+0x92/0x170 [zfs]
[ 1599.293198] zvol_write+0x250/0x400 [zfs]
[ 1599.294240] zvol_write_task+0x12/0x30 [zfs]
[ 1599.295260] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.295311] ? __pfx_default_wake_function+0x10/0x10
[ 1599.295323] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.296336] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.296385] kthread+0xf2/0x120
[ 1599.296393] ? __pfx_kthread+0x10/0x10
[ 1599.296398] ret_from_fork+0x47/0x70
[ 1599.296405] ? __pfx_kthread+0x10/0x10
[ 1599.296409] ret_from_fork_asm+0x1b/0x30
[ 1599.296417]
[ 1599.296422] INFO: task zvol:22412 blocked for more than 123 seconds.
[ 1599.296428] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.296433] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.296436] task:zvol state:D stack:0 pid:22412 tgid:22412 ppid:2 flags:0x00004000
[ 1599.296443] Call Trace:
[ 1599.296446]
[ 1599.296449] __schedule+0x27c/0x6b0
[ 1599.296458] schedule+0x33/0x110
[ 1599.296462] schedule_preempt_disabled+0x15/0x30
[ 1599.296467] rwsem_down_read_slowpath+0x284/0x4d0
[ 1599.296475] down_read+0x48/0xd0
[ 1599.296481] dmu_tx_check_ioerr+0x3f/0x100 [zfs]
[ 1599.297512] dmu_tx_count_write+0xe3/0x1d0 [zfs]
[ 1599.298535] dmu_tx_hold_write_by_dnode+0x3a/0x60 [zfs]
[ 1599.299563] zvol_write+0x225/0x400 [zfs]
[ 1599.300595] zvol_write_task+0x12/0x30 [zfs]
[ 1599.301619] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.301670] ? __pfx_default_wake_function+0x10/0x10
[ 1599.301682] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.302702] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.302753] kthread+0xf2/0x120
[ 1599.302761] ? __pfx_kthread+0x10/0x10
[ 1599.302766] ret_from_fork+0x47/0x70
[ 1599.302772] ? __pfx_kthread+0x10/0x10
[ 1599.302777] ret_from_fork_asm+0x1b/0x30
[ 1599.302785]
[ 1599.302789] INFO: task zvol:22413 blocked for more than 123 seconds.
[ 1599.302796] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.302800] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.302804] task:zvol state:D stack:0 pid:22413 tgid:22413 ppid:2 flags:0x00004000
[ 1599.302811] Call Trace:
[ 1599.302814]
[ 1599.302817] __schedule+0x27c/0x6b0
[ 1599.302825] schedule+0x33/0x110
[ 1599.302829] schedule_preempt_disabled+0x15/0x30
[ 1599.302834] rwsem_down_read_slowpath+0x284/0x4d0
[ 1599.302842] down_read+0x48/0xd0
[ 1599.302847] dmu_tx_check_ioerr+0x3f/0x100 [zfs]
[ 1599.303875] dmu_tx_count_write+0xe3/0x1d0 [zfs]
[ 1599.304881] dmu_tx_hold_write_by_dnode+0x3a/0x60 [zfs]
[ 1599.305969] zvol_write+0x225/0x400 [zfs]
[ 1599.306987] zvol_write_task+0x12/0x30 [zfs]
[ 1599.308009] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.308058] ? __pfx_default_wake_function+0x10/0x10
[ 1599.308070] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.309110] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.309160] kthread+0xf2/0x120
[ 1599.309170] ? __pfx_kthread+0x10/0x10
[ 1599.309175] ret_from_fork+0x47/0x70
[ 1599.309180] ? __pfx_kthread+0x10/0x10
[ 1599.309185] ret_from_fork_asm+0x1b/0x30
[ 1599.309193]
[ 1599.309196] INFO: task zvol:22414 blocked for more than 123 seconds.
[ 1599.309205] Tainted: P OE 6.8.0-39-generic #39-Ubuntu
[ 1599.309209] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1599.309212] task:zvol state:D stack:0 pid:22414 tgid:22414 ppid:2 flags:0x00004000
[ 1599.309220] Call Trace:
[ 1599.309223]
[ 1599.309226] __schedule+0x27c/0x6b0
[ 1599.309234] schedule+0x33/0x110
[ 1599.309239] schedule_preempt_disabled+0x15/0x30
[ 1599.309244] rwsem_down_read_slowpath+0x284/0x4d0
[ 1599.309252] ? __list_add+0x17/0x40 [zfs]
[ 1599.310294] down_read+0x48/0xd0
[ 1599.310306] dmu_buf_hold_array_by_dnode+0x4f/0x570 [zfs]
[ 1599.311341] ? dsl_dir_tempreserve_space+0x10f/0x160 [zfs]
[ 1599.312359] dmu_write_uio_dnode+0x5c/0x170 [zfs]
[ 1599.313360] zvol_write+0x250/0x400 [zfs]
[ 1599.314393] zvol_write_task+0x12/0x30 [zfs]
[ 1599.315434] taskq_thread+0x1f6/0x3c0 [spl]
[ 1599.315487] ? __pfx_default_wake_function+0x10/0x10
[ 1599.315498] ? __pfx_zvol_write_task+0x10/0x10 [zfs]
[ 1599.316518] ? __pfx_taskq_thread+0x10/0x10 [spl]
[ 1599.316568] kthread+0xf2/0x120
[ 1599.316577] ? __pfx_kthread+0x10/0x10
[ 1599.316582] ret_from_fork+0x47/0x70
[ 1599.316587] ? __pfx_kthread+0x10/0x10
[ 1599.316592] ret_from_fork_asm+0x1b/0x30
[ 1599.316601]
~~~
I have the feeling I simply need to do something to /dev/zd16 (aka: /dev/zvol/rpool/swap).
(The 8 GB swap is an 8 GB LUKS partition, not ZFS.)
~~~
$ zfs get all rpool/swap
NAME PROPERTY VALUE SOURCE
rpool/swap type volume -
rpool/swap creation Fri Aug 2 0:39 2024 -
rpool/swap used 30.5G -
rpool/swap available 176G -
rpool/swap referenced 362M -
rpool/swap compressratio 1.00x -
rpool/swap reservation none default
rpool/swap volsize 30G local
rpool/swap volblocksize 16K default
rpool/swap checksum on default
rpool/swap compression lz4 inherited from rpool
rpool/swap readonly off default
rpool/swap createtxg 1260475 -
rpool/swap copies 1 default
rpool/swap refreservation 30.5G local
rpool/swap guid 13973446581616721601 -
rpool/swap primarycache all default
rpool/swap secondarycache all default
rpool/swap usedbysnapshots 0B -
rpool/swap usedbydataset 362M -
rpool/swap usedbychildren 0B -
rpool/swap usedbyrefreservation 30.1G -
rpool/swap logbias latency default
rpool/swap objsetid 84726 -
rpool/swap dedup off default
rpool/swap mlslabel none default
rpool/swap sync standard inherited from rpool
rpool/swap refcompressratio 1.00x -
rpool/swap written 362M -
rpool/swap logicalused 364M -
rpool/swap logicalreferenced 364M -
rpool/swap volmode default default
rpool/swap snapshot_limit none default
rpool/swap snapshot_count none default
rpool/swap snapdev hidden default
rpool/swap context none default
rpool/swap fscontext none default
rpool/swap defcontext none default
rpool/swap rootcontext none default
rpool/swap redundant_metadata all default
rpool/swap encryption aes-256-gcm -
rpool/swap keylocation none default
rpool/swap keyformat passphrase -
rpool/swap pbkdf2iters 100000 -
rpool/swap encryptionroot rpool -
rpool/swap keystatus available -
~~~
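For context, the OpenZFS project's swap-on-zvol guidance has long recommended a specific set of properties for swap volumes, several of which differ from the `zfs get all` output above. The following is only a hedged sketch of that guidance (the property values come from the OpenZFS FAQ, not from the question, so verify them against the documentation for your release before changing anything):
~~~
# Commonly recommended properties for a swap zvol (check the OpenZFS
# swap-on-ZVOL guidance for your version before applying):
zfs set primarycache=metadata rpool/swap        # don't cache swap pages in the ARC as well
zfs set secondarycache=none rpool/swap          # keep swap out of any L2ARC
zfs set compression=zle rpool/swap              # only compress runs of zeroes
zfs set logbias=throughput rpool/swap
zfs set sync=always rpool/swap
zfs set com.sun:auto-snapshot=false rpool/swap  # never snapshot the swap volume
~~~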
Ole Tange
(37348 rep)
Aug 4, 2024, 04:12 PM
• Last activity: Aug 1, 2025, 05:10 AM
7
votes
1
answers
2083
views
Btrfs/ZFS Network Replication
Is it possible to replicate a ZFS or Btrfs raid volume in real-time (or as close to as possible, network specs aside) over a network?
ZFS and Btrfs are ideal because of their CoW properties.
I'm thinking of something similar to DRBD, but DRBD won't work because it requires a single block device, and we're ruling out the option of exporting each disk as a DRBD device because that would get messy.
I don't want to use send/receive because it would be too slow, even if scripted.
Ideally, I'd like something relatively simple to avoid unnecessary complexity.
DevinM
(171 rep)
Nov 10, 2015, 12:23 AM
• Last activity: Jul 30, 2025, 04:05 PM
1
votes
1
answers
61
views
How do I get FreeBSD multiboot to work with zfs
I have one hard drive that I want to split into two separate FreeBSD OS installs, and I'm running into a wall trying to figure out how to get it working.
I followed Installing FreeBSD Root on ZFS using GPT in order to get a single OS up and running. There are no problems and everything works great. However, I do not know how to modify this in order to get a second OS up and running.
I repeat from step 2 onward to create the second partition and second zpool.
Once I restart, the FreeBSD boot menu does not show any boot options (i.e. no option for a Boot Environment in option 8).
If I do a `zpool import` it will show my second zpool, which is part of the other partition. I can even mount it using `zpool import -R /mnt zroot2` and see the files.
This is the point where I'm quite lost. I can't figure out what I need to do. I looked into `efibootmgr` and `bectl`, but I'm not sure either is the right option. `bectl` seems to create Boot Environments from snapshots and not from other partitions. `efibootmgr` seems to need the file system mounted.
Essentially my current progress is:
- nda0p1 efi
- nda0p2 freebsd-swap
- nda0p3 freebsd-zfs (OS 1)
- nda0p4 freebsd-zfs (OS 2)
How do I get the FreeBSD boot manager to recognize the second OS?
edit1: I tried installing twice through the UI and it did put another entry into efibootmgr, but both options boot the newest install even though they point to separate files (`\EFI\BOOT\BOOTX64.efi` and `\EFI\freebsd\loader.efi`). The original install doesn't boot.
edit2: I think the reason they both boot the same instance is that they're both the same EFI loader. You copy the EFI loader to the `BOOTX64.efi` location in the steps from above. From what I understand, this loader automatically finds the FreeBSD instance.
How does one create an EFI that points directly to an instance?
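One hedged approach (the paths and label below are made up for illustration) is to keep a separate copy of loader.efi per install on the ESP and register each copy as its own UEFI entry with FreeBSD's efibootmgr(8):
~~~
# Assumes the ESP (nda0p1) is mounted at /boot/efi and that the second
# install's loader gets its own directory there:
mkdir -p /boot/efi/efi/freebsd-os2
cp /boot/loader.efi /boot/efi/efi/freebsd-os2/loader.efi

# Create and activate a second UEFI boot entry for that copy:
efibootmgr --create --activate \
    --label "FreeBSD OS2" \
    --loader /boot/efi/efi/freebsd-os2/loader.efi

# Note: this only creates the second menu entry; pointing each loader
# at its own pool/dataset is a separate step (see loader.efi(8)).
~~~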
quickblueblur
(199 rep)
Jul 28, 2025, 03:30 PM
• Last activity: Jul 29, 2025, 05:22 PM
1
votes
0
answers
9
views
ZFS: `class=dio_verify_rd` events logged in syslog
I am using ZFS on an Ubuntu 24.04 host and have raw VM images stored on a dataset. QEMU/KVM uses them via
I've had this running like this for several years, and in April (~3 Months ago) I upgraded Ubuntu, and with that ZFS, which is now at this version:
~~~
modinfo zfs | grep version
version:        2.3.1-1ubuntu1
srcversion:     3CBA3F639287005F1F50BB3
vermagic:       6.14.0-24-generic SMP preempt mod_unload modversions

zfs --version
zfs-2.2.2-0ubuntu9.3
zfs-kmod-2.3.1-1ubuntu1
~~~
Why the mismatch between the userland (2.2.2) and kernel module (2.3.1) versions?
Direct I/O is enabled, if this is the correct query:
~~~
cat /sys/module/zfs/parameters/zfs_dio_enabled 2>/dev/null || echo "Parameter not found"
1
~~~
Since today I have been getting tons of syslog entries of the following type:
~~~
2025-07-24T23:01:28.720398+02:00 host zed: eid=1131 class=dio_verify_rd pool='tank' size=131072 offset=623438361600 priority=0 err=0 flags=0x100280080 bookmark=3737:2:0:254570
2025-07-24T23:01:28.720526+02:00 host zed: eid=1132 class=dio_verify_rd pool='tank' size=131072 offset=623445984256 priority=0 err=0 flags=0x100280080 bookmark=3737:2:0:238679
2025-07-24T23:01:30.287882+02:00 host zed: eid=1133 class=dio_verify_rd pool='tank' size=131072 offset=2604696819712 priority=0 err=0 flags=0x100280080 bookmark=3737:2:0:187028
~~~
`zpool status` shows no issues:
~~~
zpool status -v tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:30:08 with 0 errors on Tue Jul 15 04:30:09 2025
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            nvme-WD_BLACK_SN850X_4000GB_2xxxxxxxxxxx  ONLINE       0     0     0
            nvme-WD_BLACK_SN850X_4000GB_2xxxxxxxxxxx  ONLINE       0     0     0

errors: No known data errors
~~~
The host runs 7 VMs: 5 are Ubuntu 22.04/24.04, one is Windows 7, and the one causing the logged events is Windows 10. Only the Windows 10 VM shows this symptom.
The VM booted just fine and was usable without noticeable issues (but I don't use it much).
I have now changed the QEMU/KVM disk config to cache=writeback and the log entries ceased to appear.
Are there known issues with this new ZFS feature?
Should I disable it globally at ZFS level? If so, how would I do that?
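Two knobs exist in this area: the `zfs_dio_enabled` module parameter already queried above, and (in OpenZFS 2.3) a per-dataset `direct` property. A hedged sketch follows; the dataset name is invented, since the question doesn't name the dataset holding the images:
~~~
# Turn Direct I/O off module-wide at runtime (same parameter as queried above):
echo 0 > /sys/module/zfs/parameters/zfs_dio_enabled

# Make that persistent across reboots:
echo "options zfs zfs_dio_enabled=0" > /etc/modprobe.d/zfs-dio.conf

# Or leave it on globally and opt only the VM dataset out of O_DIRECT
# (the 'direct' property accepts standard | always | disabled):
zfs set direct=disabled tank/vm-images
~~~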
Daniel F
(937 rep)
Jul 25, 2025, 05:58 PM
• Last activity: Jul 25, 2025, 06:11 PM
1
votes
1
answers
1929
views
Mounting nested ZFS filesystems exported via NFS
I have a Linux (Ubuntu) server with a ZFS pool containing nested filesystems, e.g.:
~~~
zfs_pool/root_fs/fs1
zfs_pool/root_fs/fs2
zfs_pool/root_fs/fs3
~~~
I have enabled NFS sharing on the root filesystem (via ZFS, not by editing /etc/exports). Nested filesystems inherit this property:
~~~
NAME              PROPERTY  VALUE                                SOURCE
zfs_pool/root_fs  sharenfs  rw=192.168.1.0/24,root_squash,async  local

NAME                  PROPERTY  VALUE                                SOURCE
zfs_pool/root_fs/fs1  sharenfs  rw=192.168.1.0/24,root_squash,async  inherited from zfs_pool/root_fs
~~~
On the client machines (Linux, mostly Ubuntu), the only filesystem I explicitly mount is the root filesystem:
~~~
mount -t nfs zfsserver:/zfs_pool/root_fs /root_fs_mountpoint
~~~
Nested filesystems are mounted automatically when they are accessed. I didn't need to configure anything to make this work.
This is great, but I'd like to know who is providing this feature.
Is it ZFS? Is it NFS? Or is it something else on the client side (something like autofs, which isn't even installed)?
I'd like to change the timeout after which nested filesystems are unmounted, but I don't even know which configuration to edit and which documentation to read.
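Assuming the automatic submounts come from the kernel NFS client crossing the server's nested exports (rather than from ZFS or an automounter), the expiry knob would be the NFS client's mountpoint timeout; a hedged check, valid only if that is indeed the mechanism here:
~~~
# Current expiry, in seconds, for automatically created NFS submounts:
cat /proc/sys/fs/nfs/nfs_mountpoint_timeout

# Raise it at runtime (example value of 30 minutes):
sysctl fs.nfs.nfs_mountpoint_timeout=1800

# List which submounts currently exist under the explicitly mounted root:
findmnt -R /root_fs_mountpoint
~~~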
lgpasquale
(291 rep)
Oct 15, 2018, 09:22 AM
• Last activity: Jul 21, 2025, 04:08 PM
0
votes
1
answers
43
views
ZFS (ZoL). New VDEVs with larger disks: OK or not?
Can I sanity check something, please?
- I have a single ZFS pool, containing multiple RAIDZ2 VDEVs, each of six disks
- One of the SAS JBODs has empty bays in it
- I'd like to expand the pool[1] by filling all the empty bays with disks
- …and using the new disks to create new, additional (6-disk RAIDZ2) VDEVs, i.e. just like the existing VDEVs

_However:_
The disks that are currently in the JBOD are **12TB**, and I'd like to fill the unused bays with **18TB** disks. I can't see any reason why this won't work, or why it might be a problem[2].
Am I missing anything?
(I've read various things online, and I've been through the documentation; I *think* what I'm proposing is OK. But I'd be grateful for a second - or third, or fourth - opinion. I'm using ZFS on Linux zfs-0.8.3-1ubuntu12.18 on Ubuntu 20.04).
Thanks for reading.
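For reference, the expansion step itself would be a single `zpool add` per new vdev; a hedged sketch in which the pool name and device paths are placeholders, and where `-n` is a dry run that only prints the resulting layout:
~~~
# Dry run: show what the pool would look like with a new 6 x 18TB RAIDZ2 vdev
zpool add -n tank raidz2 \
    /dev/disk/by-id/ata-18TB-1 /dev/disk/by-id/ata-18TB-2 \
    /dev/disk/by-id/ata-18TB-3 /dev/disk/by-id/ata-18TB-4 \
    /dev/disk/by-id/ata-18TB-5 /dev/disk/by-id/ata-18TB-6

# If the layout looks right, run the same command again without -n
~~~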
---
spoiler_alert
(3 rep)
Jul 18, 2025, 11:57 AM
• Last activity: Jul 18, 2025, 01:43 PM
8
votes
1
answers
491
views
Is it worth the hassle and risk of reformatting an NVME to use 4K blocks on a ZFS pool created with ashift=12?
I recently upgraded the NVME drives on my workstation machine, from a pair of Samsung EVO 970 512GB drives to a pair of of Kingston Fury 2TB drives. All went well, and I even converted the machine from old BIOS boot to UEFI boot. No problem.
However, I just noticed that the NVME drives are formatted with 512 byte blocks rather than 4KiB blocks. I mistakenly assumed that they'd be 4K and didn't check.
~~~
# nvme list
Node          Generic     SN    Model                Namespace  Usage          Format     FW Rev
------------  ----------  ----  -------------------  ---------  -------------  ---------  --------
/dev/nvme0n1  /dev/ng0n1  XXXX  KINGSTON SFYRD2000G  0x1        2.00TB/2.00TB  512B + 0B  EIFK31.7
/dev/nvme1n1  /dev/ng1n1  XXXX  KINGSTON SFYRD2000G  0x1        2.00TB/2.00TB  512B + 0B  EIFK31.7

# nvme id-ns -H /dev/nvme0n1 | grep Data.Size
LBA Format  0 : Metadata Size: 0 bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format  1 : Metadata Size: 0 bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better
~~~
I'm using partitions on these drives for GRUB BIOS boot (p1), ESP (p2), an mdadm RAID-1 ext4 /boot filesystem (p3) with lots of space for kernels & ISO images, swap space (p4), L2ARC (p5) and ZIL (p6) for a HDD zfs pool, and the ZFS rootfs (p7).
The BIOS boot partition is obsolete now, since I've switched to UEFI, but it resides in otherwise unused space before sector 2048, so it isn't important.
They're both partitioned identically.
~~~
# gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 3907029168 sectors, 1.8 TiB
Model: KINGSTON SFYRD2000G
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 9E7187C9-3ED2-46EF-A695-E72489F2BEC3
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 143 sectors (71.5 KiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1              34            2047   1007.0 KiB  EF02  BIOS boot partition
   2            2048         1050623   512.0 MiB   EF00  EFI system partition
   3         1050624         8390655   3.5 GiB     FD00
   4         8390656       142608383   64.0 GiB    8200  Linux swap
   5       142608384       276826111   64.0 GiB    BF08  Solaris Reserved 2
   6       276826112       285214719   4.0 GiB     BF09  Solaris Reserved 3
   7       285214720      3907028991   1.7 TiB     BF00  Solaris root
~~~
Anyway, I created the ZFS pool with ashift=12 for 4 KiB block sizes, so it's always going to be reading and writing in multiples of 4K at a time.
What I want to know is whether there will be a noticeable performance difference if I reformat the NVMe drives to use 4K sectors.
I know (roughly) how to do that using the nvme command while booted from a rescue image, but given the hassle involved, the amount of downtime, and the risk of losing data if I make a mistake or if disaster strikes during one of the periods when the ZFS pool is in a degraded state, I only want to do it if there is a significant benefit... significant, to me, meaning at least a 5 or 10% improvement, not just 1 or 2%.
(I have backups of the root pool - multiple nightly backups in multiple locations - but I'd prefer to avoid restoring from backup)
I don't care about performance for the ESP or /boot partitions. Swap & L2ARC might benefit. The ZIL rarely gets used and probably won't be noticeable. The main concern is performance of the zpool partition itself.
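For reference, the reformat itself is one nvme-cli call per drive, selecting the 4096-byte LBA format listed above as format 1. It destroys everything on the namespace, so this hedged sketch would only ever be run, one drive at a time, from a rescue environment with that drive already detached from the pool and the mdadm array:
~~~
# DESTROYS ALL DATA on the namespace: offline/detach the drive first.
# Switch the namespace to LBA format 1 (4096-byte data size):
nvme format /dev/nvme0n1 --lbaf=1

# Confirm the new in-use format before repartitioning and resilvering:
nvme id-ns -H /dev/nvme0n1 | grep "Data Size"
~~~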
cas
(81932 rep)
Jul 15, 2025, 02:25 PM
• Last activity: Jul 15, 2025, 05:23 PM
2
votes
0
answers
39
views
Mapping file UIDs when mounting ZFS on linux
I have a ZFS filesystem on an external drive, but the user ID of my accounts don't match between computers. Is there a way to map the UID when mounting (without actually changing it on the filesystem) so that files which are owned by, for example, uid 501, appear as owned by uid 1000? Ideally there's a method that can do this directly, without resorting to bind mounts or overlay filesystems.
I've read that Linux supports this functionality natively since 5.12, and that it should even be possible to map IDs in a way that is transparent to the filesystem driver itself, but I can't figure out how to achieve this on the command line.
I'm running Fedora 42 with linux kernel version 6.15.4-200 and zfs release 2.3.3-1 from the zfsonlinux.org fedora repository.
---
The ID-mapped mount functionality now seems to exist in the mount(8) command as the option **--map-users** _id-mount_:_id-host_:_id-range_. Using an OpenZFS internal option to ignore errors when using the regular mount command (as opposed to `zfs mount`), I've tried:
~~~
sudo mount -tzfs -o zfsutil --map-users 1000:501:1 mypool/myfiles /mnt/mypool/myfiles
~~~
But this did not seem to map anything.
philippe
(221 rep)
Jul 6, 2025, 11:18 AM
• Last activity: Jul 6, 2025, 05:39 PM
1
votes
2
answers
296
views
ZFS which uses SAN (ISCSI/FC/NVMEoe...) as VDEV
**Is it reasonable to use ISCSI/FC as a block device and use it as a vdev for ZFS?**
Reasoning:
There are only so many disks (even with external enclosures) that you can put in one host. We are looking into a solution where we would build a head unit (a server acting as iSCSI initiator) and disk hosts (servers acting as iSCSI targets). The disk hosts would expose their disks (either each one individually or RAID-ed), and the head unit would connect to those disk units over Ethernet via iSCSI and use the exported disks as VDEVs for a giant ZFS pool. Remember, the Z in ZFS means zettabyte! Meaningless if you cannot connect a zettabyte's worth of disks somehow :)
assassinatorr
(111 rep)
Jan 9, 2024, 06:47 PM
• Last activity: Jul 2, 2025, 04:00 PM
2
votes
1
answers
83
views
New disk array on linux
My son and I are about to head off on an adventure converting 4 disks into one array.
Let me give you some background on what the layout looks like today. We are running gentoo linux and have 4 10TB disks.
Two of them currently have data on them, but not in an array. Two disks are unused at this time.
What I would like to do is:
* create an array, either software raid (mdadm) or ZFS pool
* mount that new array on an alternate mount point (i.e. /mnt/blah)
* copy the data from the other disks onto this new array,
* then finally add the two older disks into that new array.
I'm not worried about fault tolerance, but space and performance would be great. What is the best pathway to reach this goal?
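If the ZFS route is taken, the four steps above map directly onto zpool/zfs commands; this is only a hedged sketch in which the pool name, mountpoint and device paths are placeholders (and, with no redundancy, losing any one disk loses the pool):
~~~
# 1. Create a pool striped over the two empty disks, mounted at an alternate point:
zpool create -m /mnt/blah tank \
    /dev/disk/by-id/ata-NEWDISK1 /dev/disk/by-id/ata-NEWDISK2

# 2. Copy the data over from the old disks (still mounted wherever they are now):
rsync -aHAX /mnt/olddisk1/ /mnt/blah/
rsync -aHAX /mnt/olddisk2/ /mnt/blah/

# 3. Once the copies are verified, wipe the old disks and grow the pool with them:
zpool add tank /dev/disk/by-id/ata-OLDDISK1 /dev/disk/by-id/ata-OLDDISK2
~~~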
Bejiita78
Jun 30, 2025, 05:27 PM
• Last activity: Jul 2, 2025, 11:07 AM
2
votes
1
answers
3135
views
How can I change a ZFS mountpoint from legacy to be handled by ZFS
Currently using ZFS on Arch Linux. I have two datasets that I originally set up with legacy mountpoints:
~~~
# zfs get mountpoint tank/data/home
NAME            PROPERTY    VALUE   SOURCE
tank/data/home  mountpoint  legacy  local

# zfs get mountpoint tank/data/home/kevdog
NAME                   PROPERTY    VALUE   SOURCE
tank/data/home/kevdog  mountpoint  legacy  local
~~~
I have corresponding entries within /etc/fstab for these mountpoints. I'd like to change these mounts to be handled by ZFS rather than mount.
I logged in as root, and then did the following:
~~~
umount /home/kevdog
umount /home
zfs set mountpoint=/home tank/data/home
zfs set mountpoint=/home/kevdog tank/data/home/kevdog
~~~
I went ahead and commented out the corresponding fstab entries for these mounts.
At this point I rebooted the system, however I ran into a problem: the dataset tank/data/home/kevdog was mounted at /home/kevdog, but the directory was totally empty. After undoing what I just described above (setting these datasets back to legacy management), the /home/kevdog directory was no longer empty.
Just curious to know why the process didn't work. Did I have to export/import the pool again to make this work? Did I forget to do something else?
KevDog
(51 rep)
Feb 1, 2020, 10:16 PM
• Last activity: Jun 26, 2025, 03:04 AM
0
votes
0
answers
116
views
CachyOS (Arch linux based) ZFS root with GRUB on legacy BIOS
I'm trying to install Cachyos on zfs raidz with 4 hdd 6Tb each.
Hardware is HP Microgen 8.
The system boots and then immediately restarts after the message "Loading initial ramdisk".
Any idea what I'm missing?
**Each command from the list below completed its work without errors.**
Prepare disks:
for disk in /dev/disk/by-id/ata-TOSHIBA1 \
/dev/disk/by-id/ata-WDC1 \
/dev/disk/by-id/ata-WDC2 \
/dev/disk/by-id/ata-WDC3; do
parted --script $disk mklabel gpt
parted --script $disk mkpart primary 1MiB 2MiB # grub Boot partition
parted --script $disk set 1 bios_grub on
parted --script $disk mkpart primary 2MiB 8GiB # Swap (8GB)
parted --script $disk mkpart primary 8GiB 100% # ZFS
parted --script $disk name 2 swap
parted --script $disk name 3 zfs
done
Create pool:
zpool create -f \
-o compatibility=grub2 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O normalization=formD \
-O mountpoint=none \
-O canmount=off \
-o feature@async_destroy=enabled \
-o feature@device_rebuild=enabled \
-o feature@resilver_defer=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@spacemap_v2=enabled \
-o feature@zpool_checkpoint=enabled \
-o ashift=12 \
zfs_pool raidz1 \
/dev/disk/by-id/ata-*-part3
ZFS Datasets:
zfs create -o mountpoint=none zfs_pool/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zfs_pool/ROOT/cachyos
zfs create -o mountpoint=/home -o canmount=on zfs_pool/HOME
zfs create -o mountpoint=/var -o canmount=on zfs_pool/VAR
Mount ZFS:
zpool export zfs_pool
zpool import -R /mnt zfs_pool
zfs mount -a
Generate zpool.cache
zpool set cachefile=/etc/zfs/zpool.cache zfs_pool
Install system:
pacstrap /mnt base base-devel linux-cachyos-lts linux-cachyos-lts-headers zfs grub mdadm nano mkinitcpio sudo linux-firmware networkmanager dhcpcd
Copy zpool.cache
cp /etc/zfs/zpool.cache /mnt/etc/zfs/
Add zfs to HOOKS before 'filesystems' in /mnt/etc/mkinitcpio.conf, and set COMPRESSION=gzip.
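For illustration only (the question doesn't show the full hook list), the edited /mnt/etc/mkinitcpio.conf would end up looking something like this, with zfs placed before filesystems:
~~~
# /mnt/etc/mkinitcpio.conf (sketch; keep whatever other hooks the install shipped with)
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)
COMPRESSION="gzip"
~~~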
Chroot to new system:
arch-chroot /mnt
Recreate the initramfs:
mkinitcpio -P
Update grub /etc/default/grub:
GRUB_CMDLINE_LINUX="root=ZFS=zfs_pool/ROOT/cachyos boot=zfs"
GRUB_PRELOAD_MODULES="part_gpt part_msdos zfs"
Install grub to HDD:
grub-install --target=i386-pc /dev/disk/by-id/ata-TOSHIBA1
grub-install --target=i386-pc /dev/disk/by-id/ata-WDC1
grub-install --target=i386-pc /dev/disk/by-id/ata-WDC2
grub-install --target=i386-pc /dev/disk/by-id/ata-WDC3
Generate new Grub cfg:
grub-mkconfig -o /boot/grub/grub.cfg
Finish:
exit
umount -R /mnt
zpool export zfs_pool
reboot
I get a reboot after GRUB tries to boot CachyOS; it shows the messages:
Loading Linux linux-cachyos-lts ...
Loading initial ramdisk ...
/boot/grub/grub.cfg looks like this:
...
menuentry 'CachyOS Linux' --class cachyos --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-2fa7a1906badbbdd' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod part_gpt
insmod part_gpt
insmod part_gpt
insmod zfs
set root='hd2,gpt3'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd2,gpt3 --hint-efi=hd2,gpt3 --hint-baremetal=ahci2,gpt3 --hint-bios=hd1,gpt3 --hint-efi=hd1,gpt3 --hint-baremetal=ahci1,gpt3 --hint-bios=hd0,gpt3 --hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3 --hint-bios=hd3,gpt3 --hint-efi=hd3,gpt3 --hint-baremetal=ahci3,gpt3 2fa7a1906badbbdd
else
search --no-floppy --fs-uuid --set=root 2fa7a1906badbbdd
fi
echo 'Loading Linux linux-cachyos-lts ...'
linux /ROOT/cachyos@/boot/vmlinuz-linux-cachyos-lts root=ZFS=zfs_pool/ROOT/cachyos rw root=ZFS=zfs_pool/ROOT/cachyos boot=zfs loglevel=7
echo 'Loading initial ramdisk ...'
initrd /ROOT/cachyos@/boot/initramfs-linux-cachyos-lts.img
}
...
fdisk -l for each disk looks like this:
fdisk -l /dev/disk/by-id/ata-TOSHIBA1
Disk /dev/disk/by-id/ata-TOSHIBA1: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: TOSHIBA MG04ACA6
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9A...
Device Start End Sectors Size Type
/dev/disk/by-id/ata-TOSHIBA1-part1 2048 4095 2048 1M BIOS boot
/dev/disk/by-id/ata-TOSHIBA1-part2 4096 16777215 16773120 8G Linux filesystem
/dev/disk/by-id/ata-TOSHIBA1-part3 16777216 11721043967 11704266752 5.5T Linux filesystem
What am I missing?
KonstantinKuklin
(205 rep)
Jun 13, 2025, 12:22 AM
• Last activity: Jun 13, 2025, 05:25 PM
1
votes
1
answers
1244
views
Unable to mount encrypted ZFS filesystem after reboot
~~~
Key load error: Failed to open key material file: Input/Output Error.

Command: mount -o zfsutil -t zfs rpool/ROOT/ubuntu_uy913 /root/
Message: zfs_mount_at() failed: encryption key not loaded
         zfs_mount_at() failed: encryption key not loaded
Mounting rpool/ROOT/ubuntu_uy913 on /root/ failed: Permission denied.
Error: 2.

Failed to mount rpool/ROOT/ubuntu_uy913 on /root/.
Please manually mount the filesystem and exit.

BusyBox v1.30.1 (Ubuntu 1:1.30.1-7ubuntu2) built-in shell (ash).
Enter 'help' for a list of built-in commands.

(initramfs):
~~~
Hello everyone: after turning off my machine, when I tried to turn it back on and entered my passphrase, this is what I encountered. I'm now quite upset because I'm not sure what the issue is.
MoonMiddays
(41 rep)
Jul 13, 2024, 05:55 AM
• Last activity: Jun 8, 2025, 12:22 PM
0
votes
1
answers
344
views
ZFS error for load-key
I've tried everything I know until I'm exhausted. Please, if you can help me, I need to access my project stored on my hard drive.
The command I used:
~~~
sudo zfs load-key rpool/ROOT/ubuntu_uy913x
~~~
And the error message I received:
~~~
Key load error: Keys must be loaded for encryption root of 'rpool/ROOT/ubuntu_uy913x' (rpool).
~~~
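Reading the error message literally, the key has to be loaded on the encryption root `rpool` rather than on the child dataset named in the command; a hedged sketch of what that would look like:
~~~
# Load the key at the encryption root the error message points at:
sudo zfs load-key rpool        # or recursively: sudo zfs load-key -r rpool

# Check that the keys are now available, then mount:
zfs get -r keystatus rpool
sudo zfs mount -a
~~~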
MoonMiddays
(41 rep)
Jul 13, 2024, 11:13 AM
• Last activity: Jun 7, 2025, 02:36 PM
0
votes
0
answers
28
views
External USB drive with zfs pool won't go to sleep (most of the time)
I have an external hard disk with a ZFS pool on it. The data is accessed very infrequently, so I would like the drive to go to sleep after some inactivity, which I set to 10 minutes:
~~~
hdparm -S 120 /dev/disk/by-id/usb-Seagate_Expansion_Desk_NAABDT6W-0\:0
~~~
With a cron script I check the status of the drive over time:
~~~
hdparm -C /dev/disk/by-id/usb-Seagate_Expansion_Desk_NAABDT6W-0\:0
~~~
This works sometimes, but most of the time the drive remains "active/idle" the whole time, even if no access occurs.
The ZFS pool is not performing scrubs. There are no zvols. All datasets (some of them encrypted) are mounted under /zpseagate8tb/ (and /zpseagate8tb_bind/ for use in LXC containers).
I confirmed that no file descriptors to any of the datasets are open:
~~~
# lsof -w | grep zpseagate8tb
#
~~~
What else could be preventing the drive from going to sleep?
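A couple of hedged, read-only checks for background I/O that lsof won't show (the pool name here is inferred from the mountpoint above; these commands only observe, they don't change anything):
~~~
# Watch whether any reads or writes actually reach the pool:
zpool iostat -v zpseagate8tb 5

# Follow pool events (scrubs, trims, config syncs) as they happen:
zpool events -f zpseagate8tb
~~~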
divB
(218 rep)
May 29, 2025, 05:36 PM
2
votes
1
answers
122
views
Which zfs permission(s) is/are needed to change readonly status?
~~~
zfs get readonly tank/mydata
NAME         PROPERTY  VALUE  SOURCE
tank/mydata  readonly  off    local

zfs set readonly=on tank/mydata
echo $?
0

zfs get readonly tank/mydata
NAME         PROPERTY  VALUE  SOURCE
tank/mydata  readonly  on     local

sudo -u acmeuser zfs set readonly=on tank/mydata
cannot mount 'tank/mydata': Insufficient privileges
property may be set but unable to remount filesystem
echo $?
255
~~~
I would like to get this user to be able to run this command. How do I get a zero returned?
Note: I don't actually need to be able to change the readonly state; I just need the command that sets it to ON when it is already ON to return a zero status rather than 255.
This has to do with a script that executes this command even if readonly is already set to ON, which would be difficult to change.
Note that `zfs allow tank/mydata` returns both the mount and the readonly permissions. Which others are necessary?
npr_se
(43 rep)
Mar 24, 2025, 02:22 PM
• Last activity: May 24, 2025, 01:03 AM
0
votes
1
answers
479
views
How to replicate the posix acl default on zfs/nfsv4 acl on Solaris?
Suppose I want a dir in which all files and directories created inside get the group of the dir's group owner, and 770 as the default permission.
With POSIX ACLs this is really easy:
~~~
# create a dir..
mkdir proof
# inherit group "video" (in this example)
chmod g+s proof/
chgrp video proof/
# with setfacl, give the default group ACL rwx permissions
setfacl -d -m g:video:rwx proof
# others are not allowed
setfacl -d -m o:--- proof/
chmod o-x proof
# give the acl
setfacl -m g:video:rwx proof
~~~
Now I create a file and a dir inside the dir proof:
~~~
mkdir try1
drwxrws---+ 2 myuser video 4,0K feb 23 01:26 try1
touch file1
-rw-rw----+ 1 myuser video    0 feb 23 01:29 file1
~~~
As you can see, I obtain what I want: all files in the dir inherit the permissions and have the group "video" as group owner.
This is possible on Linux (POSIX ACLs on ext4, btrfs, etc.) and on Solaris (UFS).
Now the question: how do I do this with ZFS, which uses NFSv4 ACLs on Solaris?
I have tried this by making another dir "proof" on a ZFS Solaris 11 host (of course chmod g+s was applied):
~~~
chmod A=owner@:read_attributes/read_data/execute/list_directory/read_data/write_data/append_data/execute/add_file/add_subdirectory:fd:allow,group:video:read_attributes/read_data/execute/list_directory/read_data/write_data/append_data/execute/add_file/add_subdirectory:fd:allow,everyone@:read_attributes/read_data/execute/list_directory/read_data/write_data/append_data/execute/add_file/add_subdirectory:fd:deny proof
~~~
but the result is:
~~~
mkdir newdir
drwxr-sr-x+ 2 myuser video 2 23 feb 02.33 newdir
~~~
:|
How can I obtain the same result as with POSIX ACLs? Thanks
elbarna
(13690 rep)
Feb 23, 2023, 12:35 AM
• Last activity: May 20, 2025, 08:06 AM
2
votes
1
answers
468
views
Zfs file system with two roots that can be selected at boot
I have Ubuntu 20.10 on zfs root pool (upgraded from 20.04).
I tried to restore 20.04 using zsys features at boot, which was unsuccessful.
I have a backup (created using syncoid). Using zfs send/receive, I created a new pool, called spool, on a separate disk. It contains 20.04 restored to the state before the upgrade.
I'd like to be able to choose between the two pools at boot time. How do I set that up with grub?
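One hedged way to get the second pool into the menu is a custom entry in /etc/grub.d/40_custom that mirrors the ZFS entries GRUB already generates (the dataset and file names below are invented, so they would need to match what actually exists in spool), followed by running update-grub:
~~~
menuentry 'Ubuntu 20.04 (spool)' {
        insmod part_gpt
        insmod zfs
        # Locate the restored pool by its label (the pool name):
        search --no-floppy --label --set=root spool
        # Dataset and kernel file names are placeholders:
        linux   /ROOT/ubuntu_2004@/boot/vmlinuz root=ZFS=spool/ROOT/ubuntu_2004
        initrd  /ROOT/ubuntu_2004@/boot/initrd.img
}
~~~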
user63726
(121 rep)
Oct 24, 2020, 11:21 PM
• Last activity: May 18, 2025, 04:06 PM
7
votes
7
answers
15708
views
/etc/crypttab not updating in initramfs
I have a new installation of ubuntu 22.04, with full disk encryption (LUKS) and ZFS picked from the ubuntu installer options.
I need to make some edits to /etc/crypttab so that unlocking my drives works in an automatic way (fancy USB auto-unlock), but the edits I'm making to /etc/crypttab aren't persisting into the initramfs.
What I'm doing is:
- Editing /etc/crypttab
- Running update-initramfs -u
- Rebooting my machine into the system that asks for the LUKS password (initramfs)
- Checking the contents of /etc/, but crypttab isn't there.
Have I got my concept of how this works incorrect? I need to persist some version of crypttab to the loader but it isn't working.
Any pointers to what I'm doing wrong?
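A hedged way to tell whether the problem is in the rebuild or in the boot path, without rebooting each time, is to list what actually landed in the generated image (lsinitramfs ships with initramfs-tools on Ubuntu):
~~~
# Rebuild the initramfs for every installed kernel, not just the running one:
sudo update-initramfs -u -k all

# Then inspect the freshly built image for crypttab/cryptroot content:
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i crypt
~~~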
Bob Arezina
(171 rep)
Jul 3, 2022, 10:16 AM
• Last activity: May 18, 2025, 04:06 PM
2
votes
0
answers
66
views
Automatic cloning and iscsi sharing of zfs dataset on connect
Since iSCSI was not designed to support multiple concurrent clients accessing the same dataset, is it technically possible, without modifying code or recompiling (but perhaps with scripts and systemd sockets), to snapshot and clone a dataset for each client, present the clone to the requesting client, and destroy it when the client disconnects?
The data would be lost at disconnect unless the main version of the dataset was mounted, which would be okay as it is intended to be used read-only.
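The ZFS half of that idea is only a few commands, so the per-client wrapper that a systemd socket/service pair would call could look roughly like this hedged sketch; the dataset and client names are invented, and the iSCSI export/unexport step depends entirely on the target software in use:
~~~
#!/bin/sh
# Per-client clone lifecycle (illustrative names only)
DS=tank/exported            # the read-only "master" dataset
CLIENT=$1                   # e.g. an initiator name passed in by the caller

# On connect: snapshot the master and clone it for this client
zfs snapshot "${DS}@${CLIENT}"
zfs clone "${DS}@${CLIENT}" "tank/clients/${CLIENT}"
# ...export tank/clients/${CLIENT} through the iSCSI target here...

# On disconnect: tear down the clone and its origin snapshot
# zfs destroy "tank/clients/${CLIENT}"
# zfs destroy "${DS}@${CLIENT}"
~~~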
Zulgrib
(1034 rep)
Oct 18, 2022, 12:09 AM
• Last activity: May 15, 2025, 03:07 PM