Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
52
views
How can I see how much space was freed by trim on an SSD?
In my current setup, I have three different filesystems on two different SSDs: a FAT partition and a BTRFS partition on one drive, and ext4 on a second drive. When running `fstrim`, [the output is apparently](https://www.reddit.com/r/linuxquestions/comments/vaahg7/comment/ic1es8n/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) [not very usable](https://superuser.com/a/1251947/277646): each of those filesystems reports a fairly meaningless value for the amount that got trimmed.
Since truly [free space on an SSD contributes to its performance](https://cdn.mos.cms.futurecdn.net/3XW98AqWgfM956j5FGcodL.png), at least for QLC NAND modules that use an SLC cache, I wanted to see if I could determine the impact of running `fstrim`.
I know that utilities like `df` and `duf`, as well as `lsblk`, provide usage information based on the filesystem, but are there any utilities that can show which drive sectors are in use versus free?
If my understanding of how TRIM on an SSD works is correct, then the filesystem will show reduced usage immediately upon deleting a file, but those sectors are still considered in use by the SSD controller. After TRIM, those sectors would be freed. I'm hoping for a way to see the extent of that.
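Since `fstrim -v` prints per-filesystem byte counts (however rough they may be on a first run), one crude way to quantify a run is to sum those counts across runs. A minimal sketch, assuming util-linux's output format; `trimmed_bytes` is a hypothetical helper name:

```shell
# Hypothetical helper: pull the raw byte count out of a line like
#   /: 5.3 GiB (5699264512 bytes) trimmed
# as printed by util-linux's fstrim -v.
trimmed_bytes() {
  awk '{ gsub(/\(/, "", $4); print $4 }'
}

# Real usage (requires root): sudo fstrim -v / | trimmed_bytes
echo '/: 5.3 GiB (5699264512 bytes) trimmed' | trimmed_bytes
```

Note the caveat from the linked discussions still applies: these numbers reflect what the filesystem chose to discard, not the controller's internal view of which NAND blocks are actually free.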
Hari
(130 rep)
Jul 27, 2025, 05:24 AM
• Last activity: Jul 29, 2025, 09:28 AM
-1
votes
1
answers
61
views
Does periodic trim work on SSD connected via USB 2.0?
It looks like my OS is set to periodically trim portable SSD drives, and TRIM is supported by my portable SSD. I connect this SSD via its USB-C cable, attached to the USB-C-to-USB-3 adapter that came in the box with it, then connected to the USB 2.0 port on my laptop. Is TRIM actually executed? I have read that TRIM commands are not supported over USB 2.0; for example, https://wiki.gentoo.org/wiki/Discard_over_USB (section "Prerequisites") states that USB 2.0 does not support TRIM. Also, https://en.m.wikipedia.org/wiki/USB_Attached_SCSI says that it depends on the hardware and sometimes on the USB hub. Does that mean I have to investigate my motherboard specifically?
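Whether discards actually reach the drive can be checked from the kernel's view of the device rather than reasoned about from port specs. A sketch assuming mainline sysfs paths; `sdX` is a placeholder:

```shell
# Returns success if the kernel will issue discards to the device whose
# discard_max_bytes file is given; a value of 0 means it will not.
supports_discard() {
  local f=$1
  [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ]
}

# Real usage: supports_discard /sys/block/sdX/queue/discard_max_bytes
# Equivalently: lsblk --discard /dev/sdX  (nonzero DISC-GRAN/DISC-MAX)
```

Per the Gentoo wiki linked above, discards typically reach USB drives only via UAS, which in practice needs a USB 3 link; on a USB 2.0 port the bridge usually falls back to plain usb-storage and the kernel reports 0 here.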
user324831
(113 rep)
Jun 2, 2025, 12:17 PM
• Last activity: Jun 2, 2025, 06:36 PM
0
votes
1
answers
73
views
Are portable USB-connected SSDs automatically trimmed by Fedora?
After reading online I was under the impression that TRIM is not *automatically* (periodically) sent to my USB-connected SSD by Fedora, but some discussions on this forum make me doubt this. Does anyone have clarification? Thank you so much.
user324831
(113 rep)
Jun 1, 2025, 09:18 AM
• Last activity: Jun 1, 2025, 10:16 AM
1
votes
1
answers
49
views
How to copy a partition with trimming?
I want to copy a partition between SSD devices. Normally I could do it simply with the `dd` or `buffer` commands.
However, now that I have SSDs, I would also like to avoid the unneeded writes.
*I want all all-zero blocks handled as trimmed blocks.*
Is there a tool, ideally a command-line one, to do that?
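One approach (a sketch, not a recommendation of a known-good tool): discard everything on the target first with `blkdiscard`, then let `dd conv=sparse` seek over all-zero blocks so those regions stay trimmed. Device names below are examples; `copy_trimmed` is a hypothetical helper:

```shell
copy_trimmed() {
  local src=$1 dst=$2
  # Mark every block on the target as trimmed first (works on block
  # devices only; silently skipped for regular files).
  blkdiscard "$dst" 2>/dev/null || true
  # conv=sparse makes dd seek instead of writing blocks that are all
  # zero, leaving those regions of the target in their trimmed state.
  dd if="$src" of="$dst" bs=1M conv=sparse status=none
}

# Real usage (destructive to the target!):
#   copy_trimmed /dev/sdX1 /dev/sdY1
```

The up-front `blkdiscard` is what makes the sparse copy safe: any stale data on the target under the skipped zero blocks has already been discarded.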
peterh
(10448 rep)
Feb 24, 2025, 01:50 PM
• Last activity: Feb 24, 2025, 05:08 PM
4
votes
2
answers
1459
views
Does fstrim have the same effect as overwriting the disk space with zeros?
I create compressed backup images using `dd` and `lzop` on a regular basis.
In the past I've created a large file filled with zeros beforehand, using `head -c xxG /dev/zero > zeros.tmp && rm zeros.tmp` (xxG was the empty space available), to clean out deleted files.
This saved a lot of time and backup disk space, but it's also highly inefficient, since it writes over all the empty space and therefore causes unnecessary wear on SSDs.
Does `fstrim` also zero the blocks of deleted files? Or will `dd` return these blocks with their deleted content afterwards? I don't want to compress blocks full of deleted file content.
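Whether `dd` reads back zeros from trimmed blocks is a property of the drive ("deterministic read zeros after TRIM", RZAT), not something `fstrim` itself guarantees. On SATA drives it can be checked with `hdparm -I`; the sketch below only parses a sample excerpt of that output (the real invocation would be `sudo hdparm -I /dev/sdX`):

```shell
# Sample identify-data lines as hdparm -I prints them for a drive
# that supports TRIM with RZAT:
sample='   *    Data Set Management TRIM supported (limit 8 blocks)
   *    Deterministic read ZEROs after TRIM'

if printf '%s\n' "$sample" | grep -q 'Deterministic read ZEROs'; then
  echo 'reads after TRIM should return zeros'
else
  echo 'trimmed block contents are undefined'
fi
```

Without RZAT, a drive is allowed to return stale data from trimmed LBAs, so the zero-fill trick remains the only portable guarantee.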
lmoly
(427 rep)
Feb 3, 2020, 10:16 AM
• Last activity: Dec 10, 2024, 04:57 PM
1
votes
0
answers
52
views
Raspberry Pi CM4: is it possible that fstrim deleted/corrupted needed files?
I have a problem. On my device (Raspberry Pi CM4) my program uses the /var/opt directory (files created/downloaded manually). After a hard reset my program can't see a couple of files. I looked in the log and see these rows:
Aug 22 12:23:26 systemd: Starting Discard unused blocks on filesystems from /etc/fstab...
Aug 22 12:23:26 fstrim: /: 1.7 GiB (1784496128 bytes) trimmed on /dev/mmcblk0p2
Aug 22 12:23:26 fstrim: /boot: 204.5 MiB (214468608 bytes) trimmed on /dev/mmcblk0p1
Aug 22 12:23:26 systemd: fstrim.service: Succeeded.
Aug 22 12:23:26 systemd: Finished Discard unused blocks on filesystems from /etc/fstab.
After these rows, during that boot, my program was broken (a needed file was not found).
Is it possible that fstrim deleted my files? I don't see any other cause in the logs.
I had seen these rows earlier (on another date):
Aug 19 00:06:01 systemd: Starting Discard unused blocks on filesystems from /etc/fstab...
Aug 19 00:06:01 fstrim: /: 389.6 MiB (408547328 bytes) trimmed on /dev/mmcblk0p2
Aug 19 00:06:01 fstrim: /boot: 204.5 MiB (214468608 bytes) trimmed on /dev/mmcblk0p1
Aug 19 00:06:01 systemd: fstrim.service: Succeeded.
Aug 19 00:06:01 systemd: Finished Discard unused blocks on filesystems from /etc/fstab.
But after those, everything was OK.
Aug 19: 389 MB; Aug 22: 1.7 GB o_O
I used mmc-utils to check the eMMC health:
=============================================
Extended CSD rev 1.8 (MMC 5.1)
=============================================
Card Supported Command sets [S_CMD_SET: 0x01]
HPI Features [HPI_FEATURE: 0x01]: implementation based on CMD13
Background operations support [BKOPS_SUPPORT: 0x01]
Max Packet Read Cmd [MAX_PACKED_READS: 0x3f]
Max Packet Write Cmd [MAX_PACKED_WRITES: 0x3f]
Data TAG support [DATA_TAG_SUPPORT: 0x01]
Data TAG Unit Size [TAG_UNIT_SIZE: 0x02]
Tag Resources Size [TAG_RES_SIZE: 0x00]
Context Management Capabilities [CONTEXT_CAPABILITIES: 0x05]
Large Unit Size [LARGE_UNIT_SIZE_M1: 0x07]
Extended partition attribute support [EXT_SUPPORT: 0x03]
Generic CMD6 Timer [GENERIC_CMD6_TIME: 0x0a]
Power off notification [POWER_OFF_LONG_TIME: 0x3c]
Cache Size [CACHE_SIZE] is 65536 KiB
Background operations status [BKOPS_STATUS: 0x00]
1st Initialisation Time after programmed sector [INI_TIMEOUT_AP: 0x1e]
Power class for 52MHz, DDR at 3.6V [PWR_CL_DDR_52_360: 0x00]
Power class for 52MHz, DDR at 1.95V [PWR_CL_DDR_52_195: 0x00]
Power class for 200MHz at 3.6V [PWR_CL_200_360: 0x00]
Power class for 200MHz, at 1.95V [PWR_CL_200_195: 0x00]
Minimum Performance for 8bit at 52MHz in DDR mode:
[MIN_PERF_DDR_W_8_52: 0x00]
[MIN_PERF_DDR_R_8_52: 0x00]
TRIM Multiplier [TRIM_MULT: 0x02]
Secure Feature support [SEC_FEATURE_SUPPORT: 0x55]
Boot Information [BOOT_INFO: 0x07]
Device supports alternative boot method
Device supports dual data rate during boot
Device supports high speed timing during boot
Boot partition size [BOOT_SIZE_MULTI: 0x20]
Access size [ACC_SIZE: 0x07]
High-capacity erase unit size [HC_ERASE_GRP_SIZE: 0x01]
i.e. 512 KiB
High-capacity erase timeout [ERASE_TIMEOUT_MULT: 0x01]
Reliable write sector count [REL_WR_SEC_C: 0x01]
High-capacity W protect group size [HC_WP_GRP_SIZE: 0x10]
i.e. 8192 KiB
Sleep current (VCC) [S_C_VCC: 0x07]
Sleep current (VCCQ) [S_C_VCCQ: 0x07]
Sleep/awake timeout [S_A_TIMEOUT: 0x11]
Sector Count [SEC_COUNT: 0x00e90000]
Device is block-addressed
Minimum Write Performance for 8bit:
[MIN_PERF_W_8_52: 0x00]
[MIN_PERF_R_8_52: 0x00]
[MIN_PERF_W_8_26_4_52: 0x00]
[MIN_PERF_R_8_26_4_52: 0x00]
Minimum Write Performance for 4bit:
[MIN_PERF_W_4_26: 0x00]
[MIN_PERF_R_4_26: 0x00]
Power classes registers:
[PWR_CL_26_360: 0x00]
[PWR_CL_52_360: 0x00]
[PWR_CL_26_195: 0x00]
[PWR_CL_52_195: 0x00]
Partition switching timing [PARTITION_SWITCH_TIME: 0x02]
Out-of-interrupt busy timing [OUT_OF_INTERRUPT_TIME: 0x0a]
I/O Driver Strength [DRIVER_STRENGTH: 0x1f]
Card Type [CARD_TYPE: 0x57]
HS200 Single Data Rate eMMC @200MHz 1.8VI/O
HS Dual Data Rate eMMC @52MHz 1.8V or 3VI/O
HS eMMC @52MHz - at rated device voltage(s)
HS eMMC @26MHz - at rated device voltage(s)
CSD structure version [CSD_STRUCTURE: 0x02]
Command set [CMD_SET: 0x00]
Command set revision [CMD_SET_REV: 0x00]
Power class [POWER_CLASS: 0x00]
High-speed interface timing [HS_TIMING: 0x01]
Erased memory content [ERASED_MEM_CONT: 0x00]
Boot configuration bytes [PARTITION_CONFIG: 0x00]
Not boot enable
No access to boot partition
Boot config protection [BOOT_CONFIG_PROT: 0x00]
Boot bus Conditions [BOOT_BUS_CONDITIONS: 0x00]
High-density erase group definition [ERASE_GROUP_DEF: 0x01]
Boot write protection status registers [BOOT_WP_STATUS]: 0x00
Boot Area Write protection [BOOT_WP]: 0x00
Power ro locking: possible
Permanent ro locking: possible
ro lock status: not locked
User area write protection register [USER_WP]: 0x00
FW configuration [FW_CONFIG]: 0x00
RPMB Size [RPMB_SIZE_MULT]: 0x04
Write reliability setting register [WR_REL_SET]: 0x1f
user area: the device protects existing data if a power failure occurs during a write operation
partition 1: the device protects existing data if a power failure occurs during a write operation
partition 2: the device protects existing data if a power failure occurs during a write operation
partition 3: the device protects existing data if a power failure occurs during a write operation
partition 4: the device protects existing data if a power failure occurs during a write operation
Write reliability parameter register [WR_REL_PARAM]: 0x14
Device supports the enhanced def. of reliable write
Enable background operations handshake [BKOPS_EN]: 0x00
H/W reset function [RST_N_FUNCTION]: 0x00
HPI management [HPI_MGMT]: 0x01
Partitioning Support [PARTITIONING_SUPPORT]: 0x07
Device support partitioning feature
Device can have enhanced tech.
Max Enhanced Area Size [MAX_ENH_SIZE_MULT]: 0x0001d2
i.e. 3817472 KiB
Partitions attribute [PARTITIONS_ATTRIBUTE]: 0x00
Partitioning Setting [PARTITION_SETTING_COMPLETED]: 0x00
Device partition setting NOT complete
General Purpose Partition Size
Enhanced User Data Area Size [ENH_SIZE_MULT]: 0x000000
i.e. 0 KiB
Enhanced User Data Start Address [ENH_START_ADDR]: 0x00000000
i.e. 0 bytes offset
Bad Block Management mode [SEC_BAD_BLK_MGMNT]: 0x00
Periodic Wake-up [PERIODIC_WAKEUP]: 0x00
Program CID/CSD in DDR mode support [PROGRAM_CID_CSD_DDR_SUPPORT]: 0x01
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x05
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x01
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0xc8
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0xc8
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x00
Vendor Specific Fields [VENDOR_SPECIFIC_FIELD]: 0x0f
Native sector size [NATIVE_SECTOR_SIZE]: 0x00
Sector size emulation [USE_NATIVE_SECTOR]: 0x00
Sector size [DATA_SECTOR_SIZE]: 0x00
1st initialization after disabling sector size emulation [INI_TIMEOUT_EMU]: 0x00
Class 6 commands control [CLASS_6_CTRL]: 0x00
Number of addressed group to be Released[DYNCAP_NEEDED]: 0x00
Exception events control [EXCEPTION_EVENTS_CTRL]: 0x0000
Exception events status[EXCEPTION_EVENTS_STATUS]: 0x0000
Extended Partitions Attribute [EXT_PARTITIONS_ATTRIBUTE]: 0x0000
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Context configuration [CONTEXT_CONF]: 0x00
Packed command status [PACKED_COMMAND_STATUS]: 0x00
Packed command failure index [PACKED_FAILURE_INDEX]: 0x00
Power Off Notification [POWER_OFF_NOTIFICATION]: 0x01
Control to turn the Cache ON/OFF [CACHE_CTRL]: 0x01
eMMC Firmware Version:
eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x01
eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x01
eMMC Pre EOL information [EXT_CSD_PRE_EOL_INFO]: 0x01
Command Queue Support [CMDQ_SUPPORT]: 0x01
Command Queue Depth [CMDQ_DEPTH]: 16
Command Enabled [CMDQ_MODE_EN]: 0x00
If it was not fstrim, how were the files corrupted, and what do I need to check?
Sorry for my imperfect English.
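For context: `fstrim` asks the filesystem which blocks are unallocated and discards only those, so by design it should not be able to remove live files; the different sizes between runs just reflect how much space was freed since the previous trim. Given the mmc-utils output above, one thing worth watching instead is the eMMC's own wear counters. A sketch (requires root and mmc-utils; the device name is assumed):

```shell
# Life-time estimates: 0x01 means 0-10% of rated erase cycles used;
# PRE_EOL_INFO 0x01 means "normal".
check_emmc_health() {
  mmc extcsd read /dev/mmcblk0 | grep -E 'LIFE_TIME_EST|PRE_EOL'
}
# check_emmc_health
```

A hard reset during a write is a more likely culprit for missing files than the trim itself, so fsck of the partition and the journal around the reset are the other places to look.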
nx4n
(111 rep)
Sep 13, 2024, 04:27 AM
• Last activity: Sep 13, 2024, 02:27 PM
1
votes
1
answers
535
views
fstrim doesn't appear to have any timers, how do I make sure it works?
"The util-linux package provides fstrim.service and fstrim.timer systemd unit files. Enabling the timer will activate the service weekly. The service executes fstrim(8) on all mounted filesystems on devices that support the discard operation."
From: https://wiki.archlinux.org/title/Solid_state_drive
I wanted to make sure it runs weekly; I've read it should run neither too often nor too rarely.
However, it doesn't appear to have any timers:
# systemctl list-timers |grep fstrim
# cat /etc/systemd/system/fstrim.timer
cat: /etc/systemd/system/fstrim.timer: No such file or directory
The fstrim.service file itself is:
# systemctl cat fstrim.service
# /usr/lib/systemd/system/fstrim.service
[Unit]
Description=Discard unused blocks on filesystems from /etc/fstab
Documentation=man:fstrim(8)
ConditionVirtualization=!container
[Service]
Type=oneshot
ExecStart=/usr/bin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo --verbose --quiet-unsupported
PrivateDevices=no
PrivateNetwork=yes
PrivateUsers=no
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
MemoryDenyWriteExecute=yes
SystemCallFilter=@default @file-system @basic-io @system-service
Where is the weekly basis specified? How do I make sure it runs weekly, or at all?
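For context: the weekly cadence does not live in fstrim.service at all. util-linux ships a companion fstrim.timer, normally under /usr/lib/systemd/system rather than /etc (which is why the `cat` of /etc/systemd/system/fstrim.timer above failed). Its contents look roughly like this (details vary by util-linux version):

```ini
[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

Only after `systemctl enable --now fstrim.timer` does it show up in `systemctl list-timers`; the service unit alone never fires on a schedule.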
AlphaCentauri
(802 rep)
Dec 1, 2023, 08:22 PM
• Last activity: Dec 2, 2023, 04:08 AM
5
votes
1
answers
1308
views
How to perform fstrim on a loop device?
I have LVM with thin provisioning enabled. I have two almost identical thin logical volumes with ext4 file systems that differ slightly. The first volume is used in its entirety for storing its file system. On the second volume, the file system is stored with a small offset; it is mounted with the `-o offset=1048576` option.
The first volume can be cleaned with the `fstrim` command, but the second one cannot. It gives this error instead:
fstrim: second: the discard operation is not supported
That is because of the offset mounting, I believe. When given the offset option, the mount command creates a temporary loop device and mounts that, so the main suspect is the loop device.
Is it somehow possible to trim a filesystem that sits at an offset from the volume's start block address?
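One thing worth checking (a sketch, not a confirmed fix): set the loop device up explicitly with `losetup` instead of letting mount create it, and see whether the resulting device advertises discard at all. Device, volume, and mount-point names below are examples:

```shell
trim_offset_volume() {
  local backing=$1 offset=$2 mnt=$3
  local loopdev
  # Attach the offset region explicitly; --show prints the loop name.
  loopdev=$(losetup --find --show --offset "$offset" "$backing") || return 1
  # 0 here means the kernel will refuse FITRIM on this loop device.
  cat /sys/block/"${loopdev#/dev/}"/queue/discard_max_bytes
  mount "$loopdev" "$mnt" && fstrim -v "$mnt"
}
# Example: trim_offset_volume /dev/mapper/vg-second 1048576 /mnt/second
```

Whether loop devices pass discards through to a block-device backing depends on the kernel version, so the sysfs value is the authoritative answer on a given system.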
ayvango
(397 rep)
Oct 15, 2016, 11:54 PM
• Last activity: Jul 23, 2023, 09:40 AM
11
votes
3
answers
8625
views
Does VirtIO storage support discard (fstrim)?
$ uname -r
5.0.9-301.fc30.x86_64
$ findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vda3 ext4 rw,relatime,seclabel
$ sudo fstrim -v /
fstrim: /: the discard operation is not supported
Same VM, but after switching the disk from VirtIO to SATA:
$ findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda3 ext4 rw,relatime,seclabel
$ sudo fstrim -v /
/: 5.3 GiB (5699264512 bytes) trimmed
The virtual disk is backed by a QCOW2 file. I am using virt-manager / libvirt. libvirt-daemon is version 4.7.0-2.fc29.x86_64. My host is currently running a vanilla kernel build 5.1 (ish), so it's a little bit "customized" at the moment, but I built it starting from a stock Fedora kernel configuration.
Is there a way to enable discard support on VirtIO somehow? Or does the code just not support it yet? I don't necessarily require exact instructions on how to enable it, but I am surprised and curious and would like a solid answer :-).
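For reference: discard on virtio-blk arrived later than on the emulated SATA/SCSI paths. In general it needs QEMU 4.0 or newer on the host plus a guest kernel of 5.0 or newer; the guest here qualifies, but the libvirt 4.7-era stack likely predates it. Once the versions line up, it is enabled per disk in the domain XML; a sketch with example paths:

```xml
<disk type='file' device='disk'>
  <!-- discard='unmap' passes guest discards down to the qcow2 file -->
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

With `discard='unmap'` in place, `fstrim` in the guest can also shrink the backing qcow2 file on the host.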
sourcejedi
(53222 rep)
May 10, 2019, 12:37 PM
• Last activity: Jun 30, 2023, 08:47 PM
0
votes
0
answers
379
views
How do fstrim and BTRFS SSD optimizations work for both SSD's in a RAID1?
First, apologies if this has been asked before, but I could not find anything with any combination of keywords.
My question is: how do SSD optimisations work in BTRFS in a RAID1 where both devices are SSDs?
Also, fstrim does not seem to pick up /dev/sdb1; see the output at the end of this post.
I have /dev/sda1 and /dev/sdb1 (both partitions on SSDs) in a RAID1 configuration, but I see no message for sdb1 in the following dmesg output.
pi@testpi:~ $ dmesg | grep btrfs
[Thu Apr 20 00:35:06 2023] Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
[Thu Apr 20 00:35:06 2023] BTRFS: device label nasdisk_01 devid 1 transid 200 /dev/sdd1 scanned by systemd-udevd (209)
[Thu Apr 20 00:35:06 2023] BTRFS: device fsid 69f422a2-fea7-424c-886b-f291068dae9f devid 4 transid 73286 /dev/sdb1 scanned by systemd-udevd (205)
[Thu Apr 20 00:35:06 2023] BTRFS: device fsid 69f422a2-fea7-424c-886b-f291068dae9f devid 3 transid 73286 /dev/sda1 scanned by systemd-udevd (204)
[Thu Apr 20 00:35:06 2023] BTRFS info (device sdd1): using crc32c (crc32c-generic) checksum algorithm
[Thu Apr 20 00:35:06 2023] BTRFS info (device sdd1): setting nodatacow, compression disabled
[Thu Apr 20 00:35:06 2023] BTRFS info (device sdd1): disk space caching is enabled
[Thu Apr 20 00:35:06 2023] BTRFS info (device sda1): using crc32c (crc32c-generic) checksum algorithm
[Thu Apr 20 00:35:06 2023] BTRFS info (device sda1): disk space caching is enabled
[Thu Apr 20 00:35:07 2023] BTRFS info (device sdd1): enabling ssd optimizations
[Thu Apr 20 00:35:07 2023] BTRFS info (device sda1): enabling ssd optimizations
My fstab (partial - non-btrfs entries are removed here) is
PARTUUID=9c860f91-01 /mnt/raid1_01 btrfs defaults,noatime,nodiratime 0 2
PARTUUID=9c860f91-01 /mnt/media btrfs defaults,noatime,nodiratime,subvol=@media 0 2
PARTUUID=9c860f91-01 /mnt/docker-containers btrfs defaults,noatime,nodiratime,subvol=@docker-containers 0 2
PARTUUID=9c860f91-01 /mnt/shared-samba btrfs defaults,noatime,nodiratime,subvol=@shared-samba 0 2
PARTUUID=9c860f91-01 /mnt/shared-onedrive btrfs defaults,noatime,nodiratime,subvol=@shared-onedrive 0 2
PARTUUID=9c860f91-01 /mnt/docker-containers-databases btrfs defaults,noatime,nodiratime,subvol=@docker-containers-databases 0 2
PARTUUID=9c860f91-01 /mnt/work btrfs defaults,noatime,nodiratime,subvol=@work 0 2
Output of mount command is as following
pi@testpi:~ $ mount | grep btrfs
/dev/sdd1 on /mnt/nasdisk_01 type btrfs (rw,noatime,nodiratime,nodatasum,nodatacow,ssd,space_cache,subvolid=5,subvol=/)
/dev/sda1 on /mnt/docker-containers-databases type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=394,subvol=/@docker-containers-databases)
/dev/sda1 on /mnt/shared-samba type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=363,subvol=/@shared-samba)
/dev/sda1 on /mnt/shared-onedrive type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=391,subvol=/@shared-onedrive)
/dev/sda1 on /mnt/work type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=397,subvol=/@work)
/dev/sda1 on /mnt/docker-containers type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=362,subvol=/@docker-containers)
/dev/sda1 on /mnt/media type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=398,subvol=/@media)
/dev/sda1 on /mnt/raid1_01 type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=5,subvol=/)
Output of fstrim, which shows that /dev/sdb1 is ignored:
pi@testpi:~ $ sudo fstrim -vA
/mnt/nasdisk_01: 2.2 GiB (2363473920 bytes) trimmed on /dev/sdd1
/mnt/raid1_01: 5 GiB (5337776128 bytes) trimmed on /dev/sda1
/: 2.9 GiB (3139751936 bytes) trimmed on /dev/sdc2
/boot: 201.8 MiB (211645952 bytes) trimmed on /dev/sdc1
Vijay Gill
(101 rep)
Apr 20, 2023, 02:24 PM
2
votes
1
answers
1330
views
Disable allow-discards on encrypted partition
I have the following partition table:
NAME
nvme0n1
├─nvme0n1p1 part /boot
└─nvme0n1p2 part
└─crypt crypt
├─crypt-swap lvm [SWAP]
├─crypt-root lvm /
└─crypt-home lvm /home
As the drive is an SSD, I would like to issue the [TRIM](https://en.wikipedia.org/wiki/Trim_(computing)) command in order to improve the performance/lifetime of the disk itself.
In particular, I would like to enable periodic TRIM.
Because the second partition (i.e., `nvme0n1p2`) is encrypted, TRIM is inhibited by default because of its security implications (https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)) .
However, it is possible to enable TRIM on an encrypted partition by configuring the `encrypt` hook at opening time.
As my partition is opened at boot, I modified the kernel parameters (adding `allow-discards`):
cryptdevice=/dev/sdaX:root:allow-discards
(*Note that the partition naming and volume name are not relevant in the above snippet.*)
By doing that, I was indeed successfully able to run the TRIM command on the disk:
# cryptsetup luksDump /dev/nvme0n1p2 | grep Flags
Flags: allow-discards
And:
# fstrim ...
/home: [..] trimmed on ...
/: [..] trimmed on
So far, so good.
---
The problem arose when I tried to restore the original state.
I removed the kernel parameter `allow-discards`, but `Flags` on the partition still shows `allow-discards`, and the `fstrim` command still completes its job successfully.
* How is that possible?
* How do I restore the denial of discards on the encrypted partition?
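A likely explanation (an assumption, not verifiable from the question alone): on LUKS2, activation flags can be stored persistently in the header, for example when something activated the volume with `--persistent`, and such stored flags survive removal of the kernel parameter. The documented way to rewrite them with a recent cryptsetup (one that has the `refresh` action) is to re-activate with the desired flag set plus `--persistent`; the volume and device names below are the question's:

```shell
# Re-activate the mapping without allow-discards and store the current
# activation flags persistently, overwriting the stored allow-discards.
# Prompts for the passphrase; requires root.
clear_persistent_discards() {
  cryptsetup --persistent refresh root &&
  cryptsetup luksDump /dev/nvme0n1p2 | grep '^Flags:'
}
# clear_persistent_discards   # allow-discards should no longer be listed
```

This would also explain why fstrim kept working: dm-crypt honored the header flag regardless of the boot command line.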
BiagioF
(161 rep)
Nov 29, 2022, 10:44 PM
• Last activity: Apr 12, 2023, 06:37 AM
1
votes
1
answers
1917
views
fstrim: /: FITRIM ioctl failed: Structure needs cleaning
On my Debian sid with a custom 5.17.0-rc1 kernel, installed on my new (< 1 month old) NVMe SSD (WD SN850), my root partition is formatted as f2fs (v1.14). I get:
fstrim: /: FITRIM ioctl failed: Structure needs cleaning
fsck finds no problems and the OS is running fine:
sudo fsck.f2fs /dev/nvme0n1p2
Info: Segments per section = 1
Info: Sections per zone = 1
Info: sector size = 512
Info: total sectors = 976566287 (476839 MB)
Info: MKFS version
"Linux version 5.5.0-rc6 (u1@jeanordi) (gcc version 9.2.1 20191130 (Debian 9.2.1-21)) #1 SMP PREEMPT Thu Jan 16 00:24:17 CET 2020"
Info: FSCK version
from "Linux version 5.5.0-rc6 (u1@jeanordi) (gcc version 9.2.1 20191130 (Debian 9.2.1-21)) #1 SMP PREEMPT Thu Jan 16 00:24:17 CET 2020"
to "Linux version 5.5.0-rc6 (u1@jeanordi) (gcc version 9.2.1 20191130 (Debian 9.2.1-21)) #1 SMP PREEMPT Thu Jan 16 00:24:17 CET 2020"
Info: superblock features = 0 :
Info: superblock encrypt level = 0, salt = 00000000000000000000000000000000
Info: total FS sectors = 976566280 (476839 MB)
Info: CKPT version = 1d9fb4bd
Info: checkpoint state = 55 : crc fsck compacted_summary unmount
[FSCK] Unreachable nat entries [Ok..] [0x0]
[FSCK] SIT valid block bitmap checking [Ok..]
[FSCK] Hard link checking for regular file [Ok..] [0xb]
[FSCK] valid_block_count matching with CP [Ok..] [0x44993b]
[FSCK] valid_node_count matching with CP (de lookup) [Ok..] [0x16c2c]
[FSCK] valid_node_count matching with CP (nat lookup) [Ok..] [0x16c2c]
[FSCK] valid_inode_count matched with CP [Ok..] [0x15d7c]
[FSCK] free segment_count matched with CP [Ok..] [0x374d7]
[FSCK] next block offset is free [Ok..]
[FSCK] fixing SIT types
[FSCK] other corrupted bugs [Ok..]
Done: 2.975766 secs
Jean Molinier
(63 rep)
Feb 4, 2022, 05:41 AM
• Last activity: Feb 5, 2022, 03:24 AM
1
votes
1
answers
2711
views
How to disable TRIM on SSD(s) under Linux Mint?
I rarely ever write anything (large) to the SSDs on many machines of my own; one example is a laptop used only as a TV *viewer*, another is my mother's laptop, which she uses just for banking. (If it matters, I use Linux Mint on all machines.)
The weekly TRIM is therefore an annoyance, as it takes a rather long time. But how do I disable it?
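On a systemd-based Mint install, the weekly run comes from the fstrim.timer unit, so stopping periodic TRIM is just a matter of disabling that timer. A minimal sketch (run as root):

```shell
disable_periodic_trim() {
  systemctl disable --now fstrim.timer
}
# disable_periodic_trim
# Verify afterwards: systemctl is-enabled fstrim.timer
```

This leaves fstrim.service itself in place, so a manual `fstrim -av` still works whenever it is wanted.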
Vlastimil Burián
(30505 rep)
Jan 10, 2022, 05:21 AM
1
votes
0
answers
477
views
LVM SSD cachepool how to trim (Debian 10)
I was playing with LVM: I was setting up a volume with ext4 on an HDD, and I want to use an SSD as a cache.
What I did first was:
lvcreate --type cache-pool -l 100%FREE -n datacache SSD /dev/sda3
lvcreate --type cache -l 100%FREE -n data --cachepool datacache SSD --cachemode writeback /dev/sdb1
So, respectively, I created a cache-pool volume called datacache and then a volume called data attached to datacache.
I think (the problem is I don't remember) that fstrim worked perfectly in this setup.
Then I had to do some operations on the disk and had to detach my cache, so I uncached my disk:
lvconvert --uncache /dev/mapper/SSD-data
Then I moved the disk (and other things), and in the end I reattached my disk to a cache-pool:
lvcreate --type cache-pool -l 100%FREE -n datacache SSD /dev/sda3
lvconvert --type cache --cachemode writeback --cachepool datacache SSD/data
The problem is that now, when I run fstrim, I obtain:
root@me:~# fstrim -a
fstrim: /mnt/data: FITRIM ioctl failed: Argument not valid
I am not completely sure that it worked before detaching and reattaching the cache, but I suppose it did, because I tested fstrim and don't remember any errors.
I am using Debian Buster.
root@me:~# uname -a
Linux me 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
Thank you in advance. If I missed any important information, I am really sorry; tell me what you need and I'll post it. :D
Ruggero
(11 rep)
Mar 18, 2021, 11:27 AM
1
votes
1
answers
391
views
fstrim.service blocks trim of /home in Ubuntu/Mint v20, but not in v18
In troubleshooting why my /home partition (ext4, luks-encrypted) wasn't being trimmed by the weekly **fstrim** service, I discovered that Ubuntu made a major change in the service file: **On Ubuntu/Mint 18** [Unit] Description=Discard unused blocks ConditionVirtualization=!container [Service] Type=o...
In troubleshooting why my /home partition (ext4, luks-encrypted) wasn't being trimmed by the weekly **fstrim** service, I discovered that Ubuntu made a major change in the service file:
**On Ubuntu/Mint 18**
[Unit]
Description=Discard unused blocks
ConditionVirtualization=!container
[Service]
Type=oneshot
ExecStart=/sbin/fstrim -av
**On Ubuntu/Mint 20**
[Unit]
Description=Discard unused blocks on filesystems from /etc/fstab
Documentation=man:fstrim(8)
ConditionVirtualization=!container
[Service]
Type=oneshot
ExecStart=/sbin/fstrim --fstab --verbose --quiet
ProtectSystem=strict
ProtectHome=yes
PrivateDevices=no
PrivateNetwork=yes
PrivateUsers=no
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
MemoryDenyWriteExecute=yes
SystemCallFilter=@default @file-system @basic-io @system-service
Can someone please explain this change? I can, of course, manually change the ProtectHome=yes setting to ProtectHome=no. But why was this introduced in the latest Ubuntu? Is there a problem with trimming /home?
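For reference, the usual way to persist such a change without editing the shipped unit file is a systemd drop-in override (a sketch; the setting name is taken from the unit shown above, and the override path is the standard systemd location):

```shell
# Create a drop-in so the change survives package upgrades:
sudo mkdir -p /etc/systemd/system/fstrim.service.d
sudo tee /etc/systemd/system/fstrim.service.d/override.conf <<'EOF' >/dev/null
[Service]
ProtectHome=no
EOF

# Reload units so the override takes effect:
sudo systemctl daemon-reload
```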
ajgringo619
(3584 rep)
Jan 21, 2021, 05:08 PM
• Last activity: Jan 21, 2021, 05:29 PM
-1
votes
1
answers
4232
views
"systemctl start fstrim.timer" for ssd optimization not working on Debian 10
i just installed a 1TB kingston SSD, i cloned my HDD (with Debian 10) so nothing changed besides the performance, i want to set up TRIM for this SSD. So i've done this: ``` $ sudo hdparm -I /dev/sda | grep -i TRIM * Data Set Management TRIM supported (limit 8 blocks) $ sudo systemctl cat fstrim.serv...
I just installed a 1 TB Kingston SSD and cloned my HDD (with Debian 10) onto it, so nothing changed besides the performance. I want to set up TRIM for this SSD.
So I've done this:
$ sudo hdparm -I /dev/sda | grep -i TRIM
* Data Set Management TRIM supported (limit 8 blocks)
$ sudo systemctl cat fstrim.service
# /lib/systemd/system/fstrim.service
[Unit]
Description=Discard unused blocks on filesystems from /etc/fstab
Documentation=man:fstrim(8)
[Service]
Type=oneshot
ExecStart=/sbin/fstrim -Av
$ sudo systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: inactive (dead)
Trigger: n/a
Docs: man:fstrim
As you can see in the output of the third command, fstrim.timer is inactive, so to activate it I think I should do this:
$ sudo systemctl enable fstrim.service
which outputs:
The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Alias= settings in the [Install] section, and DefaultInstance= for template
units). This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
instance name specified.
and
$ sudo systemctl start fstrim.timer
which outputs:
Failed to start fstrim.timer: Unit -.mount is masked.
So I don't know what is happening there. I've already searched for how to fix this but can't find a proper answer. I hope someone can help me here; thank you in advance.
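One way to investigate the error, sketched under the assumption that the mount unit was masked locally (the `--` keeps systemctl from parsing the leading dash as an option):

```shell
# Show the state of the root mount unit:
systemctl status -- -.mount

# A symlink to /dev/null here would mean the unit was masked on this system:
ls -l /etc/systemd/system/-.mount

# If it was masked by mistake, unmask it and retry the timer:
# sudo systemctl unmask -- -.mount
# sudo systemctl start fstrim.timer
```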
Marco
(3 rep)
Oct 19, 2020, 08:55 PM
• Last activity: Oct 20, 2020, 05:24 PM
2
votes
0
answers
172
views
Does Linux fstrim.service conflict with Windows 10 optimization?
I have the following disk setup, a dual-boot with Windows 10 and Linux Mint 19.3: NAME LABEL SIZE FSTYPE MOUNTPOINT UUID sda 477G ├─sda1 100M vfat /boot/efi AE89-82D6 ├─sda2 16M ├─sda3 WINDOWS-C 476.3G ntfs A4DC8D83DC8D508A └─sda4 505M ntfs BE1AB9201AB8D69D sdb 1.8T ├─sdb1 1G ext2 /boot c360756e-933...
I have the following disk setup, a dual-boot with Windows 10 and Linux Mint 19.3:
NAME LABEL SIZE FSTYPE MOUNTPOINT UUID
sda 477G
├─sda1 100M vfat /boot/efi AE89-82D6
├─sda2 16M
├─sda3 WINDOWS-C 476.3G ntfs A4DC8D83DC8D508A
└─sda4 505M ntfs BE1AB9201AB8D69D
sdb 1.8T
├─sdb1 1G ext2 /boot c360756e-9335-476d-889c-e10026a48c77
├─sdb2 LINUX-HOME 500G ext4 /home 96ce746c-1934-4774-88df-c9c132604795
├─sdb3 LINUX-BACKUP 1.2T ext4 /media/Backup-Data 36e45e26-751e-45d8-85b3-e4956829b821
├─sdb4 80G btrfs /mnt/timeshift/backup a2d7fadb-ca2a-4ac6-b4cf-4d1ff0c3f693
└─sdb5 16G swap [SWAP] 93b4b3b4-d3b4-4af6-8a12-e34b26bbe391
The only filesystem that is mounted by both systems at bootup is the EFI partition.
After reading about the SSD optimization bug in Windows 10 v2004, I disabled it but ran it manually once (on 8/31/20). When I next rebooted into Linux, it was due for its weekly fstrim. I checked the logs and found these entries:
$ journalctl -u fstrim.service
-- Logs begin at Sun 2020-08-23 18:36:10 PDT, end at Tue 2020-09-01 16:28:51 PDT. --
Aug 24 08:23:19 dss-mint systemd: Starting Discard unused blocks...
Aug 24 08:27:29 dss-mint fstrim: /home: 406.1 GiB (436010364928 bytes) trimmed
Aug 24 08:27:29 dss-mint fstrim: /media/Backup-Data: 979.6 GiB (1051850047488 bytes) trimmed
Aug 24 08:27:29 dss-mint fstrim: /boot/efi: 63.9 MiB (66948096 bytes) trimmed
Aug 24 08:27:29 dss-mint fstrim: /boot: 838.4 MiB (879099904 bytes) trimmed
Aug 24 08:27:29 dss-mint fstrim: /: 72.1 GiB (77447188480 bytes) trimmed
Aug 24 08:27:29 dss-mint systemd: Started Discard unused blocks.
-- Reboot --
Aug 31 08:25:36 dss-mint systemd: Starting Discard unused blocks...
Aug 31 08:29:03 dss-mint systemd: fstrim.service: Main process exited, code=killed, status=15/TERM
Aug 31 08:29:03 dss-mint systemd: fstrim.service: Failed with result 'signal'.
Aug 31 08:29:03 dss-mint systemd: Stopped Discard unused blocks.
Not finding anything on these messages, I reran the service and everything reported OK. Should I be excluding the EFI partition from one or the other system? Is this the reason the run on the 31st did not complete? There are no disk errors (as reported by GNOME Disks' SMART tests).
UPDATE #1: I checked the output from the fstrim.timer unit, and it doesn't seem to detect that the fstrim.service unit ever finishes; the only shutdown is caused by a reboot (8/23/2020 is the date the system was installed):
$ journalctl --no-pager -u fstrim.timer
-- Logs begin at Sun 2020-08-23 18:36:10 PDT, end at Wed 2020-09-02 09:50:35 PDT. --
Aug 23 18:36:10 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 18:45:59 dss-mint systemd: Stopped Discard unused blocks once a week.
Aug 23 18:45:59 dss-mint systemd: Stopping Discard unused blocks once a week.
Aug 23 18:45:59 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 18:46:00 dss-mint systemd: Stopped Discard unused blocks once a week.
Aug 23 18:46:00 dss-mint systemd: Stopping Discard unused blocks once a week.
Aug 23 18:46:00 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 18:51:30 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 18:51:57 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 19:23:18 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 19:23:47 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 19:26:24 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 19:26:52 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 19:38:32 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 19:39:05 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 19:44:21 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 19:44:58 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 19:46:04 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 23 19:46:34 dss-mint systemd: Started Discard unused blocks once a week.
Aug 23 20:20:20 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 24 08:23:19 dss-mint systemd: Started Discard unused blocks once a week.
Aug 24 14:02:00 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 24 14:05:17 dss-mint systemd: Started Discard unused blocks once a week.
Aug 24 19:13:32 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 25 06:40:55 dss-mint systemd: Started Discard unused blocks once a week.
Aug 25 19:46:07 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 26 08:26:55 dss-mint systemd: Started Discard unused blocks once a week.
Aug 26 10:57:15 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 26 10:57:43 dss-mint systemd: Started Discard unused blocks once a week.
Aug 26 11:28:42 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 26 11:29:19 dss-mint systemd: Started Discard unused blocks once a week.
Aug 26 18:05:41 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 26 18:56:28 dss-mint systemd: Started Discard unused blocks once a week.
Aug 26 18:59:27 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 27 08:40:46 dss-mint systemd: Started Discard unused blocks once a week.
Aug 27 17:32:46 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 28 06:59:36 dss-mint systemd: Started Discard unused blocks once a week.
Aug 28 19:05:33 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 29 08:18:49 dss-mint systemd: Started Discard unused blocks once a week.
Aug 29 19:05:01 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 30 08:55:41 dss-mint systemd: Started Discard unused blocks once a week.
Aug 30 19:03:48 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 31 08:25:36 dss-mint systemd: Started Discard unused blocks once a week.
Aug 31 08:29:03 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Aug 31 08:29:35 dss-mint systemd: Started Discard unused blocks once a week.
Aug 31 19:24:53 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Sep 01 08:20:41 dss-mint systemd: Started Discard unused blocks once a week.
Sep 01 19:12:46 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Sep 02 07:11:27 dss-mint systemd: Started Discard unused blocks once a week.
Sep 02 09:33:51 dss-mint systemd: Stopped Discard unused blocks once a week.
-- Reboot --
Sep 02 09:34:20 dss-mint systemd: Started Discard unused blocks once a week.
ajgringo619
(3584 rep)
Sep 1, 2020, 11:33 PM
• Last activity: Sep 2, 2020, 04:58 PM
4
votes
1
answers
677
views
How does fstrim interact with dd
I'd like to understand the interaction between [fstrim][1] and a file system driver (such as ext4). More precisely I'd like to understand whether [dd][2] interfere with this. I've queried elsewhere about how the file system informs an SSD when a block is not used. This is important because it can he...
I'd like to understand the interaction between fstrim and a file system driver (such as ext4). More precisely, I'd like to understand whether dd interferes with this.
I've queried elsewhere about how the file system informs an SSD when a block is not used. This is important because it can help with wear levelling. This was my earlier question
When a partition is copied with dd or equivalent, every byte of every block gets copied regardless of whether it's used by the file system. Besides being slower on mostly empty file systems, this also tells the disk to store data in unused areas of the file system.
Will fstrim recover from this, or is it incremental, only discarding after a file is deleted?
(Meta question) Is it now recommended that users call fstrim or similar after using dd to copy a disk?
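For context, a trim pass can be run manually on the mounted copy after dd finishes; the device and mount point below are hypothetical placeholders (a sketch, not a recommendation for any specific setup):

```shell
# Mount the freshly copied filesystem, then trim it; -v reports how many
# bytes were submitted to the device as discard requests:
sudo mount /dev/sdb1 /mnt/target
sudo fstrim -v /mnt/target
sudo umount /mnt/target
```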
Philip Couling
(20391 rep)
Jul 8, 2020, 01:43 PM
• Last activity: Jul 8, 2020, 10:04 PM
3
votes
1
answers
9297
views
How to enable discards on encrypted root
I have installed system on *ext4 filesystem* on *lvm* (vg name `encrypted`, root is called `encrypted-root`) on *luks*. When I'm trying to run `fstrim /`, I get `fstrim: /: the discard operation is not supported`. My `/etc/crypttab` contains cryptroot UUID=5ddb7e3a-dcbe-442d-85e8-359e944d0717 none l...
I have installed a system on an *ext4 filesystem* on *LVM* (VG name encrypted; the root LV is called encrypted-root) on *LUKS*. When I try to run fstrim /, I get fstrim: /: the discard operation is not supported.
My /etc/crypttab contains:
cryptroot UUID=5ddb7e3a-dcbe-442d-85e8-359e944d0717 none luks,discard,lvm=encrypted
/etc/lvm/lvm.conf contains:
issue_discards = 1
/etc/initramfs-tools/conf.d/cryptroot contains only:
CRYPTROOT=target=encrypted-root,source=/dev/disk/by-uuid/5ddb7e3a-dcbe-442d-85e8-359e944d0717
(I used update-initramfs -k all -c to create the initramfs.)
/etc/default/grub contains:
GRUB_CMDLINE_LINUX="cryptops=target=encrypted-root,source=/dev/disk/by-uuid/5ddb7e3a-dcbe-442d-85e8-359e944d0717,lvm=encrypted"
I have tried manually putting rd.luks.options=discard as a kernel parameter in GRUB. I have also tried the refresh action of the cryptsetup utility (cryptsetup --allow-discards refresh *device*), but my version does not seem to have it (cryptsetup: Unknown action).
The physical device apparently has TRIM support: when I run fstrim /boot it works (it's the same device, just not encrypted).
The dmsetup table command does not show allow_discards for cryptroot.
When I boot from USB, manually decrypt (with the --allow-discards argument to cryptsetup), and mount the root partition, it works. I have tried to use the --persistent option, but it said that it couldn't make the flag persistent.
I'm lost. What should I do to make fstrim / work? Something tells me I should somehow modify the boot options in GRUB, but I'm not sure how. I'm also not sure if the line in /etc/crypttab is used at all (I changed the name to cryptroot after install, and it seems to do nothing even when I change it).
I'm running *Linux Mint 19.3*.
**Links:**
* I followed this guide to encrypt the system: [link](https://jschumacher.info/2016/11/encrypt-an-existing-linux-installation-with-luks-and-lvm/)
* [Arch-wiki section](https://wiki.archlinux.org/index.php/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)) about SSDs and dm-crypt
* possibly relevant: (https://unix.stackexchange.com/questions/341442/luks-discard-trim-conflicting-kernel-command-line-options) , (https://unix.stackexchange.com/questions/206754/difference-between-cryptopts-and-crypttab)
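Two checks that may help, sketched under the assumption that the mapping is named cryptroot and (for the second command) that the device has a LUKS2 header with cryptsetup >= 2.0, which is newer than the version that rejected the refresh action above:

```shell
# Confirm whether discards currently pass through the dm-crypt layer;
# look for "allow_discards" at the end of the crypt line:
sudo dmsetup table cryptroot

# On cryptsetup >= 2.0 with a LUKS2 header, the flag can be stored
# persistently in the header itself:
sudo cryptsetup --allow-discards --persistent refresh cryptroot
```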
user224348
(446 rep)
Mar 23, 2020, 11:43 AM
• Last activity: Jun 20, 2020, 05:18 PM
10
votes
3
answers
5731
views
How do I check if my ssd supports fstrim?
I'm working with a linux server and wanted to know if there is a way that I can find out that my SSD supports fstrim or not. I tried ```hdparm -I /dev/sda```, but it's not available and I can't install it. Is there any other way I can do it? Any help is appreciated. Thanks! I'm also curious to know...
I'm working with a Linux server and wanted to know whether there is a way to find out if my SSD supports fstrim. I tried hdparm -I /dev/sda, but it's not available and I can't install it. Is there any other way I can do it? Any help is appreciated. Thanks!
I'm also curious to know what happens if I run fstrim on a device that doesn't support TRIM. Does it result in a no-op?
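Without hdparm, the kernel's own view of the device is usually enough; a sketch assuming the drive is /dev/sda:

```shell
# lsblk can report discard capabilities; non-zero DISC-GRAN and
# DISC-MAX values indicate TRIM support:
lsblk --discard /dev/sda

# The same information straight from sysfs; 0 means no discard support:
cat /sys/block/sda/queue/discard_granularity
cat /sys/block/sda/queue/discard_max_bytes
```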
Yong zhu
(101 rep)
May 4, 2020, 10:34 PM
• Last activity: May 5, 2020, 04:28 AM