
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
2 answers
91 views
How to defragment compressed btrfs files?
If I defragment files on btrfs with the command
btrfs filesystem defrag --step 1G file
everything is fine: filefrag -v file clearly shows that the extent count decreased significantly. Things are very different with compressed files. First, filefrag reports a huge number of extents:
Filesystem type is: 9123683e
File size of file is 85942272 (20982 blocks of 4096 bytes)
 ext:   logical_offset:   physical_offset:  length:  expected:  flags:
   0:        0..     31:   607198..  607229:    32:             encoded
   1:       32..     63:   609302..  609333:    32:    607230:  encoded
   2:       64..     95:   609314..  609345:    32:    609334:  encoded
   3:       96..    127:   609326..  609357:    32:    609346:  encoded
 ...
 648:    20928..  20959:   704298..  704329:    32:    704299:  encoded
 649:    20960..  20981:   691987..  692008:    22:    704330:  last,encoded,eof
file: 650 extents found
Second, the btrfs filesystem defragment command returns immediately, without any error report, and with an unchanged filefrag output. My impression is that, as far as the tools are concerned, fragmentation of compressed files is not an issue on btrfs at all. However, my ears clearly tell me: yes, it is an issue for me. So, how do I defragment a compressed file on btrfs? And how can I even see whether such files are contiguous, not in terms of their encoded (i.e. compressed) extents, but in terms of their compressed blocks on the HDD?
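A minimal shell sketch (not from the original post) of how such a file is usually recompressed and re-inspected. Note that btrfs stores compressed data in extents of at most 128 KiB, so filefrag will always report many "encoded" extents for a compressed file; compsize (package btrfs-compsize) is an assumed extra tool here:
sudo btrfs filesystem defragment -v -t 256M -czstd file   # rewrite and recompress the file
sudo filefrag -v file    # extent map as FIEMAP reports it (one entry per <=128K compressed extent)
sudo compsize file       # compressed vs. uncompressed size, per compression algorithm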
peterh (10448 rep)
May 12, 2025, 02:22 PM • Last activity: May 12, 2025, 06:29 PM
1 votes
1 answers
2425 views
BTRFS - How to reclaim unused space in each chunk used and compact the filesystem, basically defrag free space?
How do I reclaim unused space from each chunk used by a BTRFS filesystem? Let's say there are lots of chunks that are only 10% to 50% utilised: how do I defragment those so that the 50% to 90% of free space per chunk is reclaimed? If possible, this would also compact the data.
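A minimal sketch (not from the original question) of the usual approach, assuming the filesystem is mounted at /mnt/data: a balance with usage filters rewrites only the mostly-empty chunks, packing their contents into fewer chunks and returning the emptied chunks to the unallocated pool:
sudo btrfs balance start -dusage=50 -musage=50 /mnt/data   # rewrite data/metadata chunks that are <50% used
sudo btrfs filesystem usage /mnt/data                      # verify that 'Device unallocated' grew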
DanglingPointer (262 rep)
Nov 25, 2023, 04:51 AM • Last activity: Mar 20, 2025, 05:02 PM
0 votes
0 answers
50 views
ZFS send much slower than before
With the exact same devices (computer, external hard drive…), my ZFS backup (via zfs send | zfs recv) takes much longer than before. Now it takes around 1.5 to 2.2 minutes to copy 1 GB, but before, if I remember correctly, it could easily copy multiple GB per minute (around 100 MB/s, so maybe 6 GB per minute if that's possible).
$ zpool list
NAME                SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zfs_pool           1.63T  1.22T   420G        -         -    47%    74%  1.00x    ONLINE  -
zpool_my_passport  1.81T  1.68T   135G        -         -    17%    92%  1.00x    ONLINE  -
(passport is the external hard drive; it is NOT an SSD, but zfs_pool is an SSD.) I feel like the issue started when the external drive started to get full (I now need to remove old snapshots), but 17% fragmentation does not seem that bad (though 47% seems worse, no?). Any idea whether fragmentation could be the issue here, and whether I can somehow restore the old performance?
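A minimal sketch (not from the original question) of how to narrow this down; the dataset and snapshot names are placeholders, and pv is an assumed extra tool:
zfs send -v zfs_pool/data@snap | pv | zfs recv -F zpool_my_passport/data   # pv shows the sustained throughput of the stream
zpool iostat -v 5    # in another terminal: per-vdev read/write bandwidth every 5 seconds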
tobiasBora (4621 rep)
Jan 18, 2025, 11:51 AM • Last activity: Jan 18, 2025, 01:31 PM
0 votes
1 answers
244 views
Encrypted LUKS fs inside a file: sparse or not?
I have a LUKS encrypted file filled with around 160 GB of data that I use a lot. For safety, I created the file with 400 GB. That is, of course, a lot of wasted space. So I switched to a sparse file, basically following the advice here, simply by creating the file with the seek option:
dd if=/dev/zero of= bs=1G count=0 seek=400
But then I thought: what happens if the file starts to get fragmented? Usually this is not a problem since I don't have very big files, and when I do, they are media files that usually don't change. But an encrypted container that I use this frequently will probably get fragmented quite soon... So my question is: is there any real downside to using a sparse file instead of a fixed-size file in my situation? Is fragmentation really an issue? Do you have any further advice?
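A minimal sketch (not from the original post) of how to keep an eye on the container; &lt;file&gt; stands for the container created by the dd command above:
du -h --apparent-size <file>   # the size the filesystem advertises (400G)
du -h <file>                   # blocks actually allocated so far
filefrag <file>                # extent count, a rough measure of fragmentation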
Luis A. Florit (509 rep)
Aug 8, 2024, 06:48 PM • Last activity: Aug 9, 2024, 10:09 AM
0 votes
2 answers
1361 views
Effects of copy (cp) vs cut (mv) on file systems
When you have to move a file to a different location in the same file system, you have two options: just cut and paste (mv), or copy and paste (cp) and then delete the old copy. I'm wondering what the effects are on common Linux file systems (especially ext2/3/4) in terms of possible fragmentation and long-term file system health and efficiency. In other words, regardless of which option is faster (of course mv is faster than cp), if you wanted to keep your file system as clean and efficient as possible over time, should you prefer mv or cp/rm when moving a file? Does it even matter on modern file systems?
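A small illustration (not from the original question), with placeholder file names: within one filesystem, mv is a rename(2), so the inode and its data blocks are untouched and no new fragmentation can be introduced, whereas cp allocates fresh blocks for a new inode:
stat -c '%i %b' somefile          # inode number and allocated block count
mv somefile subdir/somefile
stat -c '%i %b' subdir/somefile   # same inode, same blocks: only the directory entry moved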
reed (109 rep)
Jul 5, 2021, 08:35 AM • Last activity: Jul 11, 2024, 05:58 AM
6 votes
1 answers
3595 views
Understanding main memory fragmentation and hugepages
I have a machine that is intended for general use and which I also use to run a QEMU virtual machine. Because the virtual machine should be as performant as possible, I want to back the VM memory with hugepages, ideally 1GB hugepages. The machine has 32GB of RAM and I want to give 16GB to the VM. The problem is that during my normal use of the machine I might need all 32GB, so allocating the 16GB of hugepages at boot is not an option. To work around this I have a hook script that allocates the 16GB of hugepages when the VM boots. As you might expect, for 1GB hugepages this fails if the host machine has been used for any amount of time (it seems to work reliably with 2MB hugepages, though this is not ideal). What I don't understand is exactly why this happens. For example, I can open several applications (browser window, code editor, etc., just to force some fragmentation for testing), then close them so that only my desktop is open. My memory usage in this case is around 2.5G/32G. Is there really no way for the kernel to find 16 contiguous, aligned 1G pages out of the remaining 30G of RAM? That seems like very high fragmentation. Furthermore, I can run
$ sudo tee /proc/sys/vm/compact_memory <<<1
to try to defragment the RAM, but even then I have never successfully allocated 16 1G hugepages for the VM. This in particular is really shocking to me, since after defragmenting, with only 2.5G of RAM in use, the remaining 30G *still* isn't contiguous or aligned. What am I misunderstanding about this process? Does this seem like expected behavior? Additionally, is there any way to check whether compact_memory actually did anything? I don't see any output in dmesg or similar after running that command.
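A minimal sketch (not from the original question) of how this is typically inspected by hand. A 1G hugepage is far larger than the biggest block the buddy allocator tracks (usually order 10, i.e. 4 MiB, the last column of /proc/buddyinfo), so the reservation goes through contiguous-range allocation and fails on any unmovable page in the candidate range, which is why compaction alone often isn't enough:
cat /proc/buddyinfo                                # free blocks per order, per zone
echo 1 | sudo tee /proc/sys/vm/compact_memory      # ask the kernel to compact movable pages
echo 16 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages   # how many 1G pages were actually reserved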
Max Ehrlich (111 rep)
Jun 20, 2018, 02:53 PM • Last activity: May 2, 2024, 09:44 PM
3 votes
0 answers
203 views
What is "external fragmentation" in a file system?
I'm using EXT4, but I think my question concerns all Unix/Linux file systems. This 2022 answer to "Difference between fragment and extent in ext4" states that:
> External fragmentation occurs when you have related files all in one directory that are scattered all over the disk. If you are trying to read every file in the directory, this can case as much performance issues as internal fragmentation does. The ext4fs algorithms also attempt to minimize external fragmentation by attempting to allocate blocks for files in the same cylinder group as other files in the same directory.
I wonder if that's true, because:
1. I couldn't find any other source to corroborate that. Web search results on "external fragmentation" usually point to RAM fragmentation, while specifying "ext4 external fragmentation" brings up some answers, but usually old ones.
2. The definitions I did find instead read like this:
> There are two types of fragmentation, i.e. internal and external. Internal fragmentation refers to the fact that a file system uses specific sizes for a block, say 4KB, so if you have a file which is only 1KB in size, it will be stored in one 4KB block, therefore wasting 3KB of the block. This can't really be avoided.
>
> External fragmentation is when the files are not layed out continuously, i.e. spread over different blocks which can be far apart from each others. Thus it takes the disk head more time to collect all pieces together and reconstruct the file.
My opinion so far is that:
- The previously quoted StackExchange answer from 2022 is completely wrong.
- The definition in the second quote is the right one:
> External fragmentation is when the files are not layed out continuously, i.e. spread over different blocks which can be far apart from each others.
- And there is no such thing as "attempting to allocate blocks for files in the same cylinder group as other files in the same directory" (excerpt from the first quote). Basically, if a FS (or an OS) attempted to group the files of a directory together on the disk, it would conflict with the fact that usually a FS (at least in the case of EXT4) tries to surround a file with a lot of free space, to prevent file fragmentation in case of a future expansion of the file.
Could someone please confirm that my conclusions are correct (and thus that the quoted Stack Exchange answer is wrong)?
[EDIT] After some more research, I came to the conclusion that the terms "external" and "internal" fragmentation have never been formally defined in the context of file systems. A few sources refer to them in the sense used in this Arch Linux post from 2009 or this kernel.org wiki entry, while some (even fewer) sources refer to them like in this StackExchange post from 2022.
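For what it's worth, e2fsprogs ships a tool that reports free-space fragmentation, which is one of the readings of "external fragmentation" discussed above; a minimal sketch, with /dev/sdXN and /mountpoint as placeholders:
sudo e2freefrag /dev/sdXN     # histogram of free-extent sizes (free-space fragmentation)
sudo e4defrag -c /mountpoint  # per-file fragmentation score, for comparison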
ChennyStar (1969 rep)
Jan 2, 2024, 09:31 AM • Last activity: Jan 4, 2024, 07:00 AM
0 votes
0 answers
39 views
Is there any specific caution to take before resizing Btrfs?
### Current situation
I have an almost full /home partition, with just 4.4 GB of space remaining, while my / is more than half empty. Both partitions are Btrfs, as you can see:
% lsblk -o NAME,FSTYPE,FSAVAIL,FSUSE%,MOUNTPOINT,SIZE -e7
NAME   FSTYPE FSAVAIL FSUSE% MOUNTPOINT              SIZE
sda                                                238,5G
├─sda1 vfat      4,6G     0% /boot/efi               4,7G
├─sda2 swap                  [SWAP]                  4,8G
├─sda3 btrfs    67,0G    37% /                     107,1G
└─sda4 btrfs     4,4G    96% /home                   122G
### What I want to do
Naturally, I then want to shrink the / partition to give more space to /home.
### The question
Before downsizing / and upsizing /home, I am naturally a bit afraid. I know there is supposedly no need for a defragmentation tool on Btrfs because its specific mechanisms make one unnecessary, but I am still a little bit incredulous. At the least, is there a fragmentation analysis tool for Btrfs, or something that lets me know how safe the resizing will be? Are there any precautions I can take beforehand? Naturally I made a backup first, but it would be great if I could avoid having to restore it.
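A rough sketch (not from the original question) of the usual order of operations, assuming a partitioner such as GParted handles moving the boundary between sda3 and sda4; the amounts are placeholders:
sudo btrfs filesystem usage /          # how much of / is actually allocated vs. used
sudo btrfs filesystem resize -20G /    # shrink the mounted btrfs filesystem first
# ...then shrink /dev/sda3 and move/grow /dev/sda4 with the partitioner, and finally:
sudo btrfs filesystem resize max /home # grow /home's filesystem to fill its partition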
fauve (1529 rep)
Dec 12, 2023, 10:05 AM
2 votes
2 answers
2157 views
How to atomically defragment ext4 directories
Fragmentation seems to create a lot of unnecessary seeks when traversing a directory tree on a HDD:
# stat -c %F 00 01 02
directory
directory
directory
# filefrag -v 00 01 02
Filesystem type is: ef53
File size of 00 is 12288 (3 blocks of 4096 bytes)
 ext:   logical_offset:      physical_offset:  length:   expected:  flags:
   0:        0..       0:  428351942.. 428351942:      1:
   1:        1..       2:  428352760.. 428352761:      2:  428351943:  last,eof
00: 2 extents found
File size of 01 is 12288 (3 blocks of 4096 bytes)
 ext:   logical_offset:      physical_offset:  length:   expected:  flags:
   0:        0..       0:  428351771.. 428351771:      1:
   1:        1..       2:  428891667.. 428891668:      2:  428351772:  last,eof
01: 2 extents found
File size of 02 is 12288 (3 blocks of 4096 bytes)
 ext:   logical_offset:      physical_offset:  length:   expected:  flags:
   0:        0..       0:  428351795.. 428351795:      1:
   1:        1..       2:  428352705.. 428352706:      2:  428351796:  last,eof
02: 2 extents found
e4defrag isn't able to defrag them:
# e4defrag -v 00
ext4 defragmentation for directory(00)
[1/116] "00"    File is not regular file    [ NG ]
So how do I defragment a directory? Not its contents, but the directory itself. The directories are in use, so it should be done atomically, just like defragmenting regular files does not interfere with their use.
the8472 (274 rep)
Mar 11, 2017, 10:35 AM • Last activity: May 5, 2023, 10:10 PM
6 votes
1 answers
445 views
I observe that using `dd if=filein of=fileout` leads to filesystem fragmentation, what do I have to change?
There was a nice question that sadly got deleted while I was writing a rather extensive answer :( Not wanting to let that effort go to waste, let me paraphrase that question from the question text and comments:
> I observe that using dd to overwrite files does increase fragmentation. I'm looking for an alternative to dd that doesn't lead to fragmentation.
>
> As an example of how that fragmentation happens: imagine a file which occupies the entire filesystem. Start overwriting it, and you'll immediately see how the partition becomes completely "free", and you'll be able to write another file to it in the meantime. Blocks are allocated dynamically and there's zero guarantee older blocks will be reused when the file gets overwritten.
>
> I observe this behaviour on multiple file systems (ext2, 3 and 4, XFS, as well as FAT32 and NTFS) and on multiple OSes (Win95 through modern Fedora)².
>
> I'm convinced this is independent of FS and OS.
>
> My main file system is a bog-standard ext4. FedoraROOT: 103874/1310720 files (0.2% non-contiguous), 1754833/5242880 blocks. Minimum overall fragmentation.
---
Note that *I* myself cannot observe this; I'm trusting the original asker on these fragmentation claims!
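A small sketch of the mechanism being described, using the file names from the title: by default dd truncates the output file before writing, so the filesystem frees the old blocks and allocates new ones wherever it likes; conv=notrunc overwrites the existing blocks in place instead (on non-copy-on-write filesystems), so no reallocation and no new fragmentation occurs:
dd if=filein of=fileout conv=notrunc bs=1M   # overwrite in place, keeping fileout's existing block allocation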
Marcus Müller (47107 rep)
Jun 11, 2022, 03:08 PM • Last activity: Jun 11, 2022, 06:38 PM
3 votes
0 answers
1127 views
How to get a Btrfs filesystem fragmentation count?
The **btrfs check** command output doesn't mention how many files are fragmented like **e4defrag -c** or **e2fsck** with **ext4** filesystems. What command can give the number of fragmented files on a **whole Btrfs filesystem**?
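A minimal sketch (not from the original question) of one way to get such a count, assuming the filesystem is mounted at /mnt; note that for compressed files this over-counts, since every compressed extent shows up separately:
sudo find /mnt -xdev -type f -exec filefrag {} + \
  | awk -F: '$2+0 > 1 {n++} END {print n+0, "files with more than one extent"}'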
erik (51 rep)
Apr 7, 2022, 03:44 AM • Last activity: Apr 22, 2022, 12:51 PM
2 votes
2 answers
600 views
Will send & receive also defragment data?
I have a heavily fragmented (and 90% full) btrfs partition on my laptop. I'd like to perform defragmentation with the help of an identical spare hard drive. I have already re-created the partition table (GPT) and cloned the non-btrfs file systems with the help of rsync. I use btrfs snapshots, so I cannot simply rsync the contents of the partition (it wouldn't fit on the target drive). Would btrfs send & receive duplicate the fragmentation of the files, or would it be equivalent to a btrfs-aware rsync? I know that in the latter case it will not guarantee full defragmentation, but I hope to get rid of 99% of it.
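A minimal sketch (not from the original question) of the send/receive variant being asked about, with placeholder subvolume names; the send stream describes files logically rather than by their physical extents, so the receiving side allocates fresh extents and does not inherit the source's fragmentation:
sudo btrfs subvolume snapshot -r /mnt/old/@ /mnt/old/@backup   # send needs a read-only snapshot
sudo btrfs send /mnt/old/@backup | sudo btrfs receive /mnt/new/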
Adam Ryczkowski (5859 rep)
Apr 18, 2015, 05:08 PM • Last activity: Apr 4, 2021, 10:35 PM
-1 votes
1 answers
95 views
Do rolling releases inevitably cause higher fragmentation over time than point releases?
Let's compare *Debian Stable* as a candidate for point releases and *Arch Linux* as a candidate for rolling releases: **Does a rolling release by default cause higher fragmentation on the drive than a point release?**
Dave (1046 rep)
Feb 1, 2018, 05:11 PM • Last activity: Feb 21, 2021, 08:16 AM
2 votes
2 answers
388 views
Meanings of defragmentation and when it is needed
There seem to be two meanings of defragmentation:
- **Defragmentation** is a process that reduces the amount of fragmentation. Fragmentation occurs when the file system cannot or will not allocate enough contiguous space to store a complete file as a unit, but instead puts parts of it in gaps between other files.
- Under Windows, if we try to release existing free space from an NTFS partition (to later create a new partition, especially during a dual-boot installation of Ubuntu beside an existing Windows OS), we have to use Windows tools to move all the files to one end of the partition and leave as much free space as possible at the other end. I heard this is also called **defragmentation**. Alternatively, Linux tools such as gparted can release free space from an NTFS partition without first "defragmenting" it (in the sense of this paragraph, not in the sense of the first one).
### Questions:
1. I wonder if the two kinds of "defragmentation" above always happen together?
2. Does whether a file system (e.g. NTFS) needs defragmentation depend on
- the OS under which it has been used (e.g. Windows or Linux), or
- the file system type (e.g. NTFS) itself?
3. Are the answers to the questions in 2 different for the different meanings of defragmentation mentioned earlier? For example, I heard that:
- in Linux, defragmentation in the first sense isn't needed on an EXT4 partition unless the partition is more than 90% full, because Linux always tries to defragment automatically;
- releasing free space from an NTFS partition using the Linux tool gparted doesn't require defragmentation of the NTFS partition in the second sense. Is that because Linux always automatically moves all the files to one end of the partition as much as possible?
Tim (106420 rep)
Dec 29, 2014, 08:02 PM • Last activity: Nov 29, 2020, 05:11 PM
4 votes
1 answers
2152 views
How to defrag XFS file-system if xfs_fsr exits with "no improvement will be made"?
I am trying to defrag a heavily fragmented XFS file-system on a CentOS 6.6 machine:
[root@server opt]# xfs_db -c frag -r /dev/md3
actual 598, ideal 215, fragmentation factor 64.05%
However, when I attempt to launch the xfs_fsr utility, it exits with the message No improvement will be made (skipping):
[root@server opt]# xfs_fsr -t 25200 /dev/md3 -v
/opt start inode=0
ino=536871105
No improvement will be made (skipping): ino=536871105
How can I get this to defrag?
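A minimal sketch (not from the original question) of narrowing this down to individual files rather than the whole mount; the path is a placeholder:
sudo xfs_bmap -v /opt/path/to/large-file   # per-file extent map
sudo xfs_fsr -v /opt/path/to/large-file    # try to defragment just that file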
Chris (1157 rep)
Feb 24, 2015, 06:38 PM • Last activity: Aug 26, 2020, 11:10 AM
1 votes
1 answers
556 views
Identify if IO bottleneck is read or write
I use ddrescue to image failing disks to sparse files; often the files become highly fragmented (>50k fragments). I suspect that sometimes the imaging speed degrades because of the fragmentation. Is there a way to detect whether the slowness is caused by reads from the source disk or by writes to the target file?
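A minimal sketch (not from the original question) of separating the two sides with iostat from the sysstat package, which is an assumed extra tool here:
iostat -x 5   # watch %util and await for the source disk and for the disk holding the image file;
              # whichever stays pinned near 100% utilisation is the bottleneck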
Reinstate Monica (733 rep)
May 16, 2020, 09:54 PM • Last activity: May 17, 2020, 03:39 AM
2 votes
1 answers
579 views
Does BTRFS's compress logic apply when defragmenting?
When mounting a BTRFS filesystem with the compression option, BTRFS will [selectively compress files depending on whether they are deemed compressible or not](https://btrfs.wiki.kernel.org/index.php/Compression#incompressible). Does this same logic apply when defragmenting? Or does the following force compression?
btrfs filesystem defragment -r -czstd /data
# btrfs version
btrfs-progs v4.19
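A minimal sketch (not from the original question) of checking what actually ends up compressed on disk; compsize (package btrfs-compsize) is an assumed extra tool, and /data matches the command above:
sudo compsize /data                                 # compressed vs. uncompressed bytes before
sudo btrfs filesystem defragment -r -czstd /data
sudo compsize /data                                 # ...and after the defragment pass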
dippynark (337 rep)
Sep 4, 2019, 12:19 PM • Last activity: May 13, 2020, 08:43 PM
2 votes
0 answers
504 views
Simultaneous copy from multiple sources without fragmenting destination
I regularly need to copy large datasets from multiple smaller drives to a larger one. Lately I've been using a WD Easystore 12TB external USB 3.0 hard drive as my destination. Copying all the files in series takes about 3 days; the destination drive is idle the majority of the time, waiting for source reads. I can get the copy time under 20 hours by running a cp from each source at the same time, but that left most of the files fragmented. There is a patch for cp that adds a preallocate option, but it only works on filesystems that support the fallocate system call, and ntfs-3g does not. rsync has a --preallocate option, but it fails with "rsync: do_fallocate" "Operation not supported (95)", presumably for the same reason. I tried using dd with a block size larger than the file size, hoping that if the write didn't take place until the entire file was already in memory the allocation would be contiguous, but the files still ended up fragmented. I tried using ntfsfallocate to preallocate space for all the files (which took about 12 hours for 23k files), but cp does not appear to use the existing allocation when overwriting a file. Is there a Linux NTFS driver that supports fallocate, for any distribution? Other suggestions?
Pascal (323 rep)
Feb 6, 2020, 04:11 AM • Last activity: Feb 19, 2020, 07:39 PM
0 votes
1 answers
776 views
How can I disable ext4's automatic defragmentation?
I'm running tests that require some files to be heavily fragmented. I came up with a method to generate fragmented files, but something seems to be working in the background to defragment them. Using hdparm --fibmap, I see a file start out with thousands of fragments; then, after a lot of disk IO, I see it has under a hundred fragments. Is it possible to disable this?
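A minimal check (not from the original question), assuming the effect is ext4's delayed allocation rather than a background defragmenter (ext4 has none): the on-disk layout isn't final until writeback, so compare the map before and after forcing it; testfile is a placeholder:
filefrag -v testfile   # right after generating the file
sync                   # force writeback so the block allocation is final
filefrag -v testfile   # if the extent count changed, delayed allocation merged the extents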
Daffy (465 rep)
Jan 24, 2020, 09:58 AM • Last activity: Jan 25, 2020, 04:22 AM
1 votes
0 answers
87 views
Completely defragmenting a FAT32 filesystem
Is there any Linux tool that allows defragmenting a FAT32 filesystem not on a best-effort basis, but so that all files are guaranteed to be stored contiguously, with no exceptions? Possibly by writing the filesystem image in one go, like mkisofs? Background: I store songs on an SD card. Apparently, my car radio can't handle fragmented files at all and will stop playing after the first extent (typically a few seconds into the song). I verified this by running the filefrag utility: only the files that occupy exactly one extent play completely.
rkreis (27 rep)
May 2, 2019, 08:47 PM • Last activity: May 3, 2019, 05:46 AM