
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
2 answers
2491 views
Shell-/Bash-Script to delete old backup files by name and specific pattern
Every hour, backup files of a database are created. The files are named like this:

    prod20210528_1200.sql.gz
    pattern: prod`date +\%Y%m%d_%H%M`

The pattern could be adjusted if needed. I would like to have a script that:

- keeps all backups for the last x (e.g. 3) days
- for backups older than x (e.g. 3) days, only the backup from time 00:00 shall be kept
- for backups older than y (e.g. 14) days, only one file per week (Monday) shall be kept
- for backups older than z (e.g. 90) days, only one file per month (1st of each month) shall be kept
- the script should rather use the filename instead of the date (created) information of the file, if that is possible
- the script should run every day

Unfortunately, I have very little knowledge of the shell-/bash-script language. I would do something like this:

    if (file < today - (x + 1)) {
        if (%H_of_file != 00 AND %M_of_file != 00) { delete file }
    }
    if (file < today - (y + 1)) {
        if (file != Monday) { delete file }
    }
    if (file < today - (z + 1)) {
        if (%m_of_file != 01) { delete file }
    }

Does this make any sense to you? Thank you very much! All the best, Phantom
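A minimal sketch of such a script, assuming GNU date, the filename pattern above, and the example thresholds; `backup_dir` is a placeholder, and the `rm` is echoed so a run is harmless until the output looks right:

```bash
#!/bin/bash
# Sketch: prune hourly backups named prodYYYYMMDD_HHMM.sql.gz based on the
# date encoded in the filename (not the file's mtime). Thresholds in days.
x=3; y=14; z=90
backup_dir=/path/to/backups        # placeholder

today=$(date +%s)
for f in "$backup_dir"/prod*.sql.gz; do
    name=${f##*/}
    stamp=${name#prod}; stamp=${stamp%.sql.gz}   # e.g. 20210528_1200
    day=${stamp%_*};    hm=${stamp#*_}           # 20210528 and 1200
    file_epoch=$(date -d "$day" +%s) || continue
    age_days=$(( (today - file_epoch) / 86400 ))
    dow=$(date -d "$day" +%u)                    # 1 = Monday
    dom=$(date -d "$day" +%d)

    if   (( age_days > z )); then
        [ "$dom" != 01 ]   && echo rm -- "$f"    # keep only the 1st of the month
    elif (( age_days > y )); then
        [ "$dow" != 1 ]    && echo rm -- "$f"    # keep only Mondays
    elif (( age_days > x )); then
        [ "$hm"  != 0000 ] && echo rm -- "$f"    # keep only the 00:00 backup
    fi
done
# Drop the "echo" once the listed files are really the ones to delete,
# and run the script from a daily cron job.
```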
Phantom (143 rep)
May 28, 2021, 05:25 PM • Last activity: Jul 26, 2025, 04:04 AM
1 votes
1 answers
2219 views
Git - how to add/link subfolders into one git-repository directory
Assuming I have a file structure like this:

    ├── Project-1/
    │   ├── files/
    │   └── special-files/
    ├── Project-2/
    │   ├── files/
    │   └── special-files/
    └── Project-3/
        ├── files/
        └── special-files/

Now I want to create a Git repository including all the special-files folders. If they were files, I could create a hardlink, ln ./Project-1/special-files ./Git-Project/special-files-1, and so on, so I would get:

    Git-Project/
    ├── .git
    ├── .gitignore
    ├── special-files-1/
    ├── special-files-2/
    └── special-files-3/

However, hardlinks do not work with folders, and symlinks do not get handled by git. **Is there a way to collect/link these folders into a git repository folder?**
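One approach sometimes suggested (not from the question itself) is a bind mount: unlike a symlink, a bind-mounted directory looks like an ordinary directory to git, so its contents are tracked. A sketch with hypothetical paths, requiring root:

```sh
# Mount each project's special-files directory into the repository tree.
mkdir -p Git-Project/special-files-1
sudo mount --bind ./Project-1/special-files Git-Project/special-files-1

# To make the binding persistent, an /etc/fstab line of this form could be used:
# /abs/path/Project-1/special-files /abs/path/Git-Project/special-files-1 none bind 0 0
```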
nath (6094 rep)
Aug 5, 2021, 04:48 PM • Last activity: Jul 7, 2025, 01:01 PM
1 votes
2 answers
61 views
Can I remove some or all $HOME/.cache/at-spi2* directories?
I have read a description of [AT-SPI2](https://www.freedesktop.org/wiki/Accessibility/AT-SPI2/) but I cannot find any information about these hidden sockets. Looking for something else, I discovered many of these, for example:

    $ cd ~/.cache
    $ ls -agG at-spi2-1WN552
    total 52
    drwx------   2  4096 May  5 08:25 .
    drwx------ 289 45056 Jun 10 22:10 ..
    srwxrwxrwx   1     0 May  5 08:25 socket
    $ ls -agG at-spi2-AQJBD2
    total 52
    drwx------   2  4096 Oct 25  2023 .
    drwx------ 289 45056 Jun 10 22:10 ..
    srwxrwxrwx   1     0 Oct 25  2023 socket

The most recent is from yesterday, the oldest 2 years ago. Is it safe to remove some or all of them? My instinct is to set up a cron job to periodically remove all but the most recent. I am using Linux Mint. They certainly do survive reboots. I habitually shut down overnight. It always frustrates me to find unwanted files, even if they are empty. I think programmers should be more careful to clean up.

lsof (without sudo) returns nothing. sudo lsof always fails with errors:

    lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
          Output information may be incomplete.
    lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
          Output information may be incomplete.

1000 is my uid. /run/user/1000/gvfs is an empty directory. /run/user/1000/doc contains several empty directories with hexadecimal names; they are all dated at the start of the Epoch (1/1/1970). /run/user/1000/ contains several directories with similar (like at-spi2-[HEX]) but different names to those in the title.

ps -ef shows 3 processes, all owned by me and all started just after the last reboot:

    5223 5115 0 11:33 ? 00:00:00 /usr/libexec/at-spi-bus-launcher --launch-immediately
    5234 5223 0 11:33 ? 00:00:23 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 11 --address=unix:path=/run/user/1000/at-spi/bus_0
    5261 1 0 11:33 ? 00:00:02 /usr/libexec/at-spi2-registryd --use-gnome-session

I notice the reference to /run/user/1000/at-spi, but nothing to the .cache directories.

----

I was doing something else (I don't remember exactly what at that precise time), and I noticed a new directory has appeared, and sudo lsof is now working. Results of sudo lsof ~/.cache/at-spi2-A4VY72/socket:

    COMMAND     PID FD  TYPE             DEVICE SIZE/OFF   NODE NAME
    xdg-deskt 17871 13u unix 0xffff92cebe3ee400      0t0 112169 ~/.cache/at-spi2-A4VY72/socket type=STREAM (LISTEN)
    xdg-deskt 17871 14u unix 0xffff92cebe3ed800      0t0 117083 ~/.cache/at-spi2-A4VY72/socket type=STREAM (CONNECTED)
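If the directories do turn out to be stale, one cautious cleanup sketch (not from the question; the 30-day cutoff is arbitrary) previews before deleting anything:

```sh
# List at-spi2 cache directories untouched for more than 30 days:
find ~/.cache -maxdepth 1 -type d -name 'at-spi2-*' -mtime +30 -print
# If the list looks safe, the same match can be removed:
# find ~/.cache -maxdepth 1 -type d -name 'at-spi2-*' -mtime +30 -exec rm -r {} +
```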
Peter Bill (526 rep)
Jun 11, 2025, 03:24 PM • Last activity: Jun 19, 2025, 09:18 AM
141 votes
4 answers
92273 views
How do you do a dry run of rm to see what files will be deleted?
I want to see what files will be deleted when performing an `rm` in linux. Most commands seem to have a dry run option to show just such information, but I can't seem to find such an option for `rm`. Is this even possible?
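rm itself has no dry-run flag; a common workaround (a sketch, with *.log as a placeholder pattern) is to preview the same expansion with a harmless command first:

```sh
# See what the glob would match before removing anything:
ls -d -- *.log
# Or have find print the candidates, then delete the very same set:
find . -maxdepth 1 -name '*.log' -print     # dry run
find . -maxdepth 1 -name '*.log' -delete    # actual removal
```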
Cory Klein (19341 rep)
Feb 7, 2011, 08:09 PM • Last activity: Jan 1, 2025, 01:51 PM
444 votes
10 answers
790793 views
Remove all files/directories except for one file
I have a directory containing a large number of files. I want to delete all files except for file.txt. How do I do this? There are too many files to remove the unwanted ones individually and their names are too diverse to use * to remove them all except this one file. Someone suggested using

    rm !(file.txt)

But it doesn't work. It returns:

    Badly placed ()'s

My OS is Scientific Linux 6. Any ideas?
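The "Badly placed ()'s" message comes from a csh-family shell; !() is a bash extended glob. Two commonly used alternatives, sketched with file.txt standing in for the file to keep:

```sh
# In bash: enable extended globbing first, then use the negated pattern.
shopt -s extglob
rm -- !(file.txt)

# Shell-independent: let find do the exclusion.
find . -maxdepth 1 -type f ! -name 'file.txt' -delete
```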
Kantura (4925 rep)
Sep 5, 2014, 01:59 AM • Last activity: Nov 1, 2024, 06:21 PM
1 votes
1 answers
74 views
Find files by extension and replacing characters before and after the search pattern
I need to find files by extension, replace the random characters before the search pattern with the specified text (REPLACED), turn the numbers after this pattern into an increasing counter, and move them to another folder. Examples below. Find files by extension .EXT:
/opt/files/QvmBIsB3_PATTERN_77580.EXT
/opt/files/8iV8QhFwQos_PATTERN_77580.EXT
/opt/files/lgI6JUEh55za488_PATTERN_77580.EXT
change them and move them to another folder:
/opt/replaced/REPLACED_PATTERN_1.EXT
/opt/replaced/REPLACED_PATTERN_2.EXT
/opt/replaced/REPLACED_PATTERN_3.EXT
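A minimal sketch of one way to do this in bash, assuming GNU find/sort and taking /opt/files, /opt/replaced, PATTERN and .EXT literally from the example:

```sh
#!/bin/bash
# Move *_PATTERN_*.EXT files, replacing the random prefix with REPLACED
# and the trailing number with an increasing counter.
n=1
find /opt/files -maxdepth 1 -type f -name '*_PATTERN_*.EXT' -print0 |
    sort -z |
    while IFS= read -r -d '' f; do
        mv -- "$f" "/opt/replaced/REPLACED_PATTERN_${n}.EXT"
        n=$((n + 1))
    done
```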
anc (11 rep)
Apr 16, 2024, 12:23 PM • Last activity: Apr 16, 2024, 09:45 PM
1 votes
0 answers
753 views
How to Free Up Space on Full overlay and /dev/vda1 Filesystems in a Linux Environment?
I am facing an issue where the overlay and /dev/vda1 filesystems on my server are both completely utilized, showing 100% usage. This scenario has led to operational challenges, and I am seeking advice on how to effectively free up space or manage storage to mitigate this issue. Here is the output of df -h for context:

    Filesystem                                                        Size  Used Avail Use% Mounted on
    overlay                                                            97G   97G     0 100% /
    tmpfs                                                              64M     0   64M   0% /dev
    tmpfs                                                             111G     0  111G   0% /sys/fs/cgroup
    /dev/vdb                                                          393G   65G  329G  17% /root/easymaker
    tmpfs                                                             222G   12K  222G   1% /dev/shm
    198.18.32.36:/GJ_SHARE_FS11/8683766f-1a8c-4531-87ba-2370c3ff2ad3  20T      0   20T   0% /root/backup
    198.18.32.7:/GJ_SHARE_FS9/5470219b-58ed-4a37-9de0-1788451ad4b4    49T   16M   49T   1% /root/data
    /dev/vda1                                                          97G   97G     0 100% /etc/hosts
    tmpfs                                                             222G   12K  222G   1% /run/secrets/kubernetes.io/serviceaccount

As you can see, both the overlay and /dev/vda1 filesystems are at full capacity. I am currently running a cloud environment, which may be contributing to the rapid space utilization on these filesystems. I have looked into common solutions such as pruning unused Docker images and containers, but I am cautious and seeking guidance on best practices, especially considering the operational importance of the data and applications running on this server.

Could anyone provide insights or recommended strategies for safely freeing up space or preventing such issues in Docker environments? Are there specific logs, temporary files, or data that can be safely removed or moved to other storage without disrupting ongoing operations?

1. Attempted to identify large files or directories consuming disk space using du -ah / | sort -rh | head -20.
   - Expected: To list the top 20 largest files or directories.
   - Actual: Received an error message "sort: write failed: /tmp/sortVOhsjQ: No space left on device."
2. Tried to remove large files or directories to free up disk space using rm -rf /overlay/*.
   - Expected: To delete files or directories within the specified path.
   - Actual: Received an error message indicating that the path does not exist.
3. Attempted to remove the /etc/hosts file to free up space on the /dev/vda1 filesystem.
   - Expected: To successfully remove the file.
   - Actual: Received an error message "rm: cannot remove '/etc/hosts': Device or resource busy."
4. Tried to update the package repositories using apt-get update.
   - Expected: To update the package repositories successfully.
   - Actual: Received errors due to lack of disk space, such as "Splitting up /var/lib/apt/lists/developer.download.nvidia.com_compute_cuda_repos_ubuntu2204_x86%5f64_InRelease into data and signature failed."
5. Attempted to clean up unused Docker resources using docker network rm cw-net.
   - Expected: To remove the specified Docker network.
   - Actual: Received an error message "bash: docker: command not found."
6. Tried to identify large files or directories within the /overlay directory using du -ah /overlay | sort -rh | head -n 20.
   - Expected: To list the top 20 largest files or directories within the specified path.
   - Actual: Received an error message "du: cannot access '/overlay': No such file or directory."

Overall, despite various attempts to free up disk space and identify large files or directories, encountered limitations due to lack of available disk space and inability to execute certain commands. Additional solutions or assistance may be required to address the underlying issue effectively.
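One small observation from the output above: sort failed because its temporary files go to /tmp, which sits on the full root filesystem. A hedged sketch (assuming /root/easymaker, the roomy /dev/vdb mount, is writable) keeps du on one filesystem and gives sort somewhere to spill:

```sh
# -x keeps du on the root filesystem; -T points sort's temp files at /dev/vdb.
du -xah / 2>/dev/null | sort -rh -T /root/easymaker | head -n 20
```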
babel AI (11 rep)
Apr 6, 2024, 04:35 PM • Last activity: Apr 6, 2024, 04:39 PM
0 votes
1 answers
894 views
How can I add custom entries to "Create New" on KDE Plasma?
I'm using KDE Plasma, and frequently need to package file sets into tar.gz or zip. This usually requires opening Ark, creating a new file, and working from there, which is a run-around. It would be better if I could right-click in the Dolphin workspace for a directory, go to the "Create New..." submenu, and pick this file type out; but at the moment, the only options are:

- Folder...
- Text File...
- LibreOffice Calc/Draw/Impress/Writer...
- Link to Location/File/Directory/Application...

This feels very much like something I should be able to customize, but I unfortunately have no idea where to look for the option. Does anyone know how I can add additional file types to this menu?
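As a hedged sketch (the template location can vary between Plasma versions, and every file name here is made up): Plasma's "Create New" entries are driven by small .desktop files in a templates directory, each pointing at a file to copy.

```sh
# Assumed user-level template directory for Plasma's "Create New" menu:
mkdir -p ~/.local/share/templates/.source

# Any prepared archive can serve as the file to copy, e.g. a zip with one placeholder:
( cd /tmp && touch placeholder && zip ~/.local/share/templates/.source/empty.zip placeholder )

# The menu entry itself:
cat > ~/.local/share/templates/zip-archive.desktop <<'EOF'
[Desktop Entry]
Name=Zip Archive...
Comment=Enter archive name:
Type=Link
URL=.source/empty.zip
Icon=application-zip
EOF
```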
Michael Macha (323 rep)
Mar 15, 2024, 05:22 PM • Last activity: Mar 22, 2024, 10:54 AM
0 votes
0 answers
180 views
Shell script to do 'cdo' operation on multiple Netcdf file embedded in multiple folders
I have a directory on my Linux machine, let's name it `MYDIR`, with a number of folders labeled based on the years (1982, 1983, 1984, ..., 2022). Within each sub-folder, there are other folders named according to the months (01, 02, 03, ..., 12), with several NetCDF daily data files inside each of these folders as well:
MYDIR [directory]
--1982[folder]   
----01[folder]
----02[folder]
----03[folder]
----12[folder]   
--1983[folder]
----01[folder]
----02[folder]
----12[folder]
I want to perform 2 operations on each file in all the folders. The operations are defined below.
cdo mergetime *.nc outfile.nc;
cdo ymonmean outfile.nc L4_GHRSST-SSTfnd_1982_01.nc
Finally, I want to save all L4_GHRSST-SSTfnd_1982_01.nc, L4_GHRSST-SSTfnd_1982_02.nc, ...,L4_GHRSST-SSTfnd_2022_01.nc, ..., L4_GHRSST-SSTfnd_2022_12.nc in a different folder named monmean in MYDIR. Can someone help me create a shell script which can perform the mentioned process?
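A minimal sketch of such a loop, assuming the layout above, that cdo is installed, and that MYDIR stands for the real path; the year range and output names follow the example:

```bash
#!/bin/bash
# For every MYDIR/<year>/<month> folder: merge the daily NetCDF files with
# mergetime, run ymonmean on the result, and collect the output in MYDIR/monmean.
cd MYDIR || exit 1
mkdir -p monmean
for year in $(seq 1982 2022); do
    for month in 01 02 03 04 05 06 07 08 09 10 11 12; do
        dir=$year/$month
        [ -d "$dir" ] || continue
        rm -f "$dir/outfile.nc"          # don't merge a leftover from a previous run
        cdo mergetime "$dir"/*.nc "$dir/outfile.nc"
        cdo ymonmean "$dir/outfile.nc" "monmean/L4_GHRSST-SSTfnd_${year}_${month}.nc"
    done
done
```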
Farshid Daryabor (1 rep)
Jan 12, 2024, 02:11 PM • Last activity: Jan 12, 2024, 02:31 PM
0 votes
2 answers
82 views
How can I generate a cumulative plot where I can view the number of files by the last modified date?
I have a large number (millions) of files in a folder, its subfolders and its subsubfolders that were modified over the past 15 years. How can I generate a cumulative plot where I can view the number of files by the last modified date? If that matters, I use Ubuntu.
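One way to get the underlying data (a sketch assuming GNU find; /path/to/folder is a placeholder) is to count files per modification year and accumulate; the two-column output can then be plotted with gnuplot, LibreOffice, or similar:

```sh
# Column 1: year, column 2: cumulative number of files modified up to that year.
find /path/to/folder -type f -printf '%TY\n' |
    sort | uniq -c |
    awk '{ total += $1; print $2, total }'
```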
Franck Dernoncourt (5533 rep)
Dec 24, 2023, 02:16 PM • Last activity: Dec 25, 2023, 03:25 PM
-1 votes
1 answers
656 views
How to rename files in Linux like the ren command in Windows
I need to rename a bunch of files (more than 100) on an Ubuntu system, and want to know how I do that when the pattern of the files is something like "Filename_01.jpg" to "NameOfFile_01.jpg". In Windows, I would type:

    ren Filename_*.jpg NameOfFile*.jpg

Because of the convoluted way the various commands I have found (rename, mmv, etc.) work, and the syntax examples, I can't make heads or tails of those commands. I don't need a full explanation of how the command works, I just need the exact syntax to do this.
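For this specific pattern, a sketch using the Perl-based rename available on Ubuntu (package rename; the -n flag only previews):

```sh
rename -n 's/^Filename_/NameOfFile_/' Filename_*.jpg   # preview
rename    's/^Filename_/NameOfFile_/' Filename_*.jpg   # actually rename
```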
Ed Swafford (111 rep)
Sep 23, 2023, 07:33 PM • Last activity: Sep 24, 2023, 10:10 AM
10 votes
5 answers
11387 views
How to disable trash can in Thunar/XFCE?
I found myself always holding Shift when I delete a file with Thunar (the XFCE file manager). When I was using Windows I was always disabling "recycle bin" immediately after installation. I've looked for similar option in Thunar settings but had no luck finding it. Do you happen to know a way?
Ivan (18358 rep)
Oct 14, 2012, 06:16 PM • Last activity: Sep 6, 2023, 10:10 PM
4 votes
0 answers
196 views
Cygwin inheritance of Windows's security for cross-account file creation/modificaton?
On a Windows 10 laptop, I have an administrator account Admin, a non-administrator account User.Name, and the default Public account. From either the Admin or User.Name account, files created in the ~/ tree have read/write permission for the *user* but not the *group*. The command ls -l shows the group to be None. In the following examples, the commands were issued in a different order than shown here, so the time stamps aren't very informative.

Example from ~/tmp of the User.Name account:

    User.Name@Laptop-Hostname ~/tmp
    $ echo "Hello" > Test.txt
    $ zip Test.zip Test.txt
      adding: Test.txt (stored 0%)
    $ ls -l Test.{txt,zip}
    -rw------- 1 User.Name None   6 2023-08-11 00:42 Test.txt
    -rw------- 1 User.Name None 172 2023-08-11 01:05 Test.zip

Example from ~/tmp of the Admin account:

    Admin@Laptop-Hostname ~/tmp
    $ echo "Hello" > Test.txt
    $ zip Test.zip Test.txt
      adding: Test.txt (stored 0%)
    $ ls -l Test.{txt,zip}
    -rw------- 1 Admin None   6 2023-08-11 00:44 Test.txt
    -rw------- 1 Admin None 172 2023-08-11 01:02 Test.zip

From account User.Name or Admin, however, if I create files in the /c/Users/Public/ tree, both the user *and* the group have read/write permission.

Example from ~/tmp of the User.Name account:

    User.Name@Laptop-Hostname ~/tmp
    $ echo "Hello" > /c/Users/Public/tmp/TestFromUser.Name.txt
    $ zip /c/Users/Public/tmp/TestFromUser.Name.zip Test.txt
      adding: Test.txt (stored 0%)
    $ ls -l /c/Users/Public/tmp/TestFromUser.Name.{txt,zip}
    -rw-rw----+ 1 User.Name None   6 2023-08-11 00:52 /c/Users/Public/tmp/TestFromUser.Name.txt
    -rw-rw----+ 1 User.Name None 172 2023-08-11 00:53 /c/Users/Public/tmp/TestFromUser.Name.zip

Example from ~/tmp of the Admin account:

    Admin@Laptop-Hostname ~/tmp
    $ echo "Hello" > /c/Users/Public/tmp/TestFromAdmin.txt
    $ zip /c/Users/Public/tmp/TestFromAdmin.zip Test.txt
      adding: Test.txt (stored 0%)
    $ ls -l /c/Users/Public/tmp/TestFromAdmin.{txt,zip}
    -rw-rw----+ 1 Admin None   6 2023-08-11 00:47 /c/Users/Public/tmp/TestFromAdmin.txt
    -rw-rw----+ 1 Admin None 172 2023-08-11 00:47 /c/Users/Public/tmp/TestFromAdmin.zip

I confirmed the more open permissions by creating a zip file /c/Users/Public/tmp/Test.zip from the User.Name account. I can subsequently use Cygwin's zip command to add files to /c/Users/Public/tmp/Test.zip from the Admin account. Furthermore, I can subsequently use Cygwin's zip command to add files to /c/Users/Public/tmp/Test.zip from the User.Name account. This is the behaviour that I find most convenient, but since I'm not using Windows Explorer, I wonder ***how Cygwin knows to make files in the /c/Users/Public file tree readable and writable by both the user and the group?*** Where in the documentation should I focus to understand this?

I looked through the documentation on [POSIX accounts, permission, and security](https://www.cygwin.com/cygwin-ug-net/ntsec.html), but frankly much of it delves into a lot of detail that I don't have the grounding to understand. I was wondering whether there might be a simpler high-level description of behaviour, such as "Cygwin interacts with Windows at such-and-such a level to inherit the security behaviour for cross-account creation and modification of files". A concise expression of the designed-for behaviour would help users understand what to expect under different cross-account operations.
# Troubleshooting

To keep the troubleshooting manageable, I note that the difference seems to be between files written in a user's own file space versus Public's file space. Therefore, I compare User.Name writing to User.Name's file space versus User.Name writing to Public's file space, i.e., I dispense with the comparison with Admin.

Based on *G-Man's* suggestions, I tested whether the different permissions were due to writing files in the Cygwin file tree rooted at /home rather than the larger Windows file tree rooted at /c/Users. It appears to be unrelated, which confirms that the different permissions are due to writing across to the Public account rather than writing to the /c/Users tree vs. the /home tree.

    User.Name@Laptop-Hostname ~/tmp
    $ # Permissions of one's Cygwin home directory.
    $ # Not sure how relevant this is.
    $ ls -ld ~
    drwxr-xr-x 1 User.Name None 0 Aug 12 13:13 /home/User.Name

    $ # Permissions of one's own current directory.
    $ # Not sure how relevant this is.
    $ ls -ld ~/tmp
    drwx------ 1 User.Name None 0 Aug 12 02:20 /home/User.Name/tmp

    $ # Permissions of file created in one's own Windows file tree.
    $ # Only "User" has read/write access.
    $ ls > /c/Users/User.Name/Documents/List.txt
    $ ls -l /c/Users/User.Name/Documents/List.txt
    -rw-------+ 1 User.Name None 414 2023-08-12 13:19 /c/Users/User.Name/Documents/List.txt

    $ # Permissions of file created in Public's Windows file tree.
    $ # Both "User" and "Group" have read/write access.
    $ ls > /c/Users/Public/Documents/List.txt
    $ ls -l /c/Users/Public/Documents/List.txt
    -rw-rw----+ 1 User.Name None 414 2023-08-12 13:20 /c/Users/Public/Documents/List.txt

I was also advised to apply getfacl to the folders containing the files being compared. Here are the results of getfacl for a user's Cygwin "tmp" folder, a user's Windows "Documents" folder, and Public's "Documents" folder:

    User.Name@Laptop-Hostname ~/tmp
    $ # FACL for user's Cygwin "tmp" folder
    $ getfacl ~/tmp
    # file: /home/User.Name/tmp
    # owner: User.Name
    # group: None
    user::rwx
    group::---
    other::---
    default:user::rwx
    default:group::r-x
    default:other::r-x

    $ # FACL for user's Windows "Documents" folder
    $ getfacl /c/Users/User.Name/Documents/
    # file: /c/Users/User.Name/Documents/
    # owner: User.Name
    # group: None
    user::rwx
    group::---
    group:SYSTEM:rwx            #effective:---
    group:Administrators:rwx    #effective:---
    mask::---
    other::---
    default:user::rwx
    default:group::---
    default:group:SYSTEM:rwx            #effective:---
    default:group:Administrators:rwx    #effective:---
    default:mask::---
    default:other::---

    $ # FACL for Public's Windows "Documents" folder
    $ getfacl /c/Users/Public/Documents/
    # file: /c/Users/Public/Documents/
    # owner: SYSTEM
    # group: SYSTEM
    user::rwx
    group::---
    group:BATCH:rwx
    group:INTERACTIVE:rwx
    group:SERVICE:rwx
    group:Administrators:rwx
    mask::rwx
    other::---
    default:user::rwx
    default:user:SYSTEM:rwx
    default:group::---
    default:group:BATCH:rwx
    default:group:INTERACTIVE:rwx
    default:group:SERVICE:rwx
    default:group:Administrators:rwx
    default:mask::rwx
    default:other::---

I read up about FACLs [here](https://web.archive.org/web/20121022035645/http://vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html). I note that "named users" and "named groups" *both* belong to the "group class". The *effective* permissions are the minimum (bit-wise ANDing) of the named user/group permission bits with a 3-bit "mask".
This corroborates with the presence of #effective:--- above. According to the text, the group permissions from ls -l are those of the *mask*, which may be more or less permissive than any specific named user/group. Counterintuitively, the *owning* group's permissions are *not* ANDed with the mask (if I understood the text correctly). I make two assumptions in interpreting the text: (1) that the "owning" group is the group to which the owner user belongs and (2) the group None corresponds to the FACL entries leading with group::. The first getfacl above is for /home/User.Name/tmp. The default entries explain why newly created files have user read/write permissions, but *not* why new files lack execute permission (x). Furthermore, the default entries are inconsistent with the fact that new files lack group/other read and execute permissions. The next getfacl is for /c/Users/User.Name/Documents/. The default --- entries *do* explain why group/other have no permissions for newly created files. The third and final getfacl is for /c/Users/Public/Documents/. The default:user::rwx entry does explain why a user such as User.Name has read/write access to a file created by User.Name -- *assuming that user refers to the owner/creator of the file rather than the directory.* The default:group:Administrators:rwx entry also explains why the Admin account could add to a Zip file created by User.Name. A confirmation of the correctness of this reasoning would be appreciated (and would constitute an answer, if one wishes to post it as the answer).
user2153235 (467 rep)
Aug 9, 2023, 05:07 PM • Last activity: Aug 14, 2023, 04:02 AM
4 votes
3 answers
1668 views
Renaming files in Linux using perl scripting
I have a set of files with prefix, say "pre_", on a Linux machine, and I just want to rename all of these files by removing that. Here is the perl code I wrote; it doesn't throw any errors, but the work is not done.

    #!/usr/bin/perl -w
    my @files = `ls -1 | grep -i "pre_.*"`;
    foreach $file ( @files ) {
        my @names = split(/pre_/, $file);
        my $var1 = $names;
        'mv "$file" "$var1"';
    }
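For what it's worth, the same prefix removal can be sketched as a plain shell loop (the last statement in the Perl above is just a quoted string, so it never executes anything):

```sh
# Strip the pre_ prefix from every matching file in the current directory.
for f in pre_*; do
    mv -- "$f" "${f#pre_}"
done
```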
Sasi Pavan (51 rep)
Aug 9, 2023, 06:51 AM • Last activity: Aug 10, 2023, 05:59 AM
0 votes
2 answers
249 views
Removing certain numbers from batch of filenames
I have a batch of files that contain the numbers 21 to 29 in the middle of the file name. As I do not want to remove that number for hundreds of files by hand I tried using Terminal:
for f in *; 
do for x in {21..29};
rename=`echo "$f" | sed 's/\$x//'`
mv "$f" "$rename"; done;
However, nothing happens. If I replace the "$x" with the letter "a" just for fun on a test folder with a few of the files, the first letter "a" is actually removed, but for the numbers it does not work. What am I doing wrong? I'd appreciate any help very much! Thank you!
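A sketch of a corrected loop: the inner for is missing its do/done, and the single quotes keep $x from expanding inside the sed expression; bash parameter expansion avoids sed entirely:

```sh
for f in *; do
    new=$f
    for x in {21..29}; do
        new=${new//$x/}          # remove every occurrence of the number
    done
    [ "$new" != "$f" ] && mv -- "$f" "$new"
done
```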
Michael (1 rep)
May 26, 2023, 03:36 PM • Last activity: May 26, 2023, 06:16 PM
1 votes
1 answers
47 views
Paste files with the same names from directories and subdirectories to another directory
I'm looking for a way to paste files with the same names, contained in directories and subdirectories, to another directory with the same subdirectory structure. For instance :
Dir1/a/a/file1
Dir2/a/a/file1
Those would be pasted into this directory :
Dir3/a/a/file1
Now I found on stackexchange a piece of zsh code that I modified to get the following :
#!/bin/zsh

typeset -A files
for file (dir*/*/*/*(nN)) files[$file:t]+=$file$'\0'
for file (${(k)files}) paste -d "\0" ${(0)files[$file]} > outputDir/*/*/$file
This doesn't work, as the code doesn't understand the "*" after outputDir. If I set a precise set of subdirectories, for instance outputDir/a/a/$file, it works like a charm. I don't really know zsh, as I discovered it with this piece of code. How can I keep the same subdirectory structure for the output dir? Thank you for your help. Regards
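A possible sketch (directory names follow the example; the layout is assumed) keys the associative array on the path relative to each top-level directory, so the output keeps the same subdirectory structure:

```zsh
#!/bin/zsh
typeset -A files
# Key: path relative to Dir1/Dir2; value: NUL-separated list of matching files.
for file (Dir[12]/**/*(.N)) files[${file#*/}]+=$file$'\0'
for rel (${(k)files}) {
    mkdir -p Dir3/${rel:h}
    paste -d '\0' ${(0)files[$rel]} > Dir3/$rel
}
```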
Mathias Paicheler (11 rep)
Mar 17, 2023, 04:58 PM • Last activity: Mar 19, 2023, 12:12 PM
1 votes
2 answers
146 views
rename first two periods to hyphens
I have the following command to change periods to hyphens in file names recursively, so long as they have another period both before and after them:

    find path/to/dir -depth -type f -name '*.*' -exec rename -n -d 's/(?<=.)\.(?=.*\.)/-/g' {} +

However, it can't be used effectively on all filenames I wish to adjust, and needs to be rewritten according to the following rules:

1. Renaming only occurs on filenames that start with a number.
2. Hidden files do not get changed.
3. Only the first two periods in every filename get changed.

E.g.: 2020.12.06_name123.ext.xmp becomes 2020-12-06_name123.ext.xmp while name123.ext.xmp remains unchanged. How to solve? Running Linux Mint Cinnamon 21.
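A sketch along the same lines (GNU find plus the same File::Rename rename; -n previews only): requiring the name to start with a digit already excludes hidden files, and two substitutions replace only the first two periods:

```sh
find path/to/dir -depth -type f -name '[0-9]*' \
    -exec rename -n -d 's/\./-/; s/\./-/' {} +
```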
ItHertz (97 rep)
Feb 10, 2023, 12:22 PM • Last activity: Feb 11, 2023, 09:43 AM
1 votes
0 answers
444 views
Whenever I commit and push a file to github, a duplicated ending with ~ appears in my local and remote repository
I am a beginner in Linux and I am still navigating my way around the terminal and file manipulation. Whenever I commit and push a file to my GitHub, a duplicate is created alongside it. Can I delete the duplicate without any implications? Also, how do I prevent this from happening again? An example: I push "filename1" and "filename1~" appears alongside it in both local and remote repositories. And filename1~, the duplicate, is always empty, though filename1, the original, contains text.
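Those trailing-tilde files are typically backup copies written by the editor (gedit and Emacs create them by default) rather than something git produces; a common remedy, sketched here, is to ignore them and untrack any already committed:

```sh
echo '*~' >> .gitignore          # keep backup copies out of future commits
git rm --cached '*~'             # stop tracking already-committed ones (files stay on disk)
git commit -m "Ignore editor backup files"
```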
Omari (11 rep)
Feb 9, 2023, 04:06 PM
0 votes
0 answers
38 views
Is there a useful utility for managing multiple "alternatives" of a given file?
The example I'm going to give here is a simple media file conundrum. In this example, say I have multiple versions of a given audio file:
SomeSongLibrary/
     awesome-song-compressed.mp3
     awesome-song-lossless.flac
     awesome-song-raw-project-file.binaryblob
I also have a media playing application (e.g. Jellyfin, or Quodlibet, etc.) that is really only interested in one of these files at a given point in time. For example, Jellyfin has no ability to glob ignore specific files, so our given awesome song will show up twice (assuming it doesn't also grab that binaryblob by mistake, which can occur I've found!). That's where my issues begin and the desire to manage multiple "forks" of an individual file begin to show themselves. So the question becomes what are the means to solving this problem in a user-friendly way (cli is accepted)? The easiest but clumsiest way I found to solve this problem is to use a hidden folder with symlinks to the real content -- but obviously the file extension wouldn't be able to match the desired link. This example would look like:
SomeSongLibrary/
     awesome-song-link # symlink, points to files inside swap
     .awesome-song-swap/
          awesome-song-compressed.mp3
          awesome-song-lossless.flac
          awesome-song-raw-project-file.binaryblob
This is obviously error prone without a utility application, as for every file you'd need to make proper links. Additionally, you have to really hope that the given unix application is not at all dependent on the file extension to determine filetype (I would hope this is the case, but you cannot guarantee it) and also that the application is smart enough to skip hidden folders during the media scanning process. The next solution I thought of was using a git project for each of these folders with forks for each collection of filetypes or content. So you would make a git repo for SomeSongLibrary, create a branch in SomeSongLibrary for each of the types of files (mp3, flac, etc.) and then switch between those forks when you want access to different versions of a given file. This is great in theory, but also terrible since git doesn't really love dealing with binary files. While the cost of the binary files isn't much worse than just having multiple copies at the first place, this is not the case when you consider files that change (e.g. tags of awesome-song-compressed.mp3 being updated.) Additionally, I don't really need all the cruft associated with a full versioning system. Lastly, it would be nice to manage individual files in a folder, not the whole folder at once (so different files can be different versions.) So it might be a long shot, but has anyone ever had a problem like this and come up with a clever solution (script or full blown application?) Additionally, is this even a good idea or does it go against unix/posix file system standards?
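As a sketch of the symlink-swapping idea only (every name here is hypothetical), a tiny helper that repoints the visible link at one of the alternatives kept in the hidden swap folder:

```sh
#!/bin/sh
# Usage: switch-alt <link-name> <alternative-in-swap-dir>
# e.g.:  switch-alt awesome-song-link awesome-song-lossless.flac
link=$1
alt=$2
swap=".${link%-link}-swap"
[ -e "$swap/$alt" ] || { echo "no such alternative: $swap/$alt" >&2; exit 1; }
ln -sfn "$swap/$alt" "$link"     # repoint the link at the chosen alternative
```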
TheYokai (143 rep)
Jan 9, 2023, 10:48 PM
2 votes
6 answers
11147 views
Renaming files using regex
I have a number of files in a folder on a Linux machine with the following names: 11, 12, 13, 14, 15, 21, 22, 23, 24, 25, 31, 32, 33, 34, 35. I would like to use a regex in order to rename them with the .inp extension. I tried

    mv * *.inp
    mv: target '*.inp' is not a directory

which produced an error. I also tried using the regex instead of the *. So, I understand that mv is used to move files around. I also got the idea that perhaps I could use ./*.inp to force mv to write in the same folder, but it failed. So, apart from not understanding correctly how mv works, how would I proceed to have this done with mv?
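mv renames a single source to a single target, so the usual approach is a loop; a sketch, with ?? matching the two-character names from the question:

```sh
for f in ??; do
    mv -- "$f" "$f.inp"
done
```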
Strelok (123 rep)
Feb 1, 2022, 01:44 PM • Last activity: Dec 16, 2022, 12:24 PM