
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
44 views
Cannot open LibreOffice files on Samba share concurrently on Debian Linux - File locking issue
We have a Samba file server and use Debian Linux for both server and clients. Everything works fine, except for one thing: if someone opens a LibreOffice document (mainly Calc / *.ods), nobody else can open the same file concurrently, not even "read only" or as a copy. If I try to open such a file, LibreOffice says it is locked by a certain named user or by an unknown user (the latter happens if I rm the ~lock. file). Then I can choose if I want to open the file read-only or if I want to work with a copy of the file. That's fine. But neither option works! If I choose "open read only", nothing happens. If I choose "work with a copy", an empty Writer document opens up (although I tried to open an ods file with Calc). Next thing I tried: I can't even run md5sum document.ods! It says permission denied. So it's not just LibreOffice that cannot open the file for concurrent read access. LibreOffice seems to put an exclusive lock on the file when it is opened for editing. Nobody else can open the file until the first user closes the document. Any ideas what I could try? Preventing concurrent write access is reasonable. But why does it also block concurrent read access? And how can I disable that? getfacl output on the file server looks fine so far.
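A minimal shell sketch for narrowing this down on the Samba host, assuming smbstatus and testparm are available there; the option names in the grep are only the usual locking-related settings to review, not a confirmed fix:

#!/bin/sh
# Run on the Samba server while a client has the .ods file open.

# List the locks currently held through Samba (file path, oplock type
# and access mode are shown per entry).
smbstatus -L

# Dump the effective configuration and review the locking-related options;
# "strict locking", "oplocks" and "kernel share modes" are the settings most
# often involved when one client's open blocks every other reader.
testparm -s 2>/dev/null | grep -Ei 'strict locking|oplocks|share modes'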
MrSnrub (145 rep)
Jun 25, 2025, 09:16 AM • Last activity: Jun 25, 2025, 10:59 AM
0 votes
1 answer
63 views
serial console hang in Linux kernel-5.10.188
I am working on an embedded Linux system (kernel-5.10.188); /dev/ttyS2 is the serial console and ash from busybox is the login shell. After logging in to the system, I ran top -d 1 in the serial console (I am using MobaXterm on Windows 11 to access the serial console), and it worked well. Then I closed the PC's lid (Windows suspended itself). A few minutes later I resumed the PC and found the serial connection in MobaXterm was down. I typed 'R' to re-connect the serial console, but there was NO output from it. I logged in to the system through adb shell and got the following. * The top -d 1 process (PID 345) showed this status:
Name:   top
Umask:  0022
State:  S (sleeping)
Tgid:   345
Ngid:   0
Pid:    345
PPid:   210
.....
voluntary_ctxt_switches:        51
nonvoluntary_ctxt_switches:     22
The last two lines showed the same values when I ran cat /proc/345/status several times, so the process is not running. Running cat /proc/345/stack showed the following.
# cat /proc/345/stack
[] wait_woken+0x74/0x94
[] n_tty_write+0x480/0x4f0
[] file_tty_write.isra.36+0x1c8/0x358
[] vfs_write+0x3e8/0x4d8
[] ksys_write+0xe0/0x124
[] syscall_common+0x34/0x58
The process is waiting in vfs_write and n_tty_write (I think this comes from some printf or puts in the top utility). I could kill the top process with kill -9 345, but there was still NO response in the console, so I checked the login shell process. * The status of process 210 (the login shell and the parent of top):
# cat /proc/210/status
Name:   sh
Umask:  0022
State:  S (sleeping)
Tgid:   210
Ngid:   0
Pid:    210
PPid:   1
......
voluntary_ctxt_switches:        236
nonvoluntary_ctxt_switches:     45

# cat /proc/210/stack
[] wait_woken+0x74/0x94
[] n_tty_write+0x480/0x4f0
[] file_tty_write.isra.36+0x1c8/0x358
[] vfs_write+0x3e8/0x4d8
[] ksys_write+0xe0/0x124
[] syscall_common+0x34/0x58
The login shell is also stuck in vfs_write and not being scheduled. I have to kill -9 210 to bring the login shell back. I can definitely reproduce this issue with a Windows suspend/resume. I went through the long list of kernel commits on tty, but I did NOT find the same issue or a fix. So what is the cause of this serial console hang and how can it be fixed? Or where should I post this issue or bug for help?
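A writer blocked in n_tty_write is consistent with the UART waiting on flow control (for example CTS dropped while the Windows host was suspended). The following is only a diagnostic sketch under that assumption, using the /dev/ttyS2 device from the question; it is not a confirmed fix:

#!/bin/sh
# Inspect the console port: look for crtscts (hardware flow control) and
# ixon/ixoff (software flow control) in the output.
stty -F /dev/ttyS2 -a

# If writers hang whenever the remote side stops asserting CTS or sending XON,
# relaxing flow control and ignoring modem-control lines may unblock them.
stty -F /dev/ttyS2 clocal -crtscts -ixon -ixoff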
wangt13 (631 rep)
Feb 21, 2025, 08:53 AM • Last activity: Feb 23, 2025, 01:53 AM
44 votes
15 answers
47490 views
How to make sure only one instance of a bash script runs?
A solution that does not require additional tools would be preferred.
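A minimal sketch of the usual flock(1) approach; flock ships with util-linux, so on most Linux systems it does not count as an additional tool. The lock file path is illustrative:

#!/bin/bash
# Single-instance guard: the kernel releases the lock automatically when the
# script exits or is killed, so there is no stale lock file to clean up.
exec 9>/tmp/myscript.lock          # open (or create) the lock file on fd 9
if ! flock -n 9; then              # try to take an exclusive lock, non-blocking
    echo "Another instance is already running." >&2
    exit 1
fi

# ... the rest of the script runs while holding the lock ...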
Tobias Kienzler (9574 rep)
Sep 18, 2012, 11:04 AM • Last activity: Feb 9, 2025, 10:05 AM
0 votes
0 answers
30 views
Delayed systemd slock autolock
I've been playing with this autoslock.service file:
[Unit]
Description=Lock the screen on suspend
+Before=sleep.target

[Service]
User=garrett
Environment=DISPLAY=:0
ExecStart=/usr/local/bin/slock

[Install]
WantedBy=sleep.target
It almost works. The problem is that there's a brief period when the screen wakes up where its contents are visible before slock is triggered. What do I need to do to fix it? I would think there'd be a way for slock to be invoked immediately prior to sleep instead of after waking up, but I haven't been able to crack it.
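One common approach, sketched below under the assumption that the xss-lock package is installed: xss-lock registers a logind sleep inhibitor and starts the locker before the machine actually suspends, so the screen is already locked when it wakes up.

# Run once inside the X session, e.g. from ~/.xinitrc or an autostart entry.
xss-lock -- /usr/local/bin/slock &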
kmaximus (1 rep)
Jan 1, 2025, 10:25 PM
5 votes
2 answers
4243 views
zsh: locking failed for ~/.cache/zsh/zsh_history: file exists
I have a ~/.cache/zsh folder bind-mounted between multiple hosts with rw and defaults options when doing the mount. When I start both machines and zsh tries to lock zsh_history, it gives the error zsh: locking failed for ~/.cache/zsh/zsh_history: file exists on one of the machines. I googled, but it seems nobody has gotten this kind of message before. When and why does zsh give this message? How do I make it happy?
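That message comes from zsh's ad-hoc lock file created next to the history file. A sketch of one thing to try, assuming fcntl() locking behaves sanely across the bind mount (not a guaranteed fix):

# ~/.zshrc
setopt HIST_FCNTL_LOCK   # lock the history file with fcntl() instead of a separate lock file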
raring-coffee20 (1855 rep)
Nov 7, 2019, 02:27 AM • Last activity: Dec 29, 2024, 02:07 AM
0 votes
0 answers
48 views
Linux kernel locking documentation. PREEMPT_RT caveats section question
At the start of this section the documentation says: > local_lock on RT > > The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a > few implications. For example, on a non-PREEMPT_RT kernel the > following code sequence works as expected: > > local_lock_irq(&local_lock); > raw_spin_lock(&lock); > > and is fully equivalent to: > > raw_spin_lock_irq(&lock); > > On a PREEMPT_RT kernel this code sequence breaks because > local_lock_irq() is mapped to a per-CPU spinlock_t which neither > disables interrupts nor preemption. So far so good: the first code example disables interrupts and takes a raw spinlock, and on a non-PREEMPT_RT kernel it is equivalent to the second one. But the remark after the last code block says that local_lock_irq() does not behave that way on PREEMPT_RT kernels, so on those kernels we cannot use the first code example in place of the second. I would therefore expect the documentation to continue with something like "on a PREEMPT_RT kernel you must use the second code example". BUT, the documentation says this instead: > The following code sequence works perfectly correct on both PREEMPT_RT > and non-PREEMPT_RT kernels: > > local_lock_irq(&local_lock); spin_lock(&lock); What? On a PREEMPT_RT kernel local_lock_t is a typedef of spinlock_t, and local_lock_irq maps to __local_lock, which disables migration and calls spin_lock on this CPU's lock variable. After that we call spin_lock (again) on another lock variable. Why should we do this? I don't understand what the author wants to tell me. Thanks. **UPD:** If there is no answer, then I will try myself. Since lock and raw_lock are the same on a non-PREEMPT_RT kernel, I think the author wants to say that on PREEMPT_RT you don't need the raw variant, because it would be a whole extra layer of locking there; just use the plain lock instead and in most cases everything works as needed. local_lock_irq disables migration and takes this CPU's lock variable, then spin_lock takes the general lock, and we get an ordered flow of execution. Correct me if I'm wrong.
nx4n (111 rep)
Nov 10, 2024, 12:16 PM • Last activity: Nov 12, 2024, 04:59 AM
1 vote
1 answer
2517 views
Function Lock. can't disable
I'm using a Feker Machinist 01. On both Manjaro and Linux Mint (both running Cinnamon), the function keys are "locked" to those silly laptop features like enabling/disabling wifi, adjusting volume, adjusting screen contrast etc. I don't want any of this nonsense; I just want the function keys to behave like function keys. I note that this is not a problem on Windows 10: there the function keys behave as they should (e.g. F5 in Firefox triggers a page refresh, rather than turning off my screen like it does on Linux).
TheIronKnuckle (143 rep)
Nov 20, 2020, 09:59 AM • Last activity: Nov 4, 2024, 04:34 PM
0 votes
1 answer
59 views
Distributed locks of file-backed mmap()-ed memory over network block devices
There are various ways to access block devices over a network, like * nbd, alias network block device * iscsi, alias scsi over tcp * or simply mmap()-ing files mounted on an nfs or cifs network share. I am now thinking about a process mmap()-ing such block devices and then flock()-ing them, or part of them. Obviously, as they are on a network share, such memory locks can happen concurrently on multiple devices. What I am trying to do is *to distribute and handle these concurrent locks even across the network*, i.e. if a lock happens on one node, it should also affect the other nodes. It probably requires some RPC-based protocol, which is likely not trivial, but possible to develop. Does such a thing exist?
peterh (10448 rep)
Oct 16, 2024, 02:16 PM • Last activity: Oct 16, 2024, 03:48 PM
0 votes
1 answer
38 views
Is mmap holding a reference to the OFD specified by POSIX, or a Linux extension, and where is it documented?
I am using Open File Description (OFD) owned locks on Linux (fcntl with command F_OFD_SETLK). After locking a file, I memory mapped it, and closed the file descriptor. Another process tried to lock the same file, and was unable to do so until the first process unmapped the memory. It seems Linux, at least, keeps a reference to the open file description when a mapping is still active. POSIX.1-2024 documents that [mmap](https://pubs.opengroup.org/onlinepubs/9799919799/functions/mmap.html) adds a reference to the "file associated with the file descriptor". > The mmap() function shall add an extra reference to the file associated with the file descriptor fildes which is not removed by a subsequent close() on that file descriptor. This reference shall be removed when there are no more mappings to the file. A literal interpretation here would mean that the reference is to the file itself, but I don't know if that was the intent when the documentation was written. I would like to be able to rely on this behavior. Is there somewhere in POSIX where it's specified that I am missing? Could this be a defect report? If it's Linux exclusive, is there a reference anywhere that this was their intended behavior (and, possibly, their interpretation of the POSIX standard)? Test program (might require different feature test macros on other platforms):
#define _GNU_SOURCE

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main()
{
	char filename[] = "/tmp/ofd-test.XXXXXX";
	int fd = mkstemp(filename);
	if (fd < 0) {
		perror("mkstemp");
		return 1;
	}

	fprintf(stderr, "created file '%s'\n", filename);

	struct flock lock = {
		.l_len = 0,
		.l_pid = 0,
		.l_whence = SEEK_SET,
		.l_start = 0,
		.l_type = F_WRLCK,
	};
	if (fcntl(fd, F_OFD_SETLK, &lock) < 0) {
		perror("first lock");
		return 1;
	}

	void *ptr = mmap(0, 1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
	if (ptr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	close(fd);

	int newfd = open(filename, O_RDWR);
	if (newfd < 0) {
		perror("re-open");
		return 1;
	}

	lock.l_pid = 0;
	if (fcntl(newfd, F_OFD_SETLK, &lock) == 0) {
		fputs("locking after mmap worked\n", stderr);
		return 1;
	}
	perror("locking after mmap");

	munmap(ptr, 1024);

	lock.l_pid = 0;
	if (fcntl(newfd, F_OFD_SETLK, &lock) < 0) {
		perror("locking after munmap");
		return 1;
	}
	fputs("locking after munmap worked\n", stderr);

	if (unlink(filename) < 0) {
		perror("unlink");
		return 1;
	}

	return 0;
}
For me, this outputs:
created file '/tmp/ofd-test.Pyf3oj'
locking after mmap: Resource temporarily unavailable
locking after munmap worked
Érico Rolim (35 rep)
Sep 24, 2024, 03:10 PM • Last activity: Sep 24, 2024, 04:08 PM
10 votes
3 answers
2685 views
How can I tell when a process has finished writing to a file?
I have a process which has been spawned from a shell. It is running as a background process and exporting a DB to a CSV file in /tmp. How can I tell when the background process has completed (finished / quit) or if the CSV file lock has closed? I plan to FTP the file to another host once it's written, but I need the complete file before I start the file transfer.
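Two sketches, depending on where the exporter was started; the exporter command and CSV path are illustrative, not from the question:

#!/bin/sh
# Case 1: the export was started from this shell as a background job --
# `wait` blocks until it exits, which also means the file is fully written.
my_db_export > /tmp/export.csv &   # hypothetical exporter
pid=$!
wait "$pid"
echo "export finished, safe to FTP"

# Case 2: the export was started elsewhere -- poll until nothing has the
# file open any more (lsof exits non-zero once no process holds it).
while lsof /tmp/export.csv >/dev/null 2>&1; do
    sleep 5
done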
user34210 (101 rep)
Sep 25, 2014, 02:38 PM • Last activity: Aug 14, 2024, 03:07 AM
50 votes
6 answers
52537 views
Gedit won't save a file on a VirtualBox share: Text file busy
I have a text file that I can change using other applications (for example openoffice). But when I try to change and save it using gedit, I get an error from gedit: Could not save the file /media/sf_Ubuntu/BuildNotes.txt. Unexpected error: Error renaming temporary file: Text file busy The permissions of BuildNotes.txt are as follows: -rwxrwx--- 1 root vboxsf 839 2012-10-26 12:08 BuildNotes.txt and the user id is: m@m-Linux:/media/sf_Ubuntu$ id uid=1000(m) gid=1000(m) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),119(admin),122(sambashare),1000(m),1001(vboxsf) What is the problem and how can I fix it?
user654019 (2367 rep)
Oct 26, 2012, 11:15 AM • Last activity: Jul 4, 2024, 04:41 AM
434 votes
9 answers
1581216 views
How to get over "device or resource busy"?
I tried to rm -rf a folder, and got "device or resource busy". In Windows, I would have used LockHunter to resolve this. What's the Linux equivalent? (Please give as an answer a simple "unlock this" method, not complete articles like this one. Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)
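The closest simple equivalent is asking fuser/lsof which processes keep the directory busy; a sketch with an illustrative path:

#!/bin/sh
DIR=/path/to/stubborn/dir      # illustrative path

mount | grep "$DIR"            # is something mounted on or below it?
fuser -vm "$DIR"               # processes using the filesystem it lives on
lsof +D "$DIR"                 # processes with files open anywhere under it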
ripper234 (32413 rep)
Apr 13, 2011, 08:51 AM • Last activity: Jun 30, 2024, 09:53 PM
0 votes
1 answer
61 views
Inhibit scheduled job while package installation is in progress
On a Debian-based system I have a scheduled job (e.g. cron job, or systemd timer/service) which runs every 30 minutes. However, I don’t want that to happen concurrently with packages being installed or updated. Package installation can happen manually or on a schedule, but in the latter case there is a considerable randomized delay. I could adapt the schedule for my job to not interfere with a potential package update (whether something is actually installed or not), and remember to disable the job when installing packages manually, and remember to enable it afterwards – but that is not really satisfactory. So I am looking for a reliable way to tell whether package installation is in progress, so my job can check for it and exit (or delay execution) if that is the case. If repository information is being updated or packages are being downloaded in the background concurrently with my job, that is not really an issue, but my job should not run while installation is happening (copying of files, configuration, pre/post-install scripts and similar). On OPNsense (which is FreeBSD based) the system updater acquires a lock on a particular file, so I have wrapped my job inside flock. If an update is in progress, my job would be skipped. If an upgrade were to be triggered while my job is running, presumably the upgrade would fail with a message indicating another update is in progress. I am wondering if apt on Debian has something similar, such as a lock file that I can check on. If so – is that mechanism exclusive to a particular package management frontend or would it work with all of the standard tools for .deb packages (e.g. dpkg, apt, aptitude, synaptic and the like)? I see that when Synaptic is open and I try to run sudo apt-get upgrade, I get: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 1234 (synaptic) N: Be aware that removing the lock file is not a solution and may break your system. E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it? However, sudo flock -n /var/lib/dpkg/lock-frontend sleep 10 || echo File is locked succeeds (i.e. flock returns true, indicating that I have obtained the lock, sleep executes while echo does not) even while Synaptic is running (though not currently installing anything). Same behavior with /var/lib/dpkg/lock. So how can I obtain a “lock on package installation”?
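apt and dpkg take fcntl() (POSIX) locks on /var/lib/dpkg/lock and lock-frontend, and fcntl() locks are independent of flock() locks, which is why the flock test above still succeeds. A sketch of a guard for the scheduled job that instead checks who keeps those files open (fuser needs enough privileges to see other users' processes):

#!/bin/sh
# Skip this run while any package-management process (apt, dpkg, aptitude,
# synaptic, unattended-upgrades, ...) still has the dpkg lock files open.
if fuser -s /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock 2>/dev/null; then
    echo "package installation in progress, skipping this run" >&2
    exit 0
fi

# ... the real 30-minute job goes here ...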
user149408 (1515 rep)
May 26, 2024, 10:54 AM • Last activity: May 31, 2024, 03:29 PM
2 votes
1 answer
58 views
How does fcntl provide byte range locks?
fcntl (https://man7.org/linux/man-pages/man2/fcntl.2.html) supports locking a portion of a file (specified by start position and length). Behind the scenes, what does the algorithm look like? I'm primarily looking for information about which lock type(s) it uses, how it uses ranges (does it use a prefix tree?), etc.
Novice User (1429 rep)
May 16, 2024, 04:11 AM • Last activity: May 16, 2024, 06:54 AM
1 vote
1 answer
3358 views
A VNC server is already running, but no lock file for that
Although there is no X11 lock on display 12, vncserver cannot start that display number because of a VNC server supposedly already running as :12. # rm /tmp/.X12-lock rm: cannot remove ‘/tmp/.X12-lock’: No such file or directory # rm /tmp/.X11-unix/X12 rm: cannot remove ‘/tmp/.X11-unix/X12’: No such file or directory # ps aux | grep Xvnc | grep :12 # # su - metal3 -c "vncserver -geometry 1300x900 :12" A VNC server is already running as :12 New 'hpc.test.com:10 (metal3)' desktop is hpc.test.com:10 Starting applications specified in /home/metal3/.vnc/xstartup Log file is /home/metal3/.vnc/hpc.test.com:10.log Is there any other lock file that I have missed and should remove?
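Besides /tmp/.X*-lock and the X socket, the vncserver wrapper keeps its own per-display bookkeeping under ~/.vnc, and that is usually what makes it report a session as already running. A sketch, assuming a TigerVNC-style vncserver:

#!/bin/sh
# Run as the session owner (metal3 in the question).
ls -l ~/.vnc/*.pid ~/.vnc/*.log 2>/dev/null   # per-display pid/log files kept by the wrapper
vncserver -list                               # what the wrapper believes is running
vncserver -kill :12                           # clear the (possibly stale) :12 entry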
mahmood (1271 rep)
Jan 4, 2020, 03:20 PM • Last activity: Apr 1, 2024, 06:01 PM
19 votes
4 answers
18154 views
apt-get wait for lock release
If you are running apt-get commands in a terminal and want to install stuff from the software center, the center says it waits until apt-get finishes. I wanted to know if it is possible to do the same on the terminal, i.e., make apt-get on the terminal wait until the lock is released. I found [this link](https://askubuntu.com/questions/132059/how-to-make-a-package-manager-wait-if-another-instance-of-apt-is-running), which uses `aptdcon` to install stuff. I would like to know: - Is it really not possible to do with apt-get? - Is aptdcon compatible with apt-get, i.e., can I use both to install stuff without borking the system?
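Two sketches: newer APT releases (roughly 1.9.11 onwards) can wait for the lock themselves via the DPkg::Lock::Timeout option, and on older releases a small polling wrapper achieves roughly the same; the package name is illustrative:

#!/bin/sh
# Option 1: let apt-get itself wait up to 10 minutes for the locks
# (requires an APT version that understands DPkg::Lock::Timeout).
sudo apt-get -o DPkg::Lock::Timeout=600 install some-package

# Option 2: poll until nothing holds the dpkg lock files, then install
# (works with any apt-get version; fuser needs root to see other users).
while sudo fuser -s /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock 2>/dev/null; do
    sleep 5
done
sudo apt-get install some-package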
Camandros (493 rep)
Nov 12, 2015, 10:59 AM • Last activity: Feb 9, 2024, 11:07 AM
1 vote
1 answer
72 views
Unix-esque partial-file-locking mechanism
Linux and AFAIK most unixes expose the flock syscall for mandatory file-locking. My experience with this is admittedly limited, however I am informed that it is kernel-enforced on the entire resource. But what if I wanted to lock only a part of a file mandatorily, such that reads/writes to the resource are permitted as long as they don't cross a particular boundary, or stop reading once they reach an outer bound of the locked region. Is this possible? ## Edit: Possible Implementation A possibility for advisory partial locking may be via mmap()ed regions, where the memory is mapped into each reader's address space on the condition that the requested region does not hold a lock. This would be implemented entirely in user space, and thus loses the advantages of kernel-enforced locking, but would certainly work.
J-Cake (229 rep)
Sep 3, 2023, 12:33 PM • Last activity: Sep 3, 2023, 01:50 PM
1 vote
1 answer
1679 views
lock file not removed when trap is inside an "if"
I put my trap inside an if and ran the script; on a second execution it warns that the lock file is held (ok). But when I kill -9 the running PID, the lockfile is not removed. When I move the trap before the if (you can see it commented out below): 1. then when I kill -9 the PID, the lockfile is deleted (ok) 2. but when I run additional executions of the script, only the first one warns, because after that first additional run the lockfile is removed by the trap on EXIT! How do I get the trap inside the if to remove the lockfile on kill -9 of the script?
lockfile=/tmp/localfile
#trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT KILL
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null;
then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT KILL
    while true
    do
        ls -ld ${lockfile}
        sleep 1
    done
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile."
    echo "Held by $(cat $lockfile)"
fi
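SIGKILL cannot be trapped at all, so no placement of the trap will remove the lock file on kill -9. A sketch of the same script rewritten around flock(1), where the lock is held by the kernel and vanishes with the process, so there is nothing left to clean up:

#!/bin/bash
# kill -9 (SIGKILL) never runs a trap, so instead of cleaning up a lock *file*,
# hold a kernel lock that disappears together with the process.
lockfile=/tmp/localfile

exec 9>"$lockfile"                 # keep an fd open on the lock file
if ! flock -n 9; then
    echo "Failed to acquire lockfile: $lockfile."
    exit 1
fi

while true
do
    ls -ld "$lockfile"
    sleep 1
done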
DonJ (371 rep)
Mar 22, 2018, 10:57 AM • Last activity: Aug 24, 2023, 09:09 AM
11 votes
1 answer
40123 views
NFS mount: Device or resource busy
I referred to the following link, and the solution works: https://unix.stackexchange.com/questions/11238/how-to-get-over-device-or-resource-busy The above solution works when you are manually deleting the file. But I have a Python script that deletes the files (an automated process). Sometimes I get a "Device or resource busy" error when the script tries to delete the files. Consequently, my script fails. I don't know how to resolve this issue from my Python script. **EDIT:** The script downloads the log files from a log server. These files are then processed by my script. After the processing is done, the script deletes these log files. I don't think that there is anything wrong with the design. Exact error: OSError: [Errno 16] Device or resource busy: '/home/johndoe/qwerty/.nfs000000000471494300000944'
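The .nfsXXXX name is the NFS client's "silly rename": some local process still has the (already deleted) file open, and the remove only succeeds after that process closes it. A diagnostic sketch, using the path from the error message:

#!/bin/sh
# Which local process still holds the silly-renamed file open?
lsof /home/johndoe/qwerty/.nfs000000000471494300000944

# More generally: anything holding files open under that directory.
lsof +D /home/johndoe/qwerty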
Touchstone (211 rep)
Mar 1, 2017, 05:10 AM • Last activity: Jul 7, 2023, 06:50 AM
0 votes
0 answers
306 views
What's the best way to lock a directory from remove & edits?
I have a few directories that I want to lock. This lock should provide the following: 1) Disallow anyone (including myself) from removing anything from that directory without a password 2) Disallow anyone (including myself) from changing existing files in that directory and its subdirectories without a password 3) It shouldn't prevent new files from being added by myself, but should prevent others (but not root) from adding them. I am using CentOS Linux 7 (Core). How can I achieve this behavior?
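Plain permissions cannot stop root, and nothing password-protects individual rm calls, but the sketch below gets close under the assumption of an ext*/XFS filesystem where chattr works: the immutable attribute blocks edits, renames and deletes of existing files (even by root, until root clears it again), while ordinary directory permissions cover point 3. The path is illustrative:

#!/bin/sh
DIR=/srv/protected            # illustrative path

# Point 3: only the owner (and root) may create new entries; others may read.
chmod 755 "$DIR"

# Points 1 and 2: make existing files immutable -- they cannot be modified,
# renamed or deleted until the attribute is removed with `chattr -i` (root only).
find "$DIR" -type f -exec chattr +i {} +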
David (173 rep)
Apr 20, 2023, 06:43 PM • Last activity: Apr 20, 2023, 06:51 PM