Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0
votes
1
answers
43
views
flock not working between forks on the same fd
I am working on a process that forks several times. To debug it, I am using a debug file for which I open a fd and which stays the same for all child forks. Then I have a function print_debug, that will print to that fd.
Now when the main process and the children are printing at the same time, the output is intertwined in the debug file. I tried to solve that via a `flock(fd, LOCK_EX)`, but that seems to have no effect. Here is an excerpt from the print_debug function:
void print_debug(int fd, char *msg)
{
flock(fd, LOCK_EX);
print_debug2("... LOCK ACQUIRED ...");
... printing msg ...
flock(fd, LOCK_UN);
}
Now when several forks print at the same time, the output looks like this:
--12692-- ... LOCK ACQUIRED ...
--12692-- fork no wait with fd_in 4, fd_out 1 and fd_close
----121269694-2-- - ..... . LOLOCKCK A ACQCQUIUIRERED D .....
.
----121269694-2-- - exriecghutt e sitrdeee o
f pipe started
--12694---- 12.69..2- - LO..CK. ALOCQCUIK RACED Q..UI.
RE--D 12..69.4-
- --fd12 69ou2-t- ifs orck urwreaintt
ly: 1
----121269692-4-- - ...... L LOCOCK K ACACQUQUIRIREDED . .....
----121269692-4-- - fofdrk o nuto owan itcm wdit ch atfd i_is n cu0,rr fend_tlouy:t 15
and fd_close
--126--9412--69 .6-..- L..OC. K LOACCKQU AICQREUD IR..ED. .
.--.
12--69124-69- 6-er- rnexo ec2
ute tree
--1269--412--69 6-..- . ..LO.CK ALOCCKQU IRACEQUDIR ED.. ..
.--.
12--6129694-6- --c fmdd_e oxutec iuts edcu rrfrenomtl y:ex ec5
Clearly, the printings of "LOCK ACQUIRED" overlap. I also checked the return value of flock, which is always 0 (success).
Does flock work in a situation like this? Why do I still have the issues of garbled messages in the log file?
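A likely explanation (worth checking against the flock(2) man page): flock locks belong to the open file description, not to the process, and fork shares that description, so the parent and all children count as the same lock holder and LOCK_EX never blocks between them. A shell sketch of the fix; in C it is the same idea, each child re-opens the debug file itself:

```shell
# Hypothetical demo: give every writer its own open() of the log file,
# so their flocks really conflict and the line pairs stay together.
log=/tmp/debug.log
: > "$log"
writer() (
  exec 9>>"$log"              # private open file description => private lock
  flock 9
  echo "$1 LOCK-ACQUIRED" >&9
  sleep 0.1                   # widen the race window on purpose
  echo "$1 message" >&9
)
writer one & writer two & writer three &
wait
cat "$log"
```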
Bastian
(25 rep)
May 6, 2025, 08:09 PM
• Last activity: May 11, 2025, 03:36 PM
2
votes
1
answers
79
views
How to share a writable flock file in /tmp/ between two users?
It seems that Linux has tightened up security in /tmp in later kernels than 3.x, and if `/tmp` has the sticky bit set another user may not modify a `0777` file.
Are there any workarounds for sharing a `flock`'ed file? (I cannot create the file as `root` ahead of time, which apparently would work since `root` owns `/tmp`.)
$ ll /tmp/zzz
-rwxrwxrwx 1 games games 1 Aug 23 11:35 /tmp/zzz
$ id
uid=1000(me) gid=1000(me) groups=1000(me)
$ /usr/bin/flock /tmp/zzz ls
flock: cannot open lock file /tmp/zzz: Permission denied
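One hedged workaround sketch (directory name hypothetical, and assuming the sticky-/tmp hardening is the cause): keep the lock in a shared subdirectory that is world-writable but not sticky, so the restriction on /tmp itself no longer applies. The tradeoff: without the sticky bit, any user can also delete the lock file.

```shell
# World-writable but NOT sticky, so the hardened-/tmp check does not apply:
mkdir -p /tmp/shared-locks
chmod 0777 /tmp/shared-locks
touch /tmp/shared-locks/zzz 2>/dev/null || true
chmod 0666 /tmp/shared-locks/zzz 2>/dev/null || true
flock /tmp/shared-locks/zzz true && echo "lock ok"
```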
rrauenza
(852 rep)
Aug 23, 2024, 06:39 PM
• Last activity: Aug 23, 2024, 07:37 PM
-1
votes
1
answers
1603
views
What does "trap: SIGINT: bad trap" mean and how do I fix it?
I moved code to a new virtual server running Ubuntu 22.
I have the following (edited to make names shorter) in my crontab so that it runs **myCommand** only if it is not already running:
*/1 * * * * cusack flock -x -n ~myLock myCommand >> ~my.log 2>&1
When it runs, it puts `trap: SIGINT: bad trap`
in my logfile. It did not do this on the old server. Am I doing something wrong?
My goal is, as alluded to above, to just have one copy of myCommand running at a time. If it is still running from the last time, it should do nothing. Other posts led me to believe that this was a proper way to do this, and it worked on my old server. So why does it give me this error message?
I tried adding a "-E 0" thinking that a 0 might mean everything is OK as it does in some contexts, but that didn't work. I added "-x", which didn't work and shouldn't change anything since according to the documentation it is the default anyway.
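One plausible cause (an assumption, not confirmed by the post): on Ubuntu, /bin/sh is dash, and dash's trap builtin rejects the SIG-prefixed signal names that bash accepts, printing exactly this message. If myCommand (or something it runs) is a `#!/bin/sh` script using `trap ... SIGINT`, moving to the new server could surface it:

```shell
# dash prints "trap: SIGINT: bad trap" for the SIG-prefixed spelling;
# the unprefixed POSIX spelling works in every shell:
sh -c 'trap : SIGINT' 2>&1 || true   # may fail when sh is dash
sh -c 'trap : INT' && echo "portable spelling accepted"
```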
ferzle
(3 rep)
May 29, 2024, 09:17 PM
• Last activity: May 30, 2024, 07:55 PM
2
votes
1
answers
270
views
When I use flock it exits immediately instead of waiting
I was confused for a very long time about the meaning of the `-n` flag for `flock(1)`.
Basically there are many guides for this tool, and often what we see is some command like `flock -n 100`. Here, fd number 100 is associated with some lockfile and used to perform the locking.
Today I kept getting confused because I would do some simple tests, and `flock` would exit with failure immediately.
What exactly does the `-n` flag of `flock` do? Am I right in thinking that `-n 100` associates file descriptor number 100 with some lockfile?
Steven Lu
(2422 rep)
Mar 7, 2024, 06:00 PM
• Last activity: Mar 7, 2024, 06:25 PM
2
votes
1
answers
107
views
Flock and bash strange chicken and egg problem
In which order are the distinct steps of this bash command done:
(flock -n 9) 9> toto.txt
If I do only the subshell part:
(flock -n 9)
I get this result: `flock: 9: Mauvais descripteur de fichier` (Bad file descriptor).
Hence, I would assume that the subshell spawn opens file descriptor 9 first with `(...) 9> toto.txt`.
But If I do:
(ls -l /proc/$$/fd) 9> toto.txt
total 0
lrwx------ 1 laurent laurent 64 déc. 16 00:24 0 -> /dev/pts/2
lrwx------ 1 laurent laurent 64 déc. 16 00:24 1 -> /dev/pts/2
lrwx------ 1 laurent laurent 64 déc. 16 00:24 2 -> /dev/pts/2
lrwx------ 1 laurent laurent 64 déc. 16 00:24 255 -> /dev/pts/2
The file descriptor 9 is not listed.
Hence it seems that flock is responsible for opening it?
Can someone explain what the steps are, and their order, in the "handshake" between the inside of the subshell and the outside of it?
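A hedged walk-through of the order: the parent shell performs the `9> toto.txt` redirection first, then runs the subshell's commands, and flock merely inherits the already-open fd; with `(flock -n 9)` alone nothing has opened fd 9, hence EBADF. The fd did not show in the listing above because `$$` expands to the parent shell's PID even inside the subshell; asking the child process about itself does show it:

```shell
# /proc/self is the process doing the ls, which inherited fd 9 from the
# redirection the shell set up before running the subshell:
(ls /proc/self/fd) 9>/tmp/toto.txt
```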
Laurent Lyaudet
(137 rep)
Dec 16, 2023, 12:05 AM
• Last activity: Dec 16, 2023, 11:48 AM
-2
votes
1
answers
323
views
Are there any plans for Linux to add higher-level things like Windows' WaitForMultipleObjects?
WaitForMultipleObjects is one of several Windows kernel functions that can suspend a calling thread and synchronize it with other threads until resources etc. are available, similar to flock on Linux, except that it handles everything but file locking.
WaitForMultipleObjects supports an array of events (can be a mixture of change notifications, console input, events, memory notifications, mutexes, processes, semaphores, threads, and timers), a timeout or polling option, and an AND/OR option and reports which fired first, and it can be used independently by multiple threads at once without knowledge of each other.
(I was looking for an IPC lock with timeout; things like using SIGALRM with flock were suggested, which I can't risk using because SIGALRM might be in use in other multi-threaded libraries I don't have source to. I settled on polling with LOCK_NB and tiny sleeps, and I am pretty sure I am not losing any "fair lock" benefits.)
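The polling fallback described above can be sketched as repeated non-blocking attempts with short sleeps, giving a timeout without touching SIGALRM. (For the flock(1) CLI, `-w` already provides a timeout directly; and flock(2) documents no fairness guarantee in the first place, so polling likely loses little.)

```shell
# Sketch: keep trying LOCK_NB-style (-n) until a deadline passes.
lock=/tmp/poll.lock
deadline=$(( $(date +%s) + 10 ))     # 10-second budget (arbitrary)
exec 8>"$lock"
until flock -n 8; do
  [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out"; exit 1; }
  sleep 0.1
done
echo "lock acquired"
flock -u 8                           # release when done
```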
Codemeister
(1 rep)
Jan 9, 2022, 06:13 PM
• Last activity: Aug 19, 2023, 09:41 AM
5
votes
3
answers
4799
views
Flocking a filedescriptor in a shell script
I thought this would give me uninterrupted
begin-end
pairs,
but it doesn't:
#!/bin/bash
fun()(
flock 1 || { echo >&2 "$BASHPID: FAIL: $?"; exit 1; }
echo "$BASHPID begin"
sleep 1;
echo "$BASHPID end"
)
fun &
fun &
fun &
fun &
fun &
fun &
fun &
fun &
fun &
wait
What am I doing wrong?
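A plausible explanation (the open-file-description semantics flock(2) describes): every backgrounded fun shares the parent's stdout description, and flock locks belong to the description, so each call already "holds" the lock and returns at once. A sketch where each invocation opens its own lock fd, so the calls actually serialize:

```shell
# Sketch (lock file name hypothetical): each call opens /tmp/fun.lock
# itself, creating a fresh open file description whose lock can conflict.
fun() (
  exec 9>>/tmp/fun.lock
  flock 9 || { echo "$1: FAIL" >&2; exit 1; }
  echo "$1 begin"
  sleep 0.2
  echo "$1 end"
)
fun A & fun B & fun C &
wait
```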
Petr Skocik
(29590 rep)
Jun 28, 2016, 08:41 PM
• Last activity: Apr 7, 2023, 06:00 AM
0
votes
0
answers
216
views
Force running a script using flock?
If given a lock on a script using flock(), is it possible to make a force run/unlock based on some argument passed to the script?
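One common sketch (script layout and option name are hypothetical): decide at startup whether to take the lock at all. For releasing a lock that is already held, `flock -u` on the held fd is the documented unlock.

```shell
# Hypothetical "--force" argument that bypasses the lock entirely:
lock=/tmp/myscript.lock
if [ "${1-}" = "--force" ]; then
  echo "forced: skipping lock"
else
  exec 9>"$lock"
  flock -n 9 || { echo "already running"; exit 1; }
fi
echo "doing the work"
```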
derwian36
(1 rep)
Oct 27, 2022, 08:05 AM
1
votes
0
answers
207
views
Ubuntu user not affected by sticky bit on Ubuntu 22.04
I am experiencing strange behaviour with the sticky bit on the /tmp directory and the flock command. I tried two cases:
Case 1: create the file as the ubuntu user; root has no access to the created file.
ubuntu@:~$ touch -a /tmp/ubuntu_user_created.lck
ubuntu@:~$ flock -n /tmp/ubuntu_user_created.lck -c "echo 123"
123
ubuntu@:~$ sudo flock -n /tmp/ubuntu_user_created.lck -c "echo 123"
flock: cannot open lock file /tmp/ubuntu_user_created.lck: Permission denied
Case 2: create the file as root; both root and the ubuntu user have access to the created file.
ubuntu@:~$ sudo touch -a /tmp/root_user_created.lck
ubuntu@:~$ flock -n /tmp/root_user_created.lck -c "echo 123"
123
ubuntu@:~$ sudo flock -n /tmp/root_user_created.lck -c "echo 123"
123
Permission in the two files:
ls -la /tmp/
total 52
drwxrwxrwt 12 root root 4096 Oct 6 08:08 .
drwxr-xr-x 19 root root 4096 Oct 6 03:42 ..
-rw-r--r-- 1 root root 0 Oct 6 07:56 root_user_created.lck
-rw-rw-r-- 1 ubuntu ubuntu 0 Oct 6 07:54 ubuntu_user_created.lck
I don't understand why the ubuntu user can run the command `flock -n /tmp/root_user_created.lck` successfully, since the file `root_user_created.lck` is owned by root. Does the flock command just open this file in read mode?
And if the flock command only needs read access, why does running the `flock -n /tmp/ubuntu_user_created.lck` command with root privileges return permission denied?
Tien Dung Tran
(131 rep)
Oct 6, 2022, 12:12 PM
0
votes
2
answers
109
views
Shared locking of scripts that may call each other
This is an unusual problem, and it is probably the consequence of bad design. If somebody can suggest anything better, I'd be happy to hear it. But right now I want to solve it as is.
There is a bunch of interacting scripts. It doesn't matter for the sake of the question, but for completeness: these scripts switch an Oracle database standby node between PHYSICAL STANDBY and SNAPSHOT STANDBY, create a snapshot database and add some grants for our reporting team, releasing obsolete archive logs in the process.
There are:
- delete_archivelogs.sh
- switch_to_physical_standby.sh, which also calls delete_archivelogs.sh at the end
- switch_to_snapshot_standby.sh
- sync_standby.sh, which calls switch_to_physical_standby.sh, waits for the standby to catch up and then calls switch_to_snapshot_standby.sh
The last one, sync_standby.sh, is typically run from a cron job, but it should also be possible to run each script at will if the DBA decides to do so.
Each script has lock-file-based protection (via flock) against running twice. However, it is clear that these scripts need shared common locking: for instance, it should be impossible to start switch_to_snapshot_standby.sh (alone) while, say, sync_standby.sh is running, so the DBA won't accidentally run one script while another is working.
Normally I would just configure the same lock file in all scripts. In this case that is not possible, because if sync_standby.sh acquires the lock, the called script won't run.
What is the best way to have shared locking in this case? Is it feasible to implement a command-line switch to skip the locking code and use it in calls from the parent script?
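One hedged sketch (variable and path names hypothetical): flock locks are inherited across fork/exec along with the file descriptor, so the outermost script can take the lock and advertise that fact in an exported variable; called scripts see it and skip locking, while still being protected when run directly:

```shell
# Shared preamble for every script: lock only if no ancestor already did.
lock=/tmp/standby.lock
if [ -z "$STANDBY_LOCKED" ]; then
  exec 9>"$lock"
  flock -n 9 || { echo "another standby script is running"; exit 1; }
  export STANDBY_LOCKED=1      # inherited by any script we call
fi
echo "critical section"
```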
Nikita Kipriyanov
(1779 rep)
Aug 1, 2022, 12:30 PM
• Last activity: Aug 1, 2022, 04:33 PM
0
votes
1
answers
980
views
Testing file locking
I have a script which locks a file to avoid concurrent access to it. How can I execute this same script from two different terminals simultaneously, to check whether it works?
Here is the script:
#!/bin/bash
(
flock -xn 200
trap 'rm /tmp/test_lock.txt' 0
RETVAL=$?
if [ $RETVAL -eq 1 ]
then
echo $RETVAL
echo "file already removed"
exit 1
else
echo "locked and removed"
fi
) 200>/tmp/test_lock.txt
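Two terminals aren't strictly needed: the same race can be staged from one shell by backgrounding two copies of the locked section. A self-contained sketch of the idea:

```shell
# One copy should win the non-blocking lock; the other should lose.
run() (
  flock -xn 200 || { echo "lock busy"; exit 1; }
  sleep 1
  echo "held the lock"
) 200>/tmp/test_lock.txt
run & run &
wait
```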
Rob
(101 rep)
Apr 15, 2022, 01:47 PM
• Last activity: Apr 21, 2022, 06:50 PM
0
votes
2
answers
576
views
Can we tell if a command is being run by a process or not, by looking at the flock lock file alone?
Is util-linux's `flock` implemented on top of `flock()` in the Linux C API?
Can we tell whether a command is currently being run by a process by looking at the lock file alone?
I found that when a command guarded by `flock` finishes running, there seems to be no change to the lock file. Here it is while the command is running and after it finishes:
$ ls -l ../sleep.flock.file
-rw-rw-r-- 1 t t 0 Oct 30 14:01 ../sleep.flock.file
$ ls -l ../sleep.flock.file
-rw-rw-r-- 1 t t 0 Oct 30 14:01 ../sleep.flock.file
Thanks.
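The lock state lives in the kernel, not in the file: flock(2) locks leave no trace in the file's data or metadata, so `ls` cannot show the difference. One way to probe is to try a non-blocking lock (note the probe itself briefly takes the lock when it is free); `lslocks` or `fuser` can also report the holder.

```shell
# Probe sketch: -n fails immediately if some process holds the lock.
if flock -n /tmp/sleep.flock.file true; then
  echo "not locked"
else
  echo "locked"
fi
```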
Tim
(106420 rep)
Oct 30, 2018, 06:09 PM
• Last activity: Dec 20, 2021, 09:11 PM
14
votes
4
answers
12051
views
Pass multiple commands to flock
flock -x -w 5 ~/counter.txt 'COUNTER=$(cat ~/counter.txt); echo $((COUNTER + 1)) > ~/counter.txt'
How would I pass multiple commands to
flock
as in the example above?
As far as I understand, flock
takes different flags (-x for exclusive, -w for timeout), then the file to lock, and then the command to run. I'm not sure how I would pass two commands into this function (set variable with locked file's contents, and then increment this file).
My goal here is to create a somewhat atomic increment for a file by locking it each time a script tries to access the counter.txt
file.
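flock(1)'s `-c` option passes its argument to a shell, which is the standard way to run several commands under one lock; the example above fails without it because flock tries to execute the whole quoted string as a single command name. A sketch with the paths moved to /tmp so it is self-contained (a separate lock file is used here, though locking the counter file itself also works as long as every writer agrees on the path):

```shell
# -c hands its argument to "sh -c", so ';' chains several commands:
incr='COUNTER=$(cat /tmp/counter.txt 2>/dev/null || echo 0); echo $((COUNTER + 1)) > /tmp/counter.txt'
flock -x -w 5 /tmp/counter.lock -c "$incr"
cat /tmp/counter.txt
```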
d-_-b
(1197 rep)
Dec 29, 2013, 08:05 PM
• Last activity: Dec 20, 2021, 08:57 PM
0
votes
1
answers
274
views
Synchronizing access to shared, remote resource
I have a shared cache on a remote server that multiple clients are reading and writing to, so I need to synchronize access to this cache. I imagine I could:
1. SSH into the remote and acquire a flock on the server
2. Push the update to the server (rsync)
3. Release the flock
The flock itself is working, but in order for the entire thing to work I need a way to start a process on the remote that can acquire and hold the lock while I update the cache from the client. The flock should then be released from the client in step 3, or if the connection to the client is lost. Any ideas on how to accomplish this?
Btw: In my current setup it is not possible for the server to connect to the client and 'pull' the update over SSH allowing everything to be handled in a single script executed on the remote.
mola
(101 rep)
Sep 22, 2021, 01:46 PM
• Last activity: Sep 23, 2021, 01:06 PM
0
votes
2
answers
191
views
How to run eval with lockf command?
I have a command which I run via
eval
as shown below.
#! /bin/sh
readonly scr="MYENV=1 sh /tmp/scr.sh"
eval ${scr} -a 1 -b 2
Now I want to run the scr
script with lockf
utility, so I made the following changes:
#! /bin/sh
readonly scr="MYENV=1 sh /tmp/scr.sh"
lockf -k /tmp/f.lock eval ${scr} -a 1 -b 2
This throws the following error:
lockf: eval: No such file or directory
Basically the limitation is the `MYENV=1`, which needs to be set while running the command (thus the use of `eval`).
I'm a beginner in shell programming and am unsure on how to get around this. How do I make this work?
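The error occurs because lockf exec()s its first argument directly and `eval` is a shell builtin, not an executable. One sketch of a fix: let env(1), which is a real executable, set the variable. The demonstration below uses util-linux `flock` so it runs on Linux; with FreeBSD's lockf the shape would be `lockf -k /tmp/f.lock env MYENV=1 sh /tmp/scr.sh -a 1 -b 2` (an untested assumption).

```shell
# env(1) is an external program, so a locking wrapper can exec it;
# it places MYENV in the environment and then runs the command.
flock /tmp/f.lock env MYENV=1 sh -c 'echo "MYENV is $MYENV"'
```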
Rahul Bharadwaj
(235 rep)
Aug 31, 2021, 04:20 AM
• Last activity: Aug 31, 2021, 07:44 AM
1
votes
0
answers
471
views
Is this flock usage with if-else safe?
I have two shell scripts that may be run in parallel but may also be run one at a time. Both scripts rely on an initial setup step that takes a while and can't be run by both at the same time.
To allow for parallel execution of the remainder, I've wrapped the setup step in control flow using `flock`, based on this recommendation.
This _seems_ to work fine, but I'm not sure if the if-else construct is fully safe. Are there any hidden issues here that I'm missing?
set -euxo pipefail
(
# Only run setup if we're the first to acquire the lock
if flock -x -n 200 ; then
: # Time-consuming setup that can't be run in parallel
else
# Wait for the lock to be released and then continue, timeout on 600s
flock -x -w 600 200;
fi
) 200>/tmp/setup.lock
# Rest of the script that relies on setup being done
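A hedged variation (marker-file name hypothetical) that avoids the if/else entirely: every process blocks on the lock, and a marker file ensures the setup runs only once. It also seems to cover one case the if/else leaves open: a run that starts after the first one has finished and released the lock would win `flock -n` and repeat the setup.

```shell
# Everyone queues on the lock; only the first process performs the setup.
(
  flock -x -w 600 200 || exit 1
  if [ ! -e /tmp/setup.done ]; then
    echo "running setup"       # the time-consuming setup goes here
    touch /tmp/setup.done
  fi
) 200>/tmp/setup.lock
```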
Etheryte
(227 rep)
May 17, 2021, 12:15 PM
• Last activity: May 17, 2021, 01:23 PM
13
votes
3
answers
10351
views
Handling of stale file locks in Linux and robust usage of flock
I have a script I execute via cron regularly (every few minutes). However the script should not run multiple times in parallel and it sometimes runs a bit longer, thus I wanted to implement some locking, i.e. making sure the script is terminated early if a previous instance is already running.
Based on various recommendations I have a locking that looks like this:
lock="/run/$(basename "$0").lock"
exec {fd}>"$lock"
flock -n $fd || exit 1
This should call the exit 1 in case another instance of the script is still running.
Now here's the problem: it seems sometimes a stale lock survives even though the script has already terminated. This effectively means the cron job is never executed again (until the next reboot or until the lock file is deleted), which of course is not what I want.
I figured out there's the lslocks command that lists existing file locks. It shows this:
(unknown) 2732 FLOCK WRITE 0 0 0 /run...
The process (2732 in this case) no longer exists (e.g. in ps aux). It is also unclear to me why it doesn't show the full filename (i.e. only /run...). lslocks has a parameter --notruncate, which sounded like it might avoid truncating filenames, but that does not change the output; it's still /run...
So I have multiple questions:
* Why are these locks there and what situation causes a lock from flock to exist beyond the lifetime of the process?
* Why does lslocks not show the full path/filename?
* What is a good way to avoid this and make the locking in the script more robust?
* Is there some way to cleanup stale locks without a reboot?
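A hedged sketch of a more robust preamble, assuming the usual cause of apparently stale locks: the lock dies only when the last fd referring to the open file description closes, so a background child that inherited the lock fd can keep it alive after the script exits. Closing the fd for long-lived children avoids that; a fixed fd number is used here for portability, and the path is moved to /tmp so the example runs anywhere. For cleanup without a reboot, killing whichever process still holds the fd (e.g. found via `fuser` on the lock file) releases the lock.

```shell
# Note the ">" in the exec line (easy to lose), and the closed fd in the child.
lock=/tmp/stale-demo.lock
exec 9>"$lock"
flock -n 9 || exit 1
( exec 9>&-; sleep 1 ) &   # background child WITHOUT the lock fd
echo "running under lock"
```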
hanno
(171 rep)
Jun 20, 2020, 01:47 PM
• Last activity: Mar 18, 2021, 03:56 PM
-2
votes
2
answers
1652
views
is there a really simple and reliable way to create a unique lock (file) on linux? without using `flock`
EDIT: I learned how to use flock for an exclusive lock and how to not mess with it: https://superuser.com/questions/1619940/flock-is-randomly-failing-on-desktop-pc-but-not-on-notebook-could-be-defectiv/ . I think this question is unnecessary, as there is no need to use anything other than flock. But... "Repeated deletion of answered questions can result in your account being blocked from asking. Are you sure you wish to delete?" I won't delete this question so as not to mess up my account. If left here it may also help people understand and use flock. Feel free to delete it, though.
---
OLD UNNECESSARY QUESTION
For example: I want to create a file lock to prevent a simultaneous backup attempt of the same file, that is run every time I start a new terminal, like on guake that can start several named tabs simultaneously.
I do not want to use `flock`. I have extreme difficulty understanding and trying to use it :(
I think the biggest problem is `flock -x asdf.txt`, where asdf.txt is a real existing file: it gives "flock: bad file descriptor: 'asdf.txt'", and that feels like a not very user-friendly implementation. I got that example from the man page and I am stuck again. It feels like I am not able to explain the problem, but the problem is in my test case (answer): I need a file lock to do things exclusively, and I always have a hard time trying to do that with flock...
Aquarius Power
(4537 rep)
Jan 13, 2021, 07:42 PM
• Last activity: Jan 22, 2021, 10:09 PM
0
votes
1
answers
207
views
rsync script work on CentOS 7, same script doesn't work on RHEL 7
I have a VM cluster with 3 nodes on CentOS 7 and one node on RHEL 7. There is a directory where rsync is enabled:
/mnt//portal/wso2telcohub-3.0.2/repository/deployment/server/synapse-configs/default/api/
incrontab is set as below.
$ incrontab -l
/mnt//portal/wso2telcohub-3.0.2/repository/deployment/server/synapse-configs/default/api/ IN_MODIFY,IN_ATTRIB,IN_CREATE,IN_DELETE /mnt/rsync/rsync-for-carbon-depsync.sh
rsync script with debug enabled
#!/bin/sh -ex
#source folder
portal=/mnt//portal/wso2telcohub-3.0.2/repository/deployment/server/synapse-configs/default/
#Destination folder
gateway=/mnt//gateway/wso2telcohub-3.0.2/repository/deployment/server/synapse-configs/default
LOG=/log/rsync/carbon-rsync-logs/"log-local-$(date +%Y%m%d_%H%M%S).log"
echo "entered the script" >> $LOG
#keep a lock to stop parallel runs
(
echo "entered the flock" >> $LOG
flock -e 10
echo "Obtained the lock" >> $LOG
echo " ========== $(date -Iseconds) Lock acquired by local thread =========== " >> $LOG
rsync --delete -arv $portal $gateway >> $LOG
) 10> /var/rsync/.rsync.lock
echo " ========== $(date -Iseconds) Release Lock acquired by local thread =========== " >> $LOG
Below is the log file
entered the script
entered the script
Basically, whatever change is done in the portal directory should be reflected in the gateway directory.
I created a temp file in the portal directory, but it doesn't appear in the gateway directory.
This is only on the new RHEL 7 VM. The old CentOS 7 VM works fine with the same script.
Pradeep Sanjeewa
(207 rep)
May 6, 2020, 10:17 AM
• Last activity: May 6, 2020, 03:04 PM
0
votes
1
answers
307
views
flock command script fails on Xubuntu 16.04 - can't understand why
#!/bin/bash
(
flock -n 200 || exit 1
# commands executed under lock
sleep 3
echo "TEST"
) 200 > /home/nis/Scripts/lock.txt
Running this script gets me this error:
lock.sh: 7: lock.sh: Syntax error: word unexpected
I don't get why this happens. It works on my QNAP (BusyBox).
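A likely culprit (hedged, since the exact invocation isn't shown): the space after `200` makes it a stray word rather than part of the redirection, which is a syntax error; the `lock.sh: 7:` error prefix also suggests the script is being run with `sh` (dash) rather than bash. A sketch with the redirection attached directly, and the path moved to /tmp so it runs anywhere:

```shell
# "200>file" must be one word; "200 > file" passes 200 as a stray argument.
(
  flock -n 200 || exit 1
  sleep 1
  echo "TEST"
) 200>/tmp/lock-demo.txt
```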
Nis
(1 rep)
Jan 14, 2017, 04:42 PM
• Last activity: May 9, 2019, 09:35 PM
Showing page 1 of 20 total questions