Shared locking of scripts that may call each other
0
votes
2
answers
109
views
This is an unusual problem, and it is probably the consequence of bad design. If somebody can suggest anything better, I'd be happy to hear it. But right now I want to solve it "as is".
There is a bunch of interacting scripts. The details don't matter for the sake of the question, but for completeness: these scripts switch an Oracle database standby node between PHYSICAL STANDBY and SNAPSHOT STANDBY, create a snapshot database and add some grants for our reporting team, releasing obsolete archive logs in the process.
There are:

- delete_archivelogs.sh
- switch_to_physical_standby.sh, which also calls delete_archivelogs.sh at the end
- switch_to_snapshot_standby.sh
- sync_standby.sh, which calls switch_to_physical_standby.sh, waits for the standby to catch up and then calls switch_to_snapshot_standby.sh
The last one, sync_standby.sh, is typically run from a cron job, but it should also be possible to run each script at will if the DBA decides to do so.
Each script has lock-file-based protection (via flock) against running twice. However, it is clear that these scripts need shared common locking: for instance, it should be impossible to start switch_to_snapshot_standby.sh (alone) while, say, sync_standby.sh is running, so the DBA won't accidentally run one script while another is working.
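The per-script protection mentioned above can be sketched roughly like this (the lock file path and the exact flock idiom are assumptions, not the asker's actual code):

```shell
#!/bin/sh
# Hypothetical per-script guard: take a non-blocking exclusive
# flock on a dedicated lock file, or exit if it is already held.
LOCKFILE=/var/lock/delete_archivelogs.lock   # assumed path

# Open the lock file on file descriptor 9 and try to lock it.
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "$0: another instance is already running, exiting" >&2
    exit 1
fi

# ... actual work goes here, protected by the lock ...
# The lock is released automatically when fd 9 is closed at exit.
```

Because the lock is tied to the open file descriptor, it disappears even if the script is killed, which is why flock is usually preferred over hand-rolled "check for a PID file" schemes.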
Normally I would just configure the same lock file in all scripts. In this case that is not possible, because if sync_standby.sh acquires the lock, the scripts it calls won't run.
What is the best way to have shared locking in this case? Is it feasible to implement a command-line switch to skip the locking code and use it in calls from the parent script?
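One way to sketch the "skip locking when called from the parent" idea is to use a single shared lock file for the whole suite plus an inherited environment variable instead of a command-line switch; children spawned by a lock holder see the variable (and the inherited locked descriptor) and skip re-acquisition. The variable name, lock path, and fd number here are all assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical shared-lock preamble, included at the top of all four
# scripts. STANDBY_LOCK_HELD and the lock path are made-up names.
LOCKFILE=${LOCKFILE:-/var/lock/standby_suite.lock}   # one lock for the suite

if [ "${STANDBY_LOCK_HELD:-0}" != 1 ]; then
    # Top-level invocation: acquire the suite-wide lock or bail out.
    exec 9>"$LOCKFILE"
    if ! flock -n 9; then
        echo "$0: another standby script is running, exiting" >&2
        exit 1
    fi
    # Children inherit both this variable and the locked fd 9,
    # so scripts we call below will skip this whole block.
    export STANDBY_LOCK_HELD=1
fi

# ... script body; may freely call the other scripts in the suite ...
```

With this preamble, running sync_standby.sh from cron takes the lock once, and the switch_to_*_standby.sh scripts it invokes run unhindered, while a DBA starting any script by hand during that window is refused.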
Asked by Nikita Kipriyanov
(1779 rep)
Aug 1, 2022, 12:30 PM
Last activity: Aug 1, 2022, 04:33 PM