Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
3
votes
1
answers
5513
views
How to run a batch SAS job in Unix SAS?
I have 5 SAS jobs that I need to run sequentially, one after the other.
I typically type `nohup sas filename1.sas &` at the command line and manually check for progress every few hours. If the 1st job completes with no errors, I then start the 2nd job with `nohup sas filename2.sas &`.
Is there SAS code or a Unix command to run them sequentially rather than checking progress manually?
I thought about using a %include statement in a master SAS file; however, I have many loop macros and do-if-then macros which I believe would throw the %include off.
PS. I also need the log and lst files to be printed; typically they are produced automatically by the command above.
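One hedged way to script this (a minimal sketch, assuming batch SAS's usual exit codes of 0 for clean, 1 for warnings, 2 and above for errors, and that the .sas files sit in the current directory; `run_all.sh` is a made-up name):
#!/bin/bash
# run_all.sh -- run the SAS jobs one after another, stopping on the
# first real error; each sas invocation still writes its .log and .lst
for job in filename1.sas filename2.sas filename3.sas filename4.sas filename5.sas; do
    sas "$job"                 # foreground, so the next job waits
    rc=$?
    if [ "$rc" -ge 2 ]; then   # 0 = clean, 1 = warnings only
        echo "$job failed with exit code $rc; stopping." >&2
        exit "$rc"
    fi
done
Started once with `nohup ./run_all.sh &`, it reproduces the manual routine without the checking.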
SuperKing
(81 rep)
Aug 15, 2016, 09:14 PM
• Last activity: Jun 5, 2025, 08:02 PM
5
votes
2
answers
2390
views
Job queueing on a single machine
I have a shiny new server for running simulations on, with a pair of Tesla GPUs and 32 cores, running CentOS 7.2. I'd like for multiple users to be able to submit jobs to the server that get queued up and run when the previous finishes, preferably with some sort of prioritisation system and time limit, like PBS/TORQUE but for a single machine rather than a cluster. I know I can install and configure TORQUE for a single machine, but it seems like overkill - theoretically, the scheduler should only have to run when jobs finish or run overtime. I can probably homebrew a set of scripts, but I was wondering if a solution already exists?
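One existing tool that may fit the single-machine case is Task Spooler; a minimal sketch, assuming the `task-spooler` package is available (the binary is `tsp` on Debian/Ubuntu, `ts` elsewhere). Its queues are per-user, so it covers the queueing and ordering but not cross-user prioritisation:
tsp -S 1                       # let one simulation run at a time
tsp ./simulation --input a     # enqueue jobs; each gets an id
tsp ./simulation --input b
tsp -l                         # list queued/running/finished jobs
tsp -u 2                       # move job 2 to the front of the queue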
Yoshanuikabundi
(51 rep)
Feb 5, 2016, 12:24 AM
• Last activity: May 20, 2025, 12:05 AM
0
votes
1
answers
160
views
SLURM error and output files with custom variable name
I'd like my log files to be named after a variable. Since this isn't possible:
#SBATCH --output some_software.${var}.out
#SBATCH --error some_software.${var}.err
I came across this workaround, but I'm not sure whether it's OK, whether it's wrong, or whether there's a better way:
#!/bin/bash
#SBATCH --job-name my_job
#SBATCH --partition some_partition
#SBATCH --nodes 1
#SBATCH --cpus-per-task 8
#SBATCH --mem 50G
#SBATCH --time 6:00:00
#SBATCH --get-user-env=L
#SBATCH --export=NONE
var=$1
# SLURM doesn't support variables in #SBATCH --output, so handle logs manually
if [ -z "$SLURM_JOB_ID" ]; then
    log_dir="/data/logs"
    sbatch --output="${log_dir}/some_software.${var}.out" \
           --error="${log_dir}/some_software.${var}.err" \
           "$0" "$var"
    exit
fi
/data/software/nextflow run /data/software/some_software/main.nf
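The logic of the workaround is: on the first (login-shell) invocation `$SLURM_JOB_ID` is unset, so the script re-submits itself through `sbatch` with the log names expanded, then exits; inside the allocation the variable is set and the real work runs. A hypothetical invocation, with `run.sh` standing in for the script above:
./run.sh sampleA
# effectively performs:
#   sbatch --output=/data/logs/some_software.sampleA.out \
#          --error=/data/logs/some_software.sampleA.err run.sh sampleA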
Caterina
(103 rep)
May 12, 2025, 10:46 PM
• Last activity: May 13, 2025, 04:09 PM
0
votes
0
answers
92
views
slurm array job, run a specific task only once
I keep overthinking how I can optimize my pipeline. Essentially, I have multiple tools that will be executed on two haplotypes (**hap1** and **hap2**) of a plant species.
The general structure is as follows:
> tree INLUP_00001
INLUP_00001
├── 1.pre_scaff
├── 2.post_scaff
├── INLUP00233.fastq.gz
├── INLUP00233.filt.fastq.gz
├── INLUP00233.meryl
├── hap1
├── hap2
└── hi-c
├── INLUP00001_1.fq.gz
└── INLUP00001_2.fq.gz
(I will have 16 of these INLUP_????? parent directories)
So, with this in mind I organized a job array which reads from the following file
path/to/INLUP_00001/hap1
path/to/INLUP_00001/hap2
path/to/INLUP_00002/hap1
path/to/INLUP_00002/hap2
.
.
.
where I have a variable – ${HAP} – that discriminates which haplotype I'm working on, in which sub-directory the data will be written, and the eventual names of each output. This seems to best optimize runtime and resource allocation.
However, there is a problem with the very first tool I'm using; this application is the one generating both **hap1** and **hap2** and does not accept the ${HAP} variable. In other words, I have no control over the outputs based on my job array list, which will redundantly execute this single command 32 times, not only causing issues but also wasting time and resources...
Is there a way to run this command only once for each INLUP sample while preserving the control over haplotypes via the ${HAP} variable within the job array?
I thought about alternatives with `for` loops applied to all the other tools in the pipeline to accommodate **hap1** and **hap2**, but they ended up making the script overly long in my opinion, and more complex... Also, the resources allocated for the first tool cannot easily be partitioned/assigned to independent **hap1**/**hap2** tasks as they can for the other tools.
Any idea/help is much appreciated. Sorry for the long message; if more context is needed, I can provide a short MWE of the first few commands.
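One pattern that might fit (an untested sketch; `first_tool`, the list file name, and the marker file are stand-ins): let only the hap1 array task of each sample run the shared first step, and make the hap2 task wait on a marker file before moving on to the per-haplotype tools:
LINE=$(sed -n "${SLURM_ARRAY_TASK_ID}p" samples.txt)  # e.g. path/to/INLUP_00001/hap1
HAP=$(basename "$LINE")
SAMPLE_DIR=$(dirname "$LINE")
DONE="$SAMPLE_DIR/.first_tool.done"
if [ "$HAP" = hap1 ]; then
    first_tool --outdir "$SAMPLE_DIR"   # generates both hap1 and hap2
    touch "$DONE"
else
    until [ -e "$DONE" ]; do sleep 60; done   # hap2 task idles until then
fi
# ...per-haplotype tools keep using $HAP as before...
The idling hap2 task does burn its allocation while it waits; a cleaner but more involved alternative is one per-sample job for the first tool, with the array submitted afterwards using `--dependency=afterok`.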
Matteo
(209 rep)
Oct 31, 2024, 01:39 PM
12
votes
3
answers
8949
views
PBS equivalent of 'top' command: avoid running 'qstat' repeatedly
When I run several jobs on a head node, I like to monitor the progress using the command `top`.
However, when I'm using PBS to run several jobs on a cluster, `top` will of course not show these jobs, and I have resorted to using `qstat`. However, the `qstat` command needs to be run repeatedly in order to continue monitoring the jobs. `top` updates in real time, which means I can have the terminal window open on the side and glance at it occasionally while doing other work.
Is there a way to monitor in real time (as the `top` command would do) the jobs on a cluster that I've submitted using the PBS command `qsub`?
I was surprised to find so little after extensive searching on Google.
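The usual answer is to wrap `qstat` in `watch`, which redraws the output at a fixed interval much like `top` does:
watch -n 10 qstat -u "$USER"   # refresh every 10 s; add -d to highlight changes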
Nike Dattani
(257 rep)
Jul 22, 2014, 07:32 AM
• Last activity: Jun 19, 2024, 12:04 PM
1
votes
3
answers
2151
views
Option to cancel job by jobname not ID?
Is it possible to delete multiple TORQUE batch jobs with the same name, instead of typing in each individual job number?
I do not want to use the `qdel -u username` option, as I have other jobs that I want to spare. There are 100+ individual jobs, so I would rather not type in each job number if there's a quicker option!
I found this option online:
~~~
qdel wc_jobname
~~~
But it returns the error:
> qdel: illegally formed job identifier: wc_jobname
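Two ways this is commonly handled (a sketch; the job name is a placeholder). PBS/TORQUE's `qselect` can filter by job name, and the `awk` fallback assumes the default `qstat` layout with the ID in column 1 and the name in column 2:
qdel $(qselect -N wc_jobname)
# or, if qselect is unavailable:
qstat | awk '$2 == "wc_jobname" {print $1}' | xargs qdel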
Leucine
(11 rep)
Nov 21, 2019, 11:51 AM
• Last activity: Jun 9, 2024, 09:39 PM
20
votes
2
answers
35718
views
How to batch resize all images in a folder (including subfolders)?
I have a huge, 12 GB gallery on the server, full of images in various subfolders. Those files are too big and are not used at full resolution. I need to resize all the images down to 820px wide (keeping proportions). So my question is: how can I create some kind of crawling script which would resize all images bigger than 820px and save them back, overwriting the original files?
Hope you can help me :-) Thank you in advance.
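A minimal sketch with ImageMagick (path and extension list are placeholders); the `>` geometry flag means "only shrink images larger than this", and `mogrify` overwrites files in place, so test on a copy of the gallery first:
find /path/to/gallery -type f \( -iname '*.jpg' -o -iname '*.png' \) \
    -exec mogrify -resize '820>' {} +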
G-Gore
(201 rep)
Apr 15, 2015, 01:07 PM
• Last activity: Apr 4, 2024, 03:58 PM
0
votes
1
answers
172
views
Terminate batch job quietly
What signal code do I use with the `kill` command to terminate a background job quietly? Or perhaps there's an environment variable setting in the batch environment?
Specifically, I want to suppress the "[1]+ Terminated *jobname*" message.
Thanks in advance.
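The message comes from the shell's job control rather than from any particular signal, so no signal number silences it. In a script, the usual trick is to reap the job yourself; interactively, `disown` removes the job from the shell's table before the kill. A sketch:
# in a script: wait on the killed PID so the shell never reports it
somejob &
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null
# interactively: disown first, then kill by PID
# disown %1; kill "$pid"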
John-L_.
(3 rep)
Feb 20, 2024, 08:33 PM
• Last activity: Feb 20, 2024, 08:48 PM
1
votes
3
answers
1047
views
wget — download multiple files over multiple nodes on a cluster
Hi there, I'm trying to download a large number of files at once; 279 to be precise. These are large BAM files (~90 GB each). The cluster where I'm working has several nodes, and fortunately I can allocate multiple instances at once.
Given this situation, I would like to know whether I can use `wget` from a batch file (*see* example below) to assign each download to a separate node to carry out independently.
**batch_file.txt**
-O DNK07.bam
-O mixe0007.bam
-O IHW9118.bam
.
.
In principle, this will not only speed things up but also prevent the run from failing, since the wall time for this execution is 24 h, which won't be enough to download all those files consecutively on a single machine.
This is what my BASH script looks like:
#!/bin/bash
#
#SBATCH --nodes=279 --ntasks=1 --cpus-per-task=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#
#SBATCH --job-name=download
#SBATCH --output=sgdp.out
##SBATCH --array=[1-279]%279
#
#SBATCH --partition=
#SBATCH --qos=
#
#SBATCH --account=
#NAMES=$1
#d=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES)
wget -i sgdp-download-list.txt
As you can see, I was thinking of using an array job (not sure whether it will work); alternatively, I thought about allocating 279 nodes, hoping SLURM would be clever enough to send each download to a separate node (not sure about that either...). If you are aware of a way to do this efficiently, any suggestion is welcome.
Thanks in advance!
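The commented-out array is probably the cleaner route: each array task is an independent single-core job that SLURM places wherever there is room, rather than one job demanding 279 nodes at once. An untested sketch, assuming one URL per line in the list file:
#!/bin/bash
#SBATCH --nodes=1 --ntasks=1 --cpus-per-task=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --job-name=download
#SBATCH --output=sgdp.%A_%a.out
#SBATCH --array=1-279
url=$(sed -n "${SLURM_ARRAY_TASK_ID}p" sgdp-download-list.txt)
wget "$url"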
Matteo
(209 rep)
Oct 23, 2023, 11:48 AM
• Last activity: Dec 4, 2023, 06:36 PM
49
votes
9
answers
160364
views
How to download files and folders from OneDrive using wget?
How to use wget to download files from OneDrive? (and batch files and entire folders, if possible)
charles
(974 rep)
Aug 17, 2015, 04:05 PM
• Last activity: Aug 24, 2023, 06:45 AM
0
votes
1
answers
373
views
Batch writing text to speech audio files from .csv file
I am new to using zsh on Mac and am looking for help writing a simple shell script to batch-create WAV audio files from a .csv file.
I have created a .csv file with data in 3 columns: A, B, C. I want text-to-speech to read each line and write that line to a .wav file.
A sample would be:
Folder “TextToConvert”
.csv file “HelloSounds.csv”
A, B, C
Hello, my name is, Fred
Hello, your name is, Anne
Hello, is your name, Charles
I'm not sure how to select the voice or how to name the files to reflect their content.
Any help is appreciated, thanks.
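A minimal zsh sketch using macOS's built-in `say` command; the voice name, the WAV data format, and the filename scheme are all assumptions to adapt (`say -v '?'` lists the installed voices):
#!/bin/zsh
# speak each row of HelloSounds.csv into its own WAV file
while IFS=, read -r a b c; do
  [[ "$a" == "A" ]] && continue           # skip the header row
  say -v Samantha -o "${c## }.wav" --data-format=LEI16@22050 -- "$a $b $c"
done < HelloSounds.csv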
Zolholb
(1 rep)
Aug 17, 2022, 12:33 AM
• Last activity: Sep 26, 2022, 03:32 AM
0
votes
2
answers
883
views
Zsh script to batch process filenames containing spaces and substitute extension
Is there a zsh script that can take a directory of files in one music format and convert them to Ogg Opus format? The *filenames have spaces in their names*.
For example, a directory contains 10 files with *.wma extensions.
Files are converted into `*.wav` format using `ffmpeg -i filename.wma filename.wav`. The `*.wav` files are converted to Opus using `opusenc --bitrate 160 filename.wav filename.opus`.
*Update*: `ffmpeg -i filename.wma -c:a libopus -b:a 128k filename.opus` converts the file with one command.
A partially working script will process filenames in the current directory from .wma to .wav, even with spaces. However, the `.wav` extension is appended rather than replacing the *.wma file extension.
This script was added to a file called `convert`, and that file was made executable:
IFS=$'\n'
for x in *.wma ; do
echo $x
ffmpeg -i "$x" $x.wav
done
Trying to use zsh modifiers to substitute the filename extension with `${x:e}".wav` (and, after a suggestion by ilkkachu, I also tried `${x:r}".wav`):
IFS=$'\n'
for x in *.wma ; do
ffmpeg -i "$x" "${x:e}".wav
done
Calling this from a file called `convert`, the following error is returned:
./convert: 3: Bad substitution
The same error happens with:
IFS=$'\n'
for x in *.wma ; do
ffmpeg -i "$x" "${x:r}".wav
done
I assume the syntax is not quite right, or modifiers do not work when filenames have spaces. Or I still have a lot to learn about zsh :)
Is there a correct way to substitute a filename extension in zsh (when file names contain spaces)?
Thank you.
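For what it's worth, the `./convert: 3: Bad substitution` prefix looks like dash's error format, suggesting the script is being run by `sh` rather than zsh; and `:r` (root), not `:e` (extension), is the modifier that drops `.wma`. A sketch combining this with the one-step ffmpeg command:
#!/bin/zsh
# zsh shebang so the :r modifier is understood; quoting handles the
# spaces, so no IFS tweak is needed
for x in *.wma; do
  ffmpeg -i "$x" -c:a libopus -b:a 128k "${x:r}.opus"
done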
practicalli-johnny
(101 rep)
May 24, 2022, 07:39 AM
• Last activity: May 25, 2022, 05:17 AM
4
votes
0
answers
1271
views
Distinguish between error and "success" in scanimage-batch
I'm running a little script with the `scanimage` batch command on a remote server and would like to know if and how the batch scan has been done. Therefore the script requires a proper "error" description to handle the next steps.
Yet `scanimage` returns a pretty *odd* message:
scanimage: sane_start: Document feeder out of documents
So the whole output looks like this if there was a success:
scanscript "scanimage --device='brother4:net1;dev0' --format tiff --resolution=150 --source 'Automatic Document Feeder(left aligned,Duplex)' -l 0mm -t 0mm -x210mm -y297mm --batch=$(date +%Y%m%d_%H%M%S)_p%04d.tiff" "/home/qohelet/scans/images/281/" "myscan"
scanimage: rounded value of br-x from 210 to 209.981
scanimage: rounded value of br-y from 297 to 296.973
Scanning -1 pages, incrementing by 1, numbering from 1
Scanning page 1
Scanned page 1. (scanner status = 5)
Scanning page 2
Scanned page 2. (scanner status = 5)
Scanning page 3
scanimage: sane_start: Document feeder out of documents
Technically this is correct, yes - but this always happens when the job is done. In case I haven't put any paper into the feeder, it looks like this:
scanscript "scanimage --device='brother4:net1;dev0' --format tiff --resolution=150 --source 'Automatic Document Feeder(left aligned,Duplex)' -l 0mm -t 0mm -x210mm -y297mm --batch=$(date +%Y%m%d_%H%M%S)_p%04d.tiff" "/home/qohelet/scans/images/281/" "myscan"
scanimage: rounded value of br-x from 210 to 209.981
scanimage: rounded value of br-y from 297 to 296.973
Scanning -1 pages, incrementing by 1, numbering from 1
Scanning page 1
scanimage: sane_read: Error during device I/O
Scanned page 1. (scanner status = 9)
The error 9 is unfortunately just one part of the output. How can I distinguish whether it was thrown or not?
In my `scanscript` I use `if` to evaluate whether or not the scan was successful:
if eval $1; then
#Do stuff
else
#Do error stuff and exit with error code
fi
Unfortunately, when using `scanimage` with a batch it's always counted as a failure.
Is there a way to find out what actually happened?
It seems someone had a similar issue with a different scanner (I have a Brother scanner, but that's not really related to the issue):
http://sane.10972.n7.nabble.com/Issue-with-Fujitsu-ScanSnap-iX500-td18589.html
But the topic was not continued there, and now I'm stuck and would like to know what to do.
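Since the exit status alone can't separate "feeder empty because we're done" from a real failure, one workable sketch (untested) is to capture the output inside `scanscript` and branch on the status lines instead of on `eval`'s return value:
out=$(eval "$1" 2>&1)
printf '%s\n' "$out"
if printf '%s\n' "$out" | grep -q 'Error during device I/O'; then
    echo "scan failed" >&2            # do error stuff
    exit 1
elif printf '%s\n' "$out" | grep -q '^Scanned page'; then
    echo "scan finished"              # pages came through; "out of documents" just means done
fi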
Qohelet
(527 rep)
Jun 16, 2017, 08:05 PM
• Last activity: May 18, 2022, 04:51 PM
7
votes
4
answers
2297
views
atd, batch // Setting the load limiting factor
I am launching non-interactive jobs using `batch`, and I would like to increase the load limiting factor in order to use all 8 of my cores. I am on Ubuntu 16.04 LTS.
From what I understand, `batch` uses `atd` to run the jobs. Jobs start when the load average goes under a threshold, called the *load limiting factor*. The man page of `atd` says this factor can be changed using the `-l` option.
My question: how can I use this `atd -l XX` option? When I type, for instance, `atd -l 7.2` before `batch`, it doesn't seem to change anything.
What I have found so far:
- In this question https://unix.stackexchange.com/questions/292555/how-to-run-bash-script-via-multithreading/292598#292598 one contributor proposes to do this in the '`atd` service starting script'. I guess that refers to /etc/init.d/atd, but I do not know what to change there; cf. the next bullet point.
- I have found pages, such as this one http://searchitchannel.techtarget.com/feature/Understanding-run-level-scripts-in-Fedora-11-and-RHEL , where they propose to "modify the following line (in the start section) of the /etc/init.d/atd script: `daemon /usr/sbin/atd`. Replace it with this line, using the `-l` argument to specify the new minimum system load value: `daemon /usr/sbin/atd -l 1.6`". However, there is no such line in /etc/init.d/atd.
It seems that the option could be introduced in /etc/init.d/atd, but I do not know where. I have never changed such files.
So, how can I change the load limiting factor used by the `batch` command?
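On Ubuntu 16.04, atd is started by systemd rather than by /etc/init.d/atd, which is why editing the init script leads nowhere. A sketch of the systemd route, assuming the stock unit starts `/usr/sbin/atd -f`:
sudo systemctl edit atd      # opens an override file; add these lines:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/sbin/atd -f -l 7.2
sudo systemctl restart atd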
ciliou
(91 rep)
Aug 13, 2016, 01:58 PM
• Last activity: Mar 2, 2022, 05:06 PM
0
votes
1
answers
1358
views
zsh: bad subscript for direct array assignment: 0
I have the script below. My goal is to iterate over multiple files in the directory. The files will be named **batch_1, batch_2, batch_3, batch_4**, etc. There should be no more than 7 batches.
files=( $(echo 1) )
declare -p files
declare -a files=(="batch_")
for data in ${files[@]}
do
cat ${data} | cut -d , -f2,3 | grep -v "IP" > data_ip_${data}
done
However, when I run this I receive the error **zsh: bad subscript for direct array assignment: 0**.
Does anyone know what causes this, specifically with my script? Or possible solutions? Any advice would help.
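For comparison, a sketch of what the assignment was presumably meant to be; zsh appears to parse the stray `="batch_"` as a ksh-style `[subscript]=value` element with an empty subscript, hence the error. Either glob the existing files or spell the names out:
files=( batch_* )            # all existing batch_ files
# or, explicitly (no more than 7 batches):
files=( batch_{1..7} )
for data in "${files[@]}"; do
    cut -d , -f2,3 "$data" | grep -v "IP" > "data_ip_${data}"
done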
Lam
(1 rep)
Jan 26, 2022, 04:12 AM
• Last activity: Jan 26, 2022, 04:53 AM
1
votes
1
answers
1215
views
Recursive (batch) video codec details with MediaInfo CLI
I want to share my script to do this with the MediaInfo CLI and Python. At first I tried pure bash, but I should have just gone with Python from the start; it was much quicker and more adaptable (for me).
My task was to recursively go through all files in a specified folder (in this case on a NAS), and print, as well as store in a txt file, the video codec and profile level used in each.
The reason: I found some older Samsung TVs won't play H264 with a profile level greater than 4.1, so some re-encoding was in order; also, the latest Samsung TVs have dropped support for Xvid/DivX.
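The script itself didn't survive the import here; as a rough stand-in, the same report can be produced from the shell with MediaInfo's template output (paths and the extension list are placeholders):
find /path/to/NAS/videos -type f \
    \( -iname '*.mkv' -o -iname '*.mp4' -o -iname '*.avi' \) \
    -exec sh -c 'printf "%s\t%s\n" "$1" \
        "$(mediainfo --Inform="Video;%Format% %Format_Profile%" "$1")"' _ {} \; \
    | tee codecs.txt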
Hayden Thring
(272 rep)
Oct 20, 2021, 05:36 AM
• Last activity: Dec 13, 2021, 09:18 AM
0
votes
0
answers
63
views
In middle of the bash script next job/code takes time to trigger if first is completed
I have a script which contains my code:
# scheduling details for abc.sas, first status check after 1 hr , status check interval is 30 mins
execute "claimant.sas"
echo "execute function done for abc.sas"
execution_check "ERA_abc" "3600"
status_check "ERA_abc" "abc.log" "1800" "abc.sas" "3600"
echo "status check done for abc.sas"
echo
echo
# scheduling details for dce.sas, first status check after 1 hr , status check interval is 30 mins
execute "dce.sas"
echo "execute function done for dce.sas"
execution_check "ERA_dce" "3600"
status_check "ERA_dce" "dce.log" "1800" "dce.sas" "3600"
echo "status check done for dce.sas"
echo
echo
......
Now my problem: once I trigger my bash script, ERA_abc starts to run; once it completes, ERA_dce should run immediately, but it sometimes takes an hour to be triggered.
Can you help me find the reason and a solution? Thanks
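Without seeing the helper functions, the likely culprit is that `status_check` only polls every 1800-3600 seconds, so the next job can't start until the next poll fires. A poll-free sketch, assuming each step is ultimately a SAS program:
nohup sas abc.sas & pid=$!
wait "$pid"                  # returns the moment abc.sas exits
nohup sas dce.sas & pid=$!
wait "$pid"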
Sonali Bhatt
(1 rep)
Dec 7, 2021, 04:26 AM
• Last activity: Dec 7, 2021, 04:48 AM
7
votes
4
answers
1171
views
Replacing counter in a filename for all files in a directory
After importing several thousand files from a camera onto a hard drive, I realized that the counter used in the process of renaming the files does not start from 0. This leads to a file structure like this:
My vacation 2018-05-03 2345.jpg
My vacation 2018-05-03 2346.jpg
My vacation 2018-05-04 2347.jpg
I would like to batch rename all those files in such a way that the index starts from 0:
My vacation 2018-05-03 0001.jpg
My vacation 2018-05-03 0002.jpg
My vacation 2018-05-04 0003.jpg
I already went through some topics dealing with batch renaming files and **adding** a counter/index (bash loop) or using **rename/prename**, but I was not able to get a working solution for my case.
Basically, I would like to match the part of the filename with the description and the date using the regular expression `.*(\d\d\d\d\-\d\d\-\d\d){1}` and add a suffix counter at the end.
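A sketch with the Perl `rename` (a.k.a. `prename`); the counter `$::n` persists across files within one invocation, and the glob expands in sorted order, which here matches the date order. `-n` previews without renaming:
rename -n 's/ \d+(\.jpg)$/sprintf(" %04d%s", ++$::n, $1)/e' *.jpg
# My vacation 2018-05-03 2345.jpg -> My vacation 2018-05-03 0001.jpg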
karlitos
(181 rep)
Nov 7, 2021, 04:45 PM
• Last activity: Nov 9, 2021, 05:08 PM
-1
votes
1
answers
40
views
Sync sudo authority to all nodes
I want to submit a task that is interpreted by `/bin/csh`, which only exists on the master node. I have no root permission, only sudo, which is limited to the master node. So I can't run `sudo apt install csh` on each calculation node.
How can I handle this case?
Zhihui
(1 rep)
Sep 24, 2021, 11:18 AM
• Last activity: Sep 25, 2021, 06:03 AM
1
votes
0
answers
1090
views
Parallel job submission of for loop
I have written a `for` loop and parallelized it with `&`, limiting it to running `3` jobs at one time. Below is my script. I am reserving `32` cores and `256 GB` of memory through the `BSUB` directives. The `sample_pipe` I am running inside the `for` loop requires `32` cores and `256 GB` of memory.
I am getting memory failure errors on some jobs. I think I am reserving only `32` cores and `256 GB` while trying to run `3` jobs at a time, which might be causing the memory failures.
My question is how do I parallelize such that all `3` jobs are using the same amount of cores and memory.
I submitted using the command `bsub < example.sh`
#!/bin/bash
#BSUB -J cnt_job # LSF job name
#BSUB -o cnt_job.%J.out # Name of the job output file
#BSUB -e cnt_job.%J.error # Name of the job error file
#BSUB -n 32 # 32 cores
#BSUB -M 262144 # 256 GB
#BSUB -R "span[hosts=1] rusage [mem=262144]"
n=0
maxjobs=3
for sample in $main ; do
for run in $nested ; do
sample_pipe count --id="$run_name" \
--localcores=32 \
--localmem=256 &
done
cd ..
# limit jobs
if (( $(($((++n)) % $maxjobs)) == 0 )) ; then
wait # wait until all have finished
echo $n wait
fi
done
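Two hedged ways out (the `sample_pipe` flags mirror the ones above). Either shrink each pipe so three of them fit inside the single 32-core/256 GB allocation, or drop the inner `&` parallelism and give every sample its own full-size LSF job, letting the scheduler queue them:
# option A: three concurrent pipes share the allocation
sample_pipe count --id="$run_name" --localcores=10 --localmem=80 &
# option B: one bsub per sample, each with its own 32 cores / 256 GB
for sample in $main ; do
    bsub -J "cnt_$sample" -n 32 -M 262144 \
         -R "span[hosts=1] rusage[mem=262144]" \
         "sample_pipe count --id=$sample --localcores=32 --localmem=256"
done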
botloggy
(137 rep)
Sep 22, 2021, 02:14 PM
• Last activity: Sep 22, 2021, 02:28 PM
Showing page 1 of 20 total questions