
Linux server: How does memory get allocated to parallel processes?

2 votes
1 answer
303 views
I'm a complete newbie, so please excuse my ignorance and/or potentially wrong terminology. I'm using an Ubuntu server (accessed through SSH) for brain image processing: one command that chains several programs and takes ~4-5 hours per brain, which I run from the Terminal. Since the server has limited storage (~200 GB) and the brain data are large (2-3 GB input, 500 MB output), I'm constantly downloading processed data and uploading new to-be-processed data using FileZilla.

The brain image processing is quite RAM-intensive and has failed several times due to memory issues, so I'm now doing these two procedures (procedure 1 = brain image processing, procedure 2 = uploading/downloading) separately and manually, i.e., while one is running, I don't start the other. But I was wondering if there's a more efficient way of doing this that still ensures the brain image processing doesn't fail. In a nutshell, I would like procedure 1 to take as much RAM as it needs, with the "rest" allocated to procedure 2. I'm currently assigning procedure 1 all 8 cores, but it only uses all 8 every so often (because of how the program is written).

Is there a way to achieve this, ideally one that still lets me use FileZilla (because it's so fast and simple, though I'm not opposed to uploading/downloading through the Terminal; a sketch of what I mean is below)? For example, might it be the case that whichever process I start first takes "precedence" and just takes whatever memory it needs at a given point in time, while any other processes take what's left? Or how does RAM get allocated between concurrently running processes (especially ones started from different software, if that matters)? I hope all of this made sense. Thanks in advance!
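In case it's relevant, by "uploading/downloading through the Terminal" I mean something like rsync; the username, host, and paths below are just placeholders for my setup:

    # download one processed result from the server (resumable, with a bandwidth cap)
    rsync -avz --partial --bwlimit=20000 myuser@server.example.edu:/data/output/subject01/ ~/brains/processed/subject01/

    # upload the next to-be-processed scan
    rsync -avz --partial --bwlimit=20000 ~/brains/raw/subject02.nii.gz myuser@server.example.edu:/data/input/

(--bwlimit is in KB/s, so 20000 caps the transfer at roughly 20 MB/s; --partial keeps partially transferred files so an interrupted transfer can resume.)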
Asked by Jana (29 rep)
Feb 6, 2022, 11:59 AM
Last activity: Feb 13, 2022, 08:23 PM