How can I limit the bandwidth used by a process?

58 votes
6 answers
67087 views
I have a CentOS 5.7 server that will be backing up its files nightly. I am concerned that visitors to the various sites the server hosts will experience degraded performance while the backup is transferring across the network.

Is it possible to limit a process's maximum allowed throughput to a network interface? I would like to limit the SSH-based file transfer to only half of my available bandwidth. This could be done on either end; that is, I'd be happy to configure it on the client that initiates the connection or on the server that receives it.

(Unfortunately, I can't add an interface dedicated to backups. I could increase my available throughput, but that would merely make the transfer complete faster while still saturating the connection for its duration.)

----------

## Some Background

Perhaps some background is in order. Stepping back, my original problem was not having enough local space to create the backup itself. Enter SSHFS! The backup is saved to what is ostensibly a local drive (mounted roughly as sketched below), so no backup bits are ever stored on the web server itself.

Why is that important? Because it would seem to invalidate the use of the venerable `rsync --bwlimit`. rsync isn't actually doing the transfer, nor **can** it, because I can't even spare the space to hold the backup file.

I can hear you ask: "So wait, why do you even need to make a backup file? Why not just rsync the source files and folders?" Because an annoying thing called Plesk is in the mix! This server is my client-facing web host, which uses Plesk for convenience. As such, I use Plesk to initiate the backups, because Plesk adds all sorts of extra magic to the backup that makes consuming it during a restoration procedure very safe. *sad face*
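For concreteness, the mount looks something like this (the hostname, user, and paths are placeholders, not my real setup):

```bash
# Minimal sketch of the SSHFS mount (hostname, user, and paths are
# placeholders): the remote backup volume appears as local storage,
# so Plesk can write its backup file there directly.
sshfs backupuser@backuphost:/srv/backups /mnt/backups \
    -o reconnect -o ServerAliveInterval=15
```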
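And for contrast, this is the classic throttled-rsync approach that seems to be off the table here, sketched with hypothetical paths:

```bash
# What I'd do if rsync handled the transfer itself: cap it at roughly
# 4 MB/s with --bwlimit (the value is in KBytes per second).
# Paths and host are hypothetical.
rsync -az --bwlimit=4096 /var/www/vhosts/ backupuser@backuphost:/srv/backups/vhosts/
```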
Asked by Wesley (14723 rep)
Mar 14, 2012, 12:24 AM
Last activity: Mar 25, 2025, 10:17 AM