I am trying to download a streaming MP3 using wget. This is my basic command:
wget http://sj128.hnux.com/sj128.mp3 -c --timeout=1 --waitretry=0 --tries=0 -O "file.mp3"
I have been running this in a script (which lets it run for one hour), but what I have infuriatingly found is that my file ends up truncated and incomplete. For example, where I would expect the file to be around 30 MB, it would only be something like 13 MB.
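For context, the hour-long wrapper is essentially the following (a minimal sketch, assuming GNU coreutils `timeout` is available to enforce the one-hour limit; the script name `grab_stream.sh` is illustrative, not my actual file name):

```shell
# Sketch of the hour-long wrapper (assumption: GNU coreutils 'timeout'
# is installed; 'grab_stream.sh' is an illustrative name).
cat > grab_stream.sh <<'EOF'
#!/bin/sh
# Stop the whole download after 3600 seconds; within that window,
# wget itself retries forever (--tries=0) and resumes with -c.
timeout 3600 wget http://sj128.hnux.com/sj128.mp3 \
    -c --timeout=1 --waitretry=0 --tries=0 -O "file.mp3"
EOF
chmod +x grab_stream.sh
```

The expectation was that -c plus infinite retries would let the file keep growing across every read timeout until the hour was up.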
I didn't understand what was happening until I ran the command directly from the CLI and saw that I would eventually always run into a "read timeout". That shouldn't be a show-stopper: -c plus infinite retries should handle this FINE.
But instead, after a "read timeout" and a new retry, my file would stop growing even though the download continued.

Why does the download continue but the file stop growing as expected?
I went so far as to create an elaborate script which started a completely new wget under a completely different file name, to avoid any "file"-type conflict. Even though ALL OUTPUT showed a completely different file name and a completely new process, IT STILL DIDN'T WRITE A NEW FILE!
In this case, why does the download appear to commence while my new file doesn't even show up!?
Asked by Low Information Voter
(3 rep)
Feb 7, 2019, 08:31 AM
Last activity: Feb 7, 2019, 04:54 PM