
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

74 votes
5 answers
76751 views
Resume failed download using Linux command line tool
How do I resume a partially downloaded file using a Linux command-line tool? I downloaded a large file partially, i.e. 400 MB out of 900 MB, before a power interruption, but when I start the download again it starts from scratch. How do I resume from the 400 MB mark?
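Both wget and curl can resume a partial HTTP(S)/FTP download in place. A minimal sketch (the URL is a placeholder), run from the directory that holds the partial file; it relies on the server supporting range requests, otherwise the transfer restarts from zero:

```bash
# wget: -c / --continue appends to the existing partial file
wget -c http://example.com/big-file.iso

# curl: -C - lets curl work out the resume offset from the local file size
curl -C - -O http://example.com/big-file.iso
```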
amolveer (979 rep)
Nov 4, 2014, 10:29 AM • Last activity: Jun 24, 2025, 06:49 PM
0 votes
0 answers
34 views
Make Qutebrowser download in specific directory
#### General overview

The goal is to make Qutebrowser store downloads in a subdirectory of `~/downloads` named after the ISO date of the day when the download happened.

#### What I already did

In my `~/.config/qutebrowser/config.py`, I made this function:
def setDownloadDirectory():
	import os
	from datetime import datetime

	# Format the current date
	today = datetime.now().strftime('%Y-%m-%d')
	download_dir = os.path.join(os.path.expanduser('~/Téléchargements/www'), today)

	# Create the directory if it does not exist yet
	if not os.path.exists(download_dir):
		os.makedirs(download_dir)

	# Set the new download directory
	c.downloads.location.directory = download_dir

setDownloadDirectory()
So, yes, when I try to download something I get the desired behaviour: Qutebrowser suggests `~/Téléchargements/www/2025-06-07/` as the download directory.

#### The problem

So, what's the problem? Well, there are two main problems:

1. The date is set to the moment when Qutebrowser was opened, not to the moment when the download happens. Suppose I open Qutebrowser at 23:50: the directory will be set to `/2025-06-07`, and downloads that happen at 00:01 or later, i.e. on the next day, will still land in the `/2025-06-07` directory.
2. The day's directory is created immediately when Qutebrowser starts, not when a download happens. So it can create multiple empty folders for the times I just opened Qutebrowser without downloading anything.

#### The question

Is it possible to run `setDownloadDirectory()` only when certain events are triggered, such as when a download is requested? That would ensure the date really is the moment when the download happens, and also create a directory only when needed.
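One workaround, sketched below under a few assumptions: qutebrowser userscripts receive a `$QUTE_FIFO` command pipe they can write commands to, and a line such as `config.bind(',d', 'spawn --userscript set-download-dir')` in `config.py` would bind a key to run the script. The script name, path and binding are hypothetical, and the directory is only recomputed when the key is pressed (e.g. just before starting a download), not automatically on every download.

```bash
#!/bin/sh
# Hypothetical userscript, e.g. ~/.local/share/qutebrowser/userscripts/set-download-dir
# Recompute the dated directory now and tell qutebrowser to use it.
dir="$HOME/Téléchargements/www/$(date +%F)"
mkdir -p "$dir"
echo "set downloads.location.directory $dir" >> "$QUTE_FIFO"
```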
fauve (1529 rep)
Jun 7, 2025, 02:09 PM
30 votes
9 answers
46563 views
command-line tool for a single download of a torrent (like wget or curl)
I'm interested in a single command that would download the contents of a torrent (and perhaps participate as a seed following the download, until I stop it). Usually, there is a torrent-client daemon which should be started separately beforehand, and a client to control it (like `transmission-remote`). But I'm looking for the simplicity of `wget` or `curl`: give one command, get the result after a while.
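A hedged sketch of two clients that behave this way (the torrent file and magnet link are placeholders):

```bash
# aria2c takes a .torrent file or magnet link directly, no separate daemon:
aria2c --seed-time=0 file.torrent      # stop seeding as soon as the download finishes
aria2c 'magnet:?xt=urn:btih:...'       # keeps seeding until interrupted

# transmission-cli is a comparable one-shot client:
transmission-cli file.torrent
```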
imz -- Ivan Zakharyaschev (15862 rep)
May 5, 2015, 12:01 PM • Last activity: May 18, 2025, 09:24 PM
0 votes
2 answers
2188 views
Downloading embedded video
I have switched to GNU/Linux for the first time; I'm using Linux Mint 20. I now have a problem downloading online videos from websites, because IDM is not available for Linux. When I was using Windows, I just needed to log in to the website and, bam, a video download link appeared from IDM. Now that I'm on Linux I installed XDM, but it didn't automatically capture the video from this website. Is there any software that works just like IDM? What can I do to download this video?
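A common command-line alternative is yt-dlp, sketched below; whether it can extract the stream depends on the site, and the URL and browser name are placeholders:

```bash
# Try to extract and download the embedded video from a page:
yt-dlp 'https://example.com/page-with-video'

# If the site requires a login, reuse the cookies from an existing browser session:
yt-dlp --cookies-from-browser firefox 'https://example.com/page-with-video'
```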
Jims-Rc (1 rep)
Jan 12, 2021, 02:07 PM • Last activity: Apr 19, 2025, 11:04 AM
0 votes
1 answer
584 views
WGet fails to download with an IPv6 address
When I try to download something with wget, it usually fails if the domain name resolves to an IPv6 address. That happens more often when there is a link redirection. For example:

$ wget --inet6-only https://raw.githubusercontent.com/walkxcode/Dashboard-Icons/main/png/ebay.png
--2022-11-13 10:27:05--  https://raw.githubusercontent.com/walkxcode/Dashboard-Icons/main/png/ebay.png
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8001::154, 2606:50c0:8000::154, 2606:50c0:8003::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8001::154|:443...

$ wget --inet6-only https://download.docker.com/linux/centos/9/x86_64/stable/Packages/docker-ce-20.10.21-3.el9.x86_64.rpm
--2022-11-13 10:32:52--  https://download.docker.com/linux/centos/9/x86_64/stable/Packages/docker-ce-20.10.21-3.el9.x86_64.rpm
Resolving download.docker.com (download.docker.com)... 2600:9000:21ed:e000:3:db06:4200:93a1, 2600:9000:21ed:2e00:3:db06:4200:93a1, 2600:9000:21ed:7000:3:db06:4200:93a1, ...
Connecting to download.docker.com (download.docker.com)|2600:9000:21ed:e000:3:db06:4200:93a1|:443...

In the first example, the raw.githubusercontent.com name is resolved to 2606:50c0:8001::154, 2606:50c0:8000::154, 2606:50c0:8003::154. Then wget tries to connect to this IP and nothing else happens; it just freezes and no download is made. The same happens in the second example, download.docker.com. When I force an IPv4 connection, it downloads the content successfully:

$ wget --inet4-only https://raw.githubusercontent.com/walkxcode/Dashboard-Icons/main/png/ebay.png
--2022-11-13 10:28:16--  https://raw.githubusercontent.com/walkxcode/Dashboard-Icons/main/png/ebay.png
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24872 (24K) [image/png]
Saving to: ‘ebay.png’
ebay.png  100%[==========>]  24.29K  --.-KB/s  in 0.005s
2022-11-13 10:28:16 (4.48 MB/s) - ‘ebay.png’ saved [24872/24872]

$ wget --inet4-only https://download.docker.com/linux/centos/9/x86_64/stable/Packages/docker-ce-20.10.21-3.el9.x86_64.rpm
--2022-11-13 10:34:35--  https://download.docker.com/linux/centos/9/x86_64/stable/Packages/docker-ce-20.10.21-3.el9.x86_64.rpm
Resolving download.docker.com (download.docker.com)... 52.84.83.27, 52.84.83.65, 52.84.83.79, ...
Connecting to download.docker.com (download.docker.com)|52.84.83.27|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21654080 (21M) [binary/octet-stream]
Saving to: ‘docker-ce-20.10.21-3.el9.x86_64.rpm’
docker-ce-20.10.21-3.el9.x86_64.rpm  100%[==========>]  20.65M  27.3MB/s  in 0.8s
2022-11-13 10:34:36 (27.3 MB/s) - ‘docker-ce-20.10.21-3.el9.x86_64.rpm’ saved [21654080/21654080]

Why is wget failing with IPv6?
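A sketch for narrowing this down, under the assumption that IPv6 routing from this host is broken somewhere upstream (a common cause of this "resolves but hangs on connect" symptom); the wgetrc setting mirrors the `--inet4-only` option:

```bash
# Does IPv6 work at all from this machine?
ping -6 -c 3 raw.githubusercontent.com            # ping6 on older distributions
curl -6 -v -o /dev/null https://raw.githubusercontent.com/

# If IPv6 connectivity is broken, make wget default to IPv4:
echo 'inet4_only = on' >> ~/.wgetrc
```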
markfree (425 rep)
Nov 13, 2022, 01:45 PM • Last activity: Feb 1, 2025, 11:40 AM
2 votes
1 answer
81 views
How to check consistency of a generated web site using recursive HTML parsing
I have a FOSS project whose web site is generated by `asciidoc` and some custom scripts as a horde (thousands) of static files, built locally in the source repo, copied into another workspace, uploaded to a github.io-style repository, and eventually served over HTTP for browsers around the world to see. Users occasionally report that some of the links between site pages end up broken (lead nowhere). The website build platform is generally POSIX-ish, although most often the agent doing the regular work is a Debian/Linux one. *Maybe* the platform differences cause the "page outages"; maybe this bug is platform-independent. I had a thought about crafting a check, for the two local directories as well as the resulting site, that crawls all relative links (and/or absolute ones starting with its domain name(s)) and reports any broken pages, so I could focus on finding why they fail and/or avoid publishing "bad" iterations - same as with compilers, debuggers and warnings elsewhere. The general train of thought is about using some `wget` spider mode, though any other command-line tool (`curl`, `lynx`...), Python script, shell with `sed`, etc. would do as well. Surely this particular wheel has been invented too many times for me to even think about making my own? A quick and cursory googling session while on a commute did not come up with any good fit, however. So, suggestions are welcome :)
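A minimal sketch of the wget spider approach (the site URL is a placeholder): with `--spider -r`, wget fetches nothing to disk but follows every link and summarises broken ones at the end of the log. Dedicated tools such as linkchecker can do the same job against a live URL.

```bash
# Crawl the published site recursively without saving pages, log everything:
wget --spider -r -l inf -nv -o spider.log https://example.github.io/

# wget appends a "Found N broken links." summary plus the offending URLs:
tail -n 40 spider.log
grep -iE 'broken|404' spider.log
```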
Jim Klimov (131 rep)
May 7, 2024, 07:04 AM • Last activity: Jan 12, 2025, 09:01 AM
0 votes
0 answers
25 views
Linux deb package download with dependencies
I know that the apt command also downloads dependencies of dependencies. Is there a way to make it work like this, but only download, without installing? Below is the command I tried:

apt-cache depends libpango1.0-dev | grep "Depends" | awk '{print $2}' | xargs apt-get download

How can I get the dependencies together with the target package and download them all? I found https://unix.stackexchange.com/questions/408346/how-to-download-package-not-install-it-with-apt-get-command and ran sudo apt reinstall --download-only libpango1.0-dev, but it fails with an error (screenshot omitted), and the linked approach only downloads the package itself, not its dependencies. Thanks.
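A commonly used sketch for grabbing the whole dependency closure with apt-cache's recursive mode; it assumes the packages are all available from the configured repositories and drops the .deb files into the current directory:

```bash
# List the recursive dependency closure (package names only), then download each .deb:
apt-get download $(apt-cache depends --recurse --no-recommends --no-suggests \
    --no-conflicts --no-breaks --no-replaces --no-enhances \
    libpango1.0-dev | grep '^\w' | sort -u)
```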
CW_LEE (1 rep)
Nov 26, 2024, 01:30 AM • Last activity: Dec 2, 2024, 01:31 AM
0 votes
1 answer
166 views
Download full web page and save without a deep directory structure? Also, bypass paywall?
So, I want to be able to download a web page in a way similar to what https://archive.is does. Using `wget -p -E -k` usually produces a decent result, but that result is somewhat hard to handle. For example, after

wget -p -E -k https://news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540

I got a directory named news.sky.com, and the page was available as news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540.html, while the other files needed by the page were scattered around in this same news.sky.com directory. I'd prefer something similar to how browsers "save a page": the page file in the current directory plus a "something_files" subdirectory where the necessities are. I understand I can roughly do that by moving the site directory structure into that files subdirectory and creating a redirect page next to it, but I'd prefer to do it properly if possible. There are also cases of paywalls that archive.is successfully bypasses but wget -p -E -k does not. For example, with https://www.nytimes.com/2014/10/28/magazine/theo-padnos-american-journalist-on-being-kidnapped-tortured-and-released-in-syria.html, archive.is produced a perfect paywall-less copy, while wget -p -E -k produced the start of the article hanging on "verifying access". I'd like to be doing what archive.is does. Advice on how to change these things would be much appreciated.
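For the layout half of the question, one hedged sketch: wget's -nd (no directories) plus -P put the page and all of its assets flat into a single folder, and -k rewrites the links to those flat names. Not exactly the browser "page + _files" layout (the HTML also lands in the subfolder), but a single self-contained directory; it does nothing about paywalls.

```bash
# Everything (HTML page plus assets) lands flat in ./article_files/, with links rewritten:
wget -p -E -k -nd -P article_files \
  https://news.sky.com/story/yazidi-woman-kidnapped-by-islamic-state-freed-from-gaza-after-decade-in-captivity-13227540
```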
Mikhail Ramendik (538 rep)
Oct 14, 2024, 02:02 PM • Last activity: Nov 8, 2024, 04:59 PM
22 votes
4 answers
28029 views
Resume an aria2 downloaded file by its *.aria2 file
I have a partially downloaded file from aria2. Next to it is a file with the same name, ending in .aria2. I don't know the download link; I only have these two files. I want to know how I could resume the download in this situation. Note: the *.aria2 control file is created alongside the download file and remains until the download finishes.
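If the original URI (or .torrent/metalink) can be recovered, a minimal sketch is to re-run aria2c in the same directory with --continue so it picks the *.aria2 control file back up; the path and URL below are placeholders. Without any source reference, the control file alone is not enough to resume an HTTP download.

```bash
cd /path/to/the/partial/file
# -c / --continue reuses <file>.aria2 and resumes instead of starting over:
aria2c -c 'http://example.com/original/file.iso'
```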
r004 (3549 rep)
Jul 11, 2014, 03:48 PM • Last activity: Oct 29, 2024, 08:26 AM
1 vote
1 answer
958 views
Debian script to download source of installed packages fails
I used the following script from Ask Ubuntu to automate the download of the sources of all installed packages on a fresh Debian 9.3 LXDE installation. From here:

#!/bin/bash
dpkg --get-selections | while read line
do
    package=`echo $line | awk '{print $1}'`
    mkdir $package
    cd $package
    apt-get -q source $package
    cd ..
done

My problem is that I get some errors, and it downloads a similar but not the wanted package:

> sh: 1: dpkg-source: not found
> W: Download is performed unsandboxed as root as file 'libreoffice_5.2.7-1.dsc' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
> E: Unpack command 'dpkg-source --no-check -x libreoffice_5.2.7-1.dsc' failed.
> Reading package lists... Picking 'libreoffice' as source package instead of 'libreoffice-calc'

You can imagine that it downloads 300 MB or so every 3-4 minutes (libreoffice), many times over (for almost every dependency of libreoffice)... Does anyone have a better suggestion than that script to automate the source download of the packages used on my system?
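A hedged rework, assuming deb-src entries are enabled in sources.list: install dpkg-dev first (it provides the missing dpkg-source), map each binary package to its source package, and de-duplicate, so libreoffice is fetched once rather than once per component.

```bash
#!/bin/bash
# Sketch: download each source package exactly once.
sudo apt-get install -y dpkg-dev        # provides dpkg-source, needed to unpack .dsc files

dpkg-query -W -f '${Package}\n' |
while read -r pkg; do
    # Resolve the binary package to its source package name (fall back to the same name).
    src=$(apt-cache showsrc "$pkg" 2>/dev/null | awk '/^Package:/ {print $2; exit}')
    echo "${src:-$pkg}"
done | sort -u |
while read -r src; do
    mkdir -p "$src" && ( cd "$src" && apt-get -q source "$src" )
done
```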
koleygr (355 rep)
Feb 10, 2018, 08:02 PM • Last activity: May 22, 2024, 09:09 AM
0 votes
1 answer
60 views
Downloading files from url that return save dialog box
I am trying to download files from URLs that return a save dialog box. I am using wget, and it is not working. I am using the following command: wget -ci LinkDownloadHw.csv. Below is a sample of the links used in LinkDownloadHw.csv. They point to rain-gauge data on Hidroweb, a platform of the National Water Agency in Brazil.

> "http://www.snirh.gov.br/hidroweb/rest/api/documento/convencionais?tipo=3&documentos=2244030 "
> "http://www.snirh.gov.br/hidroweb/rest/api/documento/convencionais?tipo=3&documentos=2244054 "
> "http://www.snirh.gov.br/hidroweb/rest/api/documento/convencionais?tipo=3&documentos=2244088 "
> "http://www.snirh.gov.br/hidroweb/rest/api/documento/convencionais?tipo=3&documentos=2244090 "

It returned "Invalid URL" and "Scheme missing" messages. So, I have two questions. First, how can I download files from a URL that returns a save dialog box? I accept other tools besides wget. Second, with wget, how can I save the files under different names, such as the last seven digits of each URL? Thanks for any help!
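The "Scheme missing" errors usually come from the literal double quotes and trailing spaces in the list, which wget treats as part of the URL. A hedged sketch that strips them and names each file after the trailing documentos number (the .zip extension is an assumption about what Hidroweb serves):

```bash
# Remove quotes and spaces, then download each URL under the station-number name:
tr -d '" ' < LinkDownloadHw.csv | while read -r url; do
    [ -n "$url" ] || continue
    id=${url##*documentos=}          # e.g. 2244030
    wget -O "${id}.zip" "$url"       # .zip extension is an assumption
done
```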
THIAGO CARLOS LOPES RIBEIRO (1 rep)
May 2, 2024, 11:50 PM • Last activity: May 3, 2024, 06:29 AM
-3 votes
1 answer
181 views
Download File Which has the latest Time Stamp From Website
### Website is like this

The site is a minimal file listing (screenshot omitted); I can't add the actual site as it's from work. The files don't have version numbers, but they do have different names. There is no "latest file" link either; it's a very minimal website. You can only tell which file is the latest by its timestamp.

#### Ideas

- curl the webpage, add the files to an array, then download the last one in the index with curl
- curl the webpage, sort the files by upload date, subtract each upload date from the current date, then have curl download the file with the smallest time difference? I'd use date to convert to seconds.

Open to a bash solution. **Any better ideas?**
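A hedged bash sketch, assuming an Apache-style auto-index whose links carry the file names and which honours the ?C=M;O=A "sort by modification time" query string; the base URL is a placeholder:

```bash
#!/bin/bash
base='https://example.com/files/'     # placeholder for the work site
# Ask the index to sort by mtime (oldest first), take the last href, download it:
latest=$(curl -s "${base}?C=M;O=A" \
         | grep -o 'href="[^"?/][^"]*"' | cut -d'"' -f2 | tail -n 1)
curl -O "${base}${latest}"
```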
Nickotine (554 rep)
Feb 22, 2024, 11:44 PM • Last activity: Feb 26, 2024, 12:44 PM
2 votes
2 answers
4599 views
FTP: get the latest file in server
I have an FTP server running, and it irregularly generates a latest file. The file is stored as Home->T22:30:10->new.txt, and the latest one would be (a new folder) Home->T23:10:25->new.txt (note that this is a new folder with the latest time). I need to implement something (it could be anything: C code, a Bash script, etc.) on a Linux machine that pulls the latest file over. I have looked into two options: 1. Use libcurl, parse the directory listing, and select the latest file. This is really annoying and time-consuming, and I still can't find an easy way to do it. 2. Use lftp and, at initialization, remove all the files on the server, so that each time I call lftp to download something it would be the latest one. (This method is only conceptual and I haven't tried it in real life.) Is there any easier option?
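Since the folder names are timestamps, they sort chronologically as plain text (at least within a day, an assumption worth checking), so a short sketch with curl can list, sort, and fetch; the host and credentials are placeholders:

```bash
#!/bin/bash
base='ftp://user:password@ftp.example.com/Home/'   # placeholder credentials/host
# List the timestamp-named folders, take the lexically newest, grab its new.txt:
latest=$(curl -s --list-only "$base" | sort | tail -n 1)
curl -s -o new.txt "${base}${latest}/new.txt"
```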
andy_ttse (707 rep)
Mar 25, 2015, 04:21 PM • Last activity: Jan 30, 2024, 11:28 PM
-5 votes
1 answer
71 views
virtual machine stuck
My virtual machine is stuck on this screen (screenshot: https://i.sstatic.net/brKXK.png) and has just kept downloading for hours. Please help.
muhammad wajahat ali khan (1 rep)
Jan 23, 2024, 11:44 AM • Last activity: Jan 23, 2024, 12:16 PM
1 vote
1 answer
2021 views
Continuing an interrupted `wget` session?
Is it possible to continue an interrupted wget session - e.g. by parsing the log file (created with -o or -a), or after somehow having wget store additional information to disk (like its list of parsed and pending links)? I know the -N option allows wget to pick up where it left off as long as the server supports size and date listing, but the site I was downloading had mostly PHP-generated content, so I don't think -N will work. I don't expect to continue what I started, but if it's at all possible, I'd like to turn on anything that will help before retrying, in case I get interrupted again.

I also ran into an additional problem... I got lots of "ERROR 400: Bad Request"... I assume that means I got a bit *too* eager, so the server blocked me and/or the database got overburdened for a while. Anyway, would it be possible to recover from that too? Make wget basically continue where it left off (after parsing the log or link list or whatever), but also redo pages where it ran into trouble (e.g. after I first edit the list).
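wget has no way to persist its pending-link queue, but a restart can be made cheap and gentler on the server. A hedged sketch (the site URL is a placeholder): --no-clobber skips pages already on disk, and the wait options spread requests out so the 400 errors are less likely to come back.

```bash
# Re-run the same crawl; existing files are skipped, requests are throttled:
wget -r -l inf -nc --wait=2 --random-wait --tries=3 -a wget.log https://example.com/
```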
Baard Kopperud (7253 rep)
Aug 28, 2015, 01:47 PM • Last activity: Jan 19, 2024, 02:06 PM
0 votes
2 answers
369 views
How can I download a very large list of URLs so that the downloaded files are split into subfolders containing the first letter of the filenames?
I want to download many files (> tens of millions). I have the URL for each file. I have the list of URLs in a file `URLs.txt`:
http://mydomain.com/0wd.pdf 
http://mydomain.com/asz.pdf 
http://mydomain.com/axz.pdf 
http://mydomain.com/b00.pdf 
http://mydomain.com/bb0.pdf 
etc.
I can download them via `wget -i URLs.txt`; however, it would go over the [maximum](https://stackoverflow.com/a/466596/395857) number of files that can be placed in one folder. How can I download this large list of URLs so that the downloaded files are split into subfolders named after the first letter of the filenames? E.g.:
0/0wd.pdf
a/asz.pdf
a/axz.pdf
b/b00.pdf
b/bb0.pdf
etc.
If that matters, I use Ubuntu.
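A minimal bash sketch of the bucketing loop; it derives each bucket from the first character of the file name and can be parallelised later (e.g. by splitting URLs.txt and running several copies):

```bash
#!/bin/bash
# Download every URL in URLs.txt into a subfolder named after the
# first character of its file name (0/, a/, b/, ...).
while read -r url; do
    [ -n "$url" ] || continue
    name=${url##*/}          # e.g. 0wd.pdf
    bucket=${name:0:1}       # e.g. 0
    mkdir -p "$bucket"
    wget -q -O "$bucket/$name" "$url"
done < URLs.txt
```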
Franck Dernoncourt (5533 rep)
Dec 21, 2023, 10:22 PM • Last activity: Dec 24, 2023, 12:15 AM
0 votes
0 answers
962 views
dnf repoquery --best?
Is there a `dnf repoquery --requires --resolve` option that makes it pick among dependency options the same way `dnf install` automatically does?

## Problem

I need to download an RPM and all its recursive dependencies for offline install, using a system that doesn't have sudo permissions and doesn't have exactly the same existing packages installed on it. This means I can't simply use `dnf install --downloadonly --destdir`, since it both requires sudo permissions and will only download the dependencies that aren't present on the current system (my target system has a more limited list of what's currently installed). So I have to recursively resolve all dependencies of a package manually, then pass that list of NEVRA-format packages to the `dnf download` command. However, I've found that when you use `dnf repoquery --requires --resolve {package}`, it lists every possible package that can provide the direct requirements, not just the "best" one that `dnf install {package}` automatically picks.

## Example of problem

For example, `dnf repoquery --requires --resolve curl` lists both libcurl and libcurl-minimal, which are conflicting packages and are expected to be options you choose between to satisfy the dependency. Tracing this, `dnf repoquery --deplist curl` lists that the requirement libcurl >= {version} can be satisfied by a few versions each of libcurl-{version} and libcurl-minimal-{version}. This matches the idea that libcurl and libcurl-minimal are alternative options to one another. Compare this to when I run `dnf install curl`: it just automatically picks the "best" libcurl dependency and chooses libcurl-{version}.

## Question

So, since `dnf install {package}` somehow automatically picks which single provider of a requirement to install, how do I query to determine what that would be, when I seemingly can't run `dnf install` directly due to the sudo permission limits and the need to include dependencies that are already installed on the system I'm performing the download from?

I am also aware of `dnf download --resolve --alldeps` (from dnf core plugins), which resolves dependencies the same way as `dnf install` when resolving the requires, but I also need to prune dependencies and their branches for some things like basesystem, and that's not possible with `dnf download`.
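Grounded in the question's own closing note, a hedged sketch: `dnf download --resolve --alldeps` performs the same depsolving as `dnf install`, needs no root, and includes already-installed dependencies, so any pruning then has to happen on the downloaded set (the rm line is a crude, hypothetical example and does not remove the pruned package's whole branch):

```bash
# Resolve the full closure the way `dnf install` would and fetch it without root:
dnf download --resolve --alldeps --destdir ./rpms curl

# Crude post-hoc pruning of unwanted packages:
rm -f ./rpms/basesystem-*.rpm
```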
mtalexan (316 rep)
Dec 20, 2023, 05:26 PM • Last activity: Dec 21, 2023, 03:35 PM
10 votes
3 answers
27103 views
How to move completed torrent downloads to another folder without breaking the torrent link?
I'm using Transmission on EOs, and I've downloaded a bunch of torrents which ended up in my Download folder in my /home. I'd now like to move the files to my external hard drive without breaking the link, so I can keep contributing by seeding (uploading) these files. What should I do?
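A short sketch of letting Transmission relocate the data itself, so it keeps seeding from the new path (the torrent ID and destination are placeholders); the GUI equivalent is the torrent's "Set Location…" action.

```bash
transmission-remote -l                                    # list torrents and their IDs
transmission-remote -t 3 --move /media/external/torrents  # move torrent 3's data, keep seeding
```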
neydroydrec (3887 rep)
Dec 23, 2014, 02:12 PM • Last activity: Dec 6, 2023, 10:38 AM
1 vote
3 answers
1047 views
wget — download multiple files over multiple nodes on a cluster
Hi there. I'm trying to download a large number of files at once - 279 to be precise. These are large BAM files (~90 GB each). The cluster where I'm working has several nodes, and fortunately I can allocate multiple instances at once. Given this situation, I would like to know whether I can use `wget` from a batch file (*see* example below) to assign each download to a separate node to carry out independently. **batch_file.txt**
-O DNK07.bam
 -O mixe0007.bam
 -O IHW9118.bam
.
.
In principle, this will not only speed things up but also prevent the run from failing, since the wall-time for this execution is 24 h, which won't be enough to download all those files consecutively on a single machine. This is what my Bash script looks like:
#!/bin/bash
#
#SBATCH --nodes=279 --ntasks=1 --cpus-per-task=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#
#SBATCH --job-name=download
#SBATCH --output=sgdp.out
##SBATCH --array=[1-279]%279
#
#SBATCH --partition=
#SBATCH --qos=
#
#SBATCH --account=

#NAMES=$1
#d=$(sed -n "$SLURM_ARRAY_TASK_ID"p $NAMES)

wget -i sgdp-download-list.txt
As you can see, I was thinking of using an array job (not sure whether it will work); alternatively, I thought about allocating 279 nodes, hoping SLURM would be clever enough to send each download to a separate node (not sure about that...). If you are aware of a way to do this efficiently, any suggestion is welcome. Thanks in advance!
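A hedged sketch of the array-job variant that the commented-out lines already hint at: one array task per line of the download list, so SLURM places each download on whatever node it allocates. Partition/QOS/account directives are omitted here and the per-task resources are guesses; it assumes each list line holds a URL plus its -O option.

```bash
#!/bin/bash
#SBATCH --array=1-279
#SBATCH --nodes=1 --ntasks=1 --cpus-per-task=1
#SBATCH --time=24:00:00
#SBATCH --mem=10gb
#SBATCH --job-name=download
#SBATCH --output=sgdp_%a.out

# Pick the Nth line of the list; it holds the URL plus its -O option,
# so $line is deliberately left unquoted to let the shell split the words.
line=$(sed -n "${SLURM_ARRAY_TASK_ID}p" sgdp-download-list.txt)
wget $line
```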
Matteo (209 rep)
Oct 23, 2023, 11:48 AM • Last activity: Dec 4, 2023, 06:36 PM
0 votes
1 answer
120 views
Dolphin service menu to run konsole command with copied data in working directory
I would like to combine these two elements that I already use:

- a Konsole terminal running a `yt-dlp` command in order to download from a copied link (like `konsole -e yt-dlp 8 %u`)
- a Dolphin service menu to open Konsole at a specific location (`konsole --workdir %f`)

I would like a Dolphin service menu to open Konsole at a specific location AND run the yt-dlp command. I need this when I want to quickly run that command on a USB stick or other removable device, without having to manually copy its location or copy the file there after the download, etc. How can I combine the %f and/or %u parameters here? After --workdir they refer to the directory that is open in Dolphin, and after yt-dlp they refer to the copied download link, so a line like Exec=konsole --workdir -e yt-dlp 8 %u doesn't work.
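A hedged sketch of a Plasma 5 service-menu file: since %u in a service menu refers to the selected item rather than the clipboard, the copied link is read with xclip inside a small helper script (xclip, the file names and the action name are all assumptions; any extra yt-dlp options from the original command can be added to the helper).

```
# ~/.local/share/kservices5/ServiceMenus/ytdlp-here.desktop   (hypothetical path/name)
[Desktop Entry]
Type=Service
ServiceTypes=KonqPopupMenu/Plugin
MimeType=inode/directory;
Actions=ytdlpHere;

[Desktop Action ytdlpHere]
Name=yt-dlp clipboard URL here
Icon=utilities-terminal
Exec=konsole --workdir %f -e ytdlp-clip
```

The helper keeps the Exec line free of quoting problems; it is a hypothetical script placed somewhere on $PATH and marked executable:

```bash
#!/bin/sh
# ~/bin/ytdlp-clip: download whatever URL is currently in the clipboard
# into the working directory Konsole was started in.
yt-dlp "$(xclip -o -selection clipboard)"
```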
cipricus (1779 rep)
Nov 27, 2023, 10:09 AM • Last activity: Nov 27, 2023, 10:22 AM