Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
0 votes · 0 answers · 41 views
Connect to docker container through host via ssh without RemoteCommand
I have a server that runs multiple docker containers. I can access my server via SSH, and have set up my `ssh_config` to allow me to ssh into certain containers that I regularly access:

```
Host some_container
    HostName my.server.com
    RemoteCommand docker compose -f /docker-compose.yml exec some_container fish
    RequestTTY force
```
However, I now need to use a particular piece of software that uses ssh to access my containers. This software sets the ssh command argument. With the above configuration, this causes ssh to error out with `Cannot execute command-line and remote command`, due to the presence of `RemoteCommand`.
I do NOT want to have to run an sshd server inside the container.
I have attempted to replace `RemoteCommand` with `ProxyCommand`, but this results in me connecting to my server with the docker command being ignored:

```
ProxyCommand ssh %h -W %h:%p \
    -o "RequestTTY=force" \
    -o "SessionType=default" \
    -o "RemoteCommand=docker compose -f /docker-compose.yml exec some_container fish"
```

(Note that this is all one line in my `ssh_config`; I have split it up here to make it easier to read.)
Is there any way to ssh into my docker container without running `sshd` in the container or using `RemoteCommand`?
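For reference, a sketch of the one-shot command the alias above encodes (this is what the config expands to; it does not by itself resolve the `RemoteCommand` clash, but it is the form other software could invoke directly):

```bash
ssh -t my.server.com docker compose -f /docker-compose.yml exec some_container fish
```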
Gunnar Knutson
(1 rep)
Jul 2, 2025, 08:23 PM
• Last activity: Jul 3, 2025, 03:46 AM
3 votes · 1 answer · 1585 views
In Podman, how to disable "Executing external compose provider" message when using "podman compose"?
I installed Podman Desktop app v. 1.18.1 on macOS Sequoia.
If I execute:

```bash
podman compose version
```

… I get this on the console:

> \>>>> Executing external compose provider "/usr/local/bin/docker-compose". Please see podman-compose(1) for how to disable this message.
> Docker Compose version v2.36.0
I am guessing that Podman is using an implementation of *Docker Compose* rather than its own implementation. (Does Podman even have its own implementation of the *Compose* spec?)

That message seems to suggest I should call a command `podman-compose`. But there is no such command. Running this:

```bash
which podman-compose
```

… results in:

> podman-compose not found

Obviously there is a `podman compose` command+subcommand. I used that above for `version`. But the message says `podman-compose` with a hyphen, a command that does not seem to exist.
So how do I disable that message?
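As a hedged sketch (worth verifying against `man containers.conf` for your Podman version): recent Podman releases read a `compose_warning_logs` setting from `containers.conf`, which suppresses exactly this provider message:

```
# ~/.config/containers/containers.conf  (user-level override)
[engine]
compose_warning_logs = false
```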
Basil Bourque
(1671 rep)
May 18, 2025, 08:10 PM
• Last activity: May 26, 2025, 11:20 AM
2 votes · 1 answer · 85 views
docker can't access some tagged images
I'm facing the following error using `docker` on my Linux Ubuntu system.
```
ubuntu@ubuntu:~$ docker version
Client: Docker Engine - Community
 Version:           28.0.1
 API version:       1.48
 Go version:        go1.23.6
 Git commit:        068a01e
 Built:             Wed Feb 26 10:41:08 2025
 OS/Arch:           linux/amd64
 Context:           default

ubuntu@ubuntu:~$ docker images
REPOSITORY                 TAG       IMAGE ID       CREATED        SIZE
yangsuite-one-container    latest    604c66275985   7 weeks ago    2.7GB
ubuntu                     22.04     a24be041d957   3 months ago   77.9MB
carlo-ubuntu               latest    1245a5e87e81   5 months ago   608MB
ubuntu_iperf               latest    072ecc269646   7 months ago   387MB
debian                     latest    617f2e89852e   7 months ago   117MB
ubuntu                     20.04     6013ae1a63c2   7 months ago   72.8MB
networkstatic/iperf3       latest    377c80503c6d   12 months ago  82MB
ios-xr/xrd-control-plane   7.11.2    58b64211c345   14 months ago  1.29GB
ubuntu                     18.04     f9a80a55f492   24 months ago  63.2MB

ubuntu@ubuntu:~$ docker inspect ios-xr/xrd-control-plane
[]
Error: No such object: ios-xr/xrd-control-plane

ubuntu@ubuntu:~$ docker inspect "ios-xr/xrd-control-plane"
[]
Error: No such object: ios-xr/xrd-control-plane
```
As you can see, when I try to access/inspect the tagged image `ios-xr/xrd-control-plane` I get an error. What is actually the problem?
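A sketch of the likely mechanism: when an image reference carries no explicit tag, docker assumes `:latest`. The listing above shows `ios-xr/xrd-control-plane` only with tag `7.11.2`, so a bare reference resolves to a nonexistent `:latest`:

```shell
# Tag-defaulting logic, demonstrated without needing a docker daemon:
ref="ios-xr/xrd-control-plane"
case "$ref" in
  *:*) tag="${ref##*:}" ;;   # explicit tag present
  *)   tag="latest"      ;;  # no tag: docker assumes :latest
esac
echo "$tag"   # → latest
```

If that diagnosis holds, `docker inspect ios-xr/xrd-control-plane:7.11.2` (with the tag spelled out) should match the listed image.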
CarloC
(385 rep)
May 24, 2025, 01:09 PM
• Last activity: May 24, 2025, 05:00 PM
0 votes · 1 answer · 28 views
Run a VPN server alongside a website served by Docker
I have a server running this [CMS](https://github.com/mediacms-io/mediacms) as a website, by running a Docker file like [this](https://github.com/mediacms-io/mediacms/blob/main/docker-compose-letsencrypt.yaml) with `docker-compose`, which internally uses the `nginxproxy/nginx-proxy` and `nginxproxy/acme-companion` Docker images.

Now, I intend to follow the instructions given [here](https://www.euro-space.net/blog/virtual-server-for-vpn/tutorial/how-to-setup-ikev2-vpn-using-strongswan-and-letsencrypt-on-centos-8.php) to set up a VPN on the same server, which also uses Let's Encrypt.
# Question
Can I run the VPN alongside the previous CMS? Would I run into any trouble?
Megidd
(1579 rep)
Oct 21, 2024, 10:10 AM
• Last activity: May 14, 2025, 02:00 PM
5 votes · 1 answer · 4128 views
How to view the full docker compose build log? daemon.json is ignored
I want to debug a failing `docker compose build --no-cache` (it succeeds with the cache); however, due to the log limit, I cannot see the reason for the failure, only the message `output clipped, log limit 1MiB reached`.

Following the Docker documentation, I created the following `/etc/docker/daemon.json`:
```json
{
  "features": { "buildkit": true },
  "log-driver": "local",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```
After `systemctl restart docker`, `docker info` shows:
```
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  compose: Docker Compose (Docker Inc., 2.6.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: local
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1.m
 runc version:
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.18.5-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.41GiB
 Name: konradspc
 ID: 6NFV:RQP7:V6XK:7D6Z:W2LC:LPGR:HQBQ:V55P:BECL:WXPP:YPC5:2QQ2
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Notably, the logging driver is now shown as "local", where before it was "json-file", so it seems to me as if Docker is successfully loading `/etc/docker/daemon.json`.

However, when I execute `docker compose build --no-cache` again, I still get the message `output clipped, log limit 1MiB reached`.

How do I get `docker compose` to not ignore `daemon.json`, or is there any other way to access the full build log?

I am using Docker version 20.10.17, build 100c70180f, Docker Compose version 2.6.0 with BuildKit on Arch Linux with kernel 5.18.5.

BuildKit cannot be deactivated because the build uses `--mount=type=cache,target=/root/.m2` to cache Maven dependencies.
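One hedged sketch of a different lever: the 1MiB clip is BuildKit's per-*step* log limit, not the container logging driver configured in `daemon.json`, and it is controlled by environment variables on the buildkitd instance. With a `docker-container` buildx builder those can be passed via `--driver-opt` (whether `docker compose build` picks up the selected builder depends on the Compose version; the builder name `biglog` is made up here):

```bash
docker buildx create --name biglog \
  --driver docker-container \
  --driver-opt env.BUILDKIT_STEP_LOG_MAX_SIZE=-1 \
  --driver-opt env.BUILDKIT_STEP_LOG_MAX_SPEED=-1
docker buildx use biglog
docker compose build --no-cache
```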
Konrad Höffner
(1028 rep)
Jun 22, 2022, 09:44 AM
• Last activity: Apr 19, 2025, 06:01 PM
0 votes · 1 answer · 38 views
Starting a systemd user service for a docker rootlesskit user
I run my docker stuff with a dedicated user, and installed the docker rootlesskit. I start docker with `systemctl --user start docker.service`. Everything related to docker, executed as that user, works.

I am now installing a nostr relay. I followed the instructions and the thing actually works. But.

I have to run it via `tmux` and then `scripts/start_local`. This script basically runs `docker compose -f $DOCKER_FILE up -d`.

I tried setting up a `systemd` unit for that. First, I tried the `/etc/systemd/system/` location, with the correct `USER` variable and working directory. However, it would fail to start with the famous `Cannot connect to the Docker daemon at unix:///home/me/.docker/run/docker.sock. Is the docker daemon running?` error message.

So I thought maybe it's because I should run the systemd unit as the user. So I installed it into `$HOME/.config/systemd/user` with a symlink in `$HOME/.config/systemd/user/default.target.wants`. Basically, the same way my docker service runs.

To my surprise, I get the same error. Going back to running it in tmux, it just works. What is different here?
For clarity, here is the service file I am using:

```ini
[Unit]
Description=Nostr TS Relay

[Service]
Type=simple
Restart=always
RestartSec=5
WorkingDirectory=/home/me/nostream
ExecStart=/home/me/nostream/scripts/start_local
ExecStop=/home/me/nostream/scripts/stop

[Install]
WantedBy=default.target
```
Note: it's not a big deal, I can have it running with tmux, but I'd prefer the systemd variant.
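A common difference between a tmux login shell and a systemd user unit is the environment: rootless docker clients locate the daemon via `DOCKER_HOST`, which the rootless setup typically exports in the shell profile, and systemd units do not inherit shell-profile variables. A hedged sketch of a unit drop-in, assuming the socket path from the error message:

```ini
# Added e.g. via: systemctl --user edit <service-name>
[Service]
Environment=DOCKER_HOST=unix:///home/me/.docker/run/docker.sock
```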
unsafe_where_true
(333 rep)
Mar 27, 2025, 05:33 PM
• Last activity: Mar 27, 2025, 05:44 PM
0 votes · 1 answer · 826 views
How to Configure Cgroup V2 limits on docker-compose containers
I want to configure cgroups V2 resource limitation on a Docker-Compose container. How do I do this?
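As a hedged sketch (service name and image are hypothetical): Compose v2 translates `deploy.resources.limits` into cgroup resource limits for the container, which on a cgroup v2 host land in the unified hierarchy:

```yaml
services:
  app:                    # hypothetical service name
    image: nginx:alpine   # hypothetical image
    deploy:
      resources:
        limits:
          cpus: "0.50"    # at most half a CPU
          memory: 512M    # memory ceiling
```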
F1Linux
(2744 rep)
Jan 14, 2025, 10:48 AM
• Last activity: Jan 14, 2025, 08:17 PM
0 votes · 1 answer · 36 views
How to send messages from serviceA running in a docker container to serviceB running on the host?
I have a scenario as follows:

At the basic.target stage, my host starts running serviceB, which creates a unix socket file `/tmp/.test_sock` when the service starts.

At the multi-user.target stage, the docker container "test_dc" is created.

What can I do to create a volume in the docker container test_dc which mounts `/tmp/.test_sock`?
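A sketch of the usual approach (image name is hypothetical): a unix socket can be bind-mounted into a container like any other host path, as long as the socket already exists when the container starts — which the basic.target/multi-user.target ordering described above should guarantee:

```yaml
services:
  test_dc:
    image: alpine   # hypothetical image
    volumes:
      - /tmp/.test_sock:/tmp/.test_sock   # bind-mount the host's socket
```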
user688442
(1 rep)
Dec 16, 2024, 09:50 AM
• Last activity: Jan 12, 2025, 12:52 PM
1 vote · 1 answer · 59 views
how to have vpn traffic routed to pihole
I have a pihole server running in docker compose on my Debian Linux server. I also host a wireguard VPN (also in docker compose) on the same server. Using the tcpdump command, I have confirmed that all traffic from my laptop is routed to the Debian server. My only issue at this stage is that pihole ad blocking doesn't seem to work while using the VPN, which is really important to me in some circumstances. Is there any way to have my wireguard VPN traffic use my pihole DNS server?
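A hedged sketch: ad blocking through a tunnel usually depends on the client-side WireGuard config's `DNS =` line pointing at pihole; addresses below are hypothetical placeholders, not from the question:

```
# Client-side WireGuard config fragment (laptop)
[Interface]
PrivateKey = <client key>   # placeholder
Address = 10.13.13.2/32     # hypothetical tunnel address
DNS = 10.13.13.1            # hypothetical address where pihole is reachable through the tunnel
```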
Ravi
(11 rep)
Dec 21, 2024, 01:09 AM
• Last activity: Jan 12, 2025, 12:46 PM
0 votes · 0 answers · 58 views
How to connect volumes running in docker for Owncloud to the host's vm folders?
I am trying to configure Owncloud in docker. The docker-compose yaml I am using looks like this:
```yaml
services:
  owncloud:
    image: owncloud/server:10.15
    container_name: owncloud_server
    restart: always
    ports:
      - "8084:8080"
    depends_on:
      - mariadb
      - redis
    environment:
      - OWNCLOUD_DOMAIN=https://owncloud.example.com
      - OWNCLOUD_TRUSTED_DOMAINS=owncloud.example.com
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=mariadb
      - OWNCLOUD_ADMIN_USERNAME=admin
      - OWNCLOUD_ADMIN_PASSWORD=admin
      - OWNCLOUD_MYSQL_UTF8MB4=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - /home/ubuntu/oci-owncloud-config:/var/www/config
      - /home/ubuntu/oci-owncloud-data:/mnt/data
  mariadb:
    image: mariadb:10.11
    container_name: owncloud_mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=owncloud
      - MYSQL_USER=owncloud
      - MYSQL_PASSWORD=owncloud
      - MYSQL_DATABASE=owncloud
      - MARIADB_AUTO_UPGRADE=1
    command: ["--max-allowed-packet=128M", "--innodb-log-file-size=64M"]
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-u", "root", "--password=owncloud"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - /home/ubuntu/oci-owncloud-db:/var/lib/mysql
  redis:
    image: redis:6
    container_name: owncloud_redis
    restart: always
    command: ["--databases", "1"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - /home/ubuntu/oci-owncloud-redis:/data
```
When I run it, I get three containers (redis, mariadb, and owncloud). The folders on the host VM are connected to my OCI Object Storage with s3fs. The issue is with the folder users and permissions. What users and permissions should I use with Owncloud in order to connect them to my host VM's folders?

I tried:

```bash
sudo chown -R root:root /home/ubuntu/oci-owncloud-(name)
sudo chmod -R 755 /home/ubuntu/oci-owncloud-(name)
```

but it doesn't work.
Fotios Tragopoulos
(101 rep)
Dec 2, 2024, 09:46 AM
• Last activity: Dec 2, 2024, 10:22 AM
0 votes · 0 answers · 28 views
Loading volume into docker container
I am using Ubuntu on my host machine, and I have a docker container also running Ubuntu that contains an ASP .NET website. Now the issue is I can't seem to figure out how to get the container to mount my SSL keys from my host machine. My docker-compose.yml file has the following volumes specified.
```yaml
    volumes:
      - /etc/letsencrypt/archive/example.com/fullchain.pem:/etc/ssl/certs/fullchain.pem:ro
      - /etc/letsencrypt/archive/example.com/privkey.pem:/etc/ssl/private/privkey.pem:ro
      - app-data:/app/data
      - app-data:/root/.aspnet/DataProtection-Keys

volumes:
  app-data:
```
I also verified these files exist by using `cat /etc/letsencrypt/archive/example.com/privkey1.pem` and `cat /etc/letsencrypt/archive/example.com/fullchain1.pem`, which both worked perfectly. But when I compose my container, I always get the following errors because it can't seem to find the file.
```
Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
   at Interop.Crypto.CheckValidOpenSslHandle(SafeHandle handle)
   at Internal.Cryptography.Pal.OpenSslX509CertificateReader.FromFile(String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
   at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
   at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password)
   at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, String fileName, String password)
```
I also ensured the permissions are right by running the following, to no avail.

```bash
sudo chmod 644 /etc/letsencrypt/archive/example.com/fullchain1.pem
sudo chmod 600 /etc/letsencrypt/archive/example.com/privkey1.pem
sudo chmod 755 /etc/letsencrypt/archive
sudo chmod 755 /etc/letsencrypt/archive/example.com
```
Next, I tried manually starting the container, but I get the same error where the container instantly closes due to the exception.

```bash
docker run -it --rm \
  -v /etc/letsencrypt/archive/example.com/fullchain1.pem:/etc/ssl/certs/fullchain.pem:ro \
  -v /etc/letsencrypt/archive/example.com/privkey1.pem:/etc/ssl/private/privkey.pem:ro \
  server /bin/bash
```
Lastly, here is my Program class, which is trying to read the HTTPS cert and is what generates the actual no-file-found exception.

```csharp
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>()
                    .UseKestrel(options =>
                    {
                        options.ListenAnyIP(80); // HTTP
                        options.ListenAnyIP(443, listenOptions =>
                        {
                            listenOptions.UseHttps("/etc/ssl/certs/fullchain.pem", "/etc/ssl/private/privkey.pem");
                        });
                    });
            });
}
```
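One observation drawn purely from the snippets above: the files verified with `cat` are `fullchain1.pem` and `privkey1.pem`, but the compose file mounts `fullchain.pem`/`privkey.pem` (without the `1`) on the host side, which would make the bind mounts point at nonexistent files. A sketch matching the archive names:

```yaml
    volumes:
      - /etc/letsencrypt/archive/example.com/fullchain1.pem:/etc/ssl/certs/fullchain.pem:ro
      - /etc/letsencrypt/archive/example.com/privkey1.pem:/etc/ssl/private/privkey.pem:ro
```

Note that Let's Encrypt's `live/` directory normally holds unversioned symlinks to the latest archive files, which is another way to avoid the numbered names.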
UnSure
(1 rep)
Nov 24, 2024, 08:59 PM
• Last activity: Nov 24, 2024, 09:30 PM
0 votes · 1 answer · 1449 views
Podman and Docker: Sharing a network and/or hostname resolution between services?
So I have a docker network named `home` where all of my root-based containers (or docker containers that were simply too hard to port to podman) live.

```
sudo docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
9d5788b45048   bridge    bridge    local
b1f4756feab4   home      bridge    local
5d7ee6579f19   host      host      local
8678a773e2f2   none      null      local
```
And, for podman, I have a very similar configuration.

```
podman network ls
NETWORK ID    NAME    DRIVER
8b17ae3d5d67  home    bridge
2f259bab93aa  podman  bridge
```
The problem? Well, it turns out that name resolution for containers doesn't work from one network to the other. So, for example, I have `nginx-proxy-manager` running on podman and I want to redirect `http://domain/freshrss` to the freshrss service by specifying `freshrss` and the associated port number. This doesn't work, and that makes sense to me, as the docker network and the podman network are fundamentally *detached* from one another.

So my question is simple: Is there some way to **treat these two networks as *one* network by bridging them together without hurting the sanctity of individualized network configurations**? Alternatively, is there *some way* for me to get around this communication issue without having to respecify the domain name in the forward name and port? For example, I thought forwarding to `127.0.0.1:<port>` would work, as it would go to the host and then connect to the appropriate port, but that didn't work.

`docker/podman compose`-tailored answers are welcome, as that's how I'm configuring my services.
TheYokai
(143 rep)
Nov 22, 2024, 09:38 PM
• Last activity: Nov 22, 2024, 09:59 PM
0 votes · 0 answers · 55 views
Running vsftpd in Docker (Swarm)
I want to use a vsftpd server in my Docker Swarm but am having some network issues. I have the following compose file:

```yaml
services:
  vsftpd:
    container_name: vsftpd
    image: million12/vsftpd
    restart: always
    volumes:
      - /whatever:/var/ftp/:ro
    environment:
      - ANONYMOUS_ACCESS=true
      - LOG_STDOUT=true
      - CUSTOM_PASSIVE_ADDRESS=""
    ports:
      - 20-21:20-21
      - 21100-21110:21100-21110
    #network_mode: "host"
```
If I run this compose file with docker compose and network mode "host", then I can easily connect to it. If I run it without network mode "host", I get the following output (client log translated from German):

```
Status:   Logged in
Status:   Retrieving directory listing...
Command:  PWD
Response: 257 "/" is the current directory
Command:  TYPE I
Response: 200 Switching to Binary mode.
Command:  PASV
Response: 500 OOPS: invalid pasv_address
Command:  PORT 10,10,10,102,10,187
Error:    Connection closed by server
```
I can see the login in the output of the vsftpd server:

```
vsftpd | [VSFTPD 16:46:41] VSFTPD daemon starting
vsftpd | Tue Nov 19 16:46:45 2024 [pid 33] CONNECT: Client ""
vsftpd | Tue Nov 19 16:46:45 2024 [pid 32] [ftp] OK LOGIN: Client "", anon password "anonymous@example.com"
```

I have the same results if I run it on swarm.

It seems like I'm having issues with the passive ports and the docker network. Unfortunately, there is no `network_mode: host` in swarm (even if I pin the service to a host).

Is there a possibility to run vsftpd in passive mode in Swarm, like Compose with network mode host? Or are there other ways to bring up a *working* ftp server in Swarm?
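A hedged sketch of one thing to check: the `500 OOPS: invalid pasv_address` response suggests the empty `CUSTOM_PASSIVE_ADDRESS=""` is being passed straight through as vsftpd's `pasv_address`. Behind NAT or a docker bridge, passive mode generally needs the externally reachable address advertised explicitly (the IP below is a hypothetical placeholder):

```yaml
    environment:
      - ANONYMOUS_ACCESS=true
      - LOG_STDOUT=true
      - CUSTOM_PASSIVE_ADDRESS=203.0.113.10   # hypothetical: the host's externally reachable IP
```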
swarmer91
(1 rep)
Nov 19, 2024, 05:09 PM
1 vote · 1 answer · 661 views
How to get docker-compose back in Fedora 41?
After upgrading Fedora 40 to 41, `docker-compose` was no longer available.

When I try to re-install with `sudo dnf install docker-compose`, it raises the following conflicts:
```
- installed package docker-compose-plugin-2.29.7-1.fc41.x86_64 conflicts with docker-compose-plugin provided by docker-compose-2.29.7-1.fc41.x86_64 from fedora
- package docker-compose-2.29.7-1.fc41.x86_64 from fedora conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.2-1.fc41.x86_64 from docker-ce-stable
- package docker-compose-2.29.7-1.fc41.x86_64 from fedora conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.6-1.fc41.x86_64 from docker-ce-stable
- package docker-compose-2.29.7-1.fc41.x86_64 from fedora conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.7-1.fc41.x86_64 from docker-ce-stable
- installed package docker-compose-plugin-2.29.7-1.fc41.x86_64 conflicts with docker-compose-plugin provided by docker-compose-2.29.7-3.fc41.x86_64 from updates
- package docker-compose-2.29.7-3.fc41.x86_64 from updates conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.2-1.fc41.x86_64 from docker-ce-stable
- package docker-compose-2.29.7-3.fc41.x86_64 from updates conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.6-1.fc41.x86_64 from docker-ce-stable
- package docker-compose-2.29.7-3.fc41.x86_64 from updates conflicts with docker-compose-plugin provided by docker-compose-plugin-2.29.7-1.fc41.x86_64 from docker-ce-stable
```
Running the command line with `--allowerasing` or `--skip-broken` does not work either.

I also tried (but to no avail):

- removing/re-installing `docker-compose-plugin`
- removing/re-installing Docker as per the documentation (https://docs.docker.com/engine/install/fedora)

What more can I try to get `docker-compose` to work again?
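A hedged sketch following the conflict messages themselves: they say the Fedora `docker-compose` package and the `docker-compose-plugin` from docker-ce-stable provide the same capability and cannot coexist, so one sequence worth trying is removing the plugin and then installing the standalone package without reinstalling the plugin afterwards:

```bash
sudo dnf remove docker-compose-plugin
sudo dnf install docker-compose
```

The alternative is to keep the plugin and invoke `docker compose` (with a space) instead of `docker-compose`.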
David
(111 rep)
Nov 12, 2024, 04:18 PM
• Last activity: Nov 14, 2024, 02:38 PM
2 votes · 0 answers · 649 views
How can I get a docker compose container's replica number from inside it, without special tools?
I have a docker compose project where one of the services launches several replicas using the replicas directive. The replicas have automatically enumerated names, which also serve as hostnames. Note, this is different to the container id.

I want to find out the replica number of a container from inside itself, without installing special tools (like dig, etc.), or running runtime commands with exec. I'm okay with passing in an environment variable, but I don't know how to access that number in order to make it an environment variable.

The solutions I've seen either involve bloating the container with tools, confuse the concept of the id with the name, or suggest stuff like cat'ing `/proc/self/cgroup`, which just doesn't work for me.
Duncan Marshall
(651 rep)
Nov 5, 2024, 10:40 AM
0 votes · 0 answers · 52 views
How to take Docker Desktop full backup and restore
How do I take a full backup and restore of Docker Desktop? I see that there is a way to save a Docker image as a tar file in the local filesystem. I was looking to dump all the current images into one single tar file as a backup for the current state of Docker Desktop.

To save a docker image:

```bash
% docker save image-name | gzip > image-name.tar.gz
```

I am not aware how to restore it; also, this is only one image, and I have almost ~30 odd images. Can I take a backup in a single file?
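A hedged sketch: `docker save` accepts multiple image references, so all images can go into one archive, and `docker load` restores whatever the archive contains (the filename is arbitrary):

```bash
# Back up every tagged image into a single compressed archive
docker save $(docker images --format '{{.Repository}}:{{.Tag}}') | gzip > all-images.tar.gz

# Restore
gunzip -c all-images.tar.gz | docker load
```

Note this only covers images; untagged (`<none>`) images would need their IDs instead, and containers/volumes are not included.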
Varadharajan Nadar
(101 rep)
Oct 22, 2024, 10:21 PM
1 vote · 1 answer · 164 views
Why can't "dig" on rockylinux 9 find a container/host named "https" in a docker compose network?
Sorry, I don't know if this is a docker issue or a dig issue on rockylinux 9. Everything works as expected on rockylinux 8.

I have a `docker-compose.yml` file below with a service named `https`. That allows the container to be referenced by the hostname `https`. While `ping https` works, for some reason `dig https` (`DiG 9.16.23-RH`) does not work on rockylinux 9. It does work on rockylinux 8 (`DiG 9.11.36-RedHat-9.11.36-16.el8_10.2`). If I change the service name to `httpsx`, then `dig httpsx` works.
```yaml
services:
  https:
    image: "rockylinux:${RL_VERSION}"
    command: bash -c "yum install -y iputils bind-utils && echo '=====dig version output====' && dig -v && echo '=====ping https output====' && ping -c 3 https && echo '=====dig https output====' && dig +short https"
    environment:
      - RL_VERSION
```
Working 8:

```
% RL_VERSION=8 docker-compose up
Attaching to https-1
https-1  | Rocky Linux 8 - AppStream  5.7 MB/s | 11 MB  00:01
...
https-1  | Complete!
https-1  | =====dig version output====
https-1  | DiG 9.11.36-RedHat-9.11.36-16.el8_10.2
https-1  | =====ping https output====
https-1  | PING https (172.21.0.2) 56(84) bytes of data.
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=1 ttl=64 time=0.558 ms
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=2 ttl=64 time=0.051 ms
https-1  | 64 bytes from c3f0c7a6613c (172.21.0.2): icmp_seq=3 ttl=64 time=0.040 ms
https-1  |
https-1  | --- https ping statistics ---
https-1  | 3 packets transmitted, 3 received, 0% packet loss, time 2025ms
https-1  | rtt min/avg/max/mdev = 0.040/0.216/0.558/0.241 ms
https-1  | =====dig https output====
https-1  | 172.21.0.2
```
Failing 9:

```
% RL_VERSION=9 docker-compose up
[+] Running 1/1
 ✔ Container testhttps-https-1  Recreated  0.2s
Attaching to https-1
https-1  | Rocky Linux 9 - BaseOS  2.4 MB/s | 2.4 MB  00:00
...
https-1  | Complete!
https-1  | =====dig version output====
https-1  | DiG 9.16.23-RH
https-1  | =====ping https output====
https-1  | PING https (172.21.0.2) 56(84) bytes of data.
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=1 ttl=64 time=0.404 ms
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=2 ttl=64 time=0.117 ms
https-1  | 64 bytes from 4a2841b5dac9 (172.21.0.2): icmp_seq=3 ttl=64 time=0.088 ms
https-1  |
https-1  | --- https ping statistics ---
https-1  | 3 packets transmitted, 3 received, 0% packet loss, time 2009ms
https-1  | rtt min/avg/max/mdev = 0.088/0.203/0.404/0.142 ms
https-1  | =====dig https output====
https-1  | c.root-servers.net.
https-1  | l.root-servers.net.
https-1  | e.root-servers.net.
https-1  | d.root-servers.net.
https-1  | i.root-servers.net.
https-1  | b.root-servers.net.
https-1  | g.root-servers.net.
https-1  | m.root-servers.net.
https-1  | a.root-servers.net.
https-1  | f.root-servers.net.
https-1  | h.root-servers.net.
https-1  | j.root-servers.net.
https-1  | k.root-servers.net.
```
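A hedged sketch of the likely mechanism: newer BIND releases recognize `HTTPS` as a DNS resource-record *type*, so a bare `dig https` is parsed as "query type HTTPS for the default name" (hence the root-server output above) rather than "look up the name https". dig's `-q` option forces the argument to be treated as a query name:

```bash
# Explicitly mark "https" as the name to look up, not a record type
dig +short -q https
```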
jamshid
(384 rep)
Oct 11, 2024, 05:05 PM
• Last activity: Oct 11, 2024, 07:45 PM
1 vote · 2 answers · 9684 views
Docker Permissions issue accessing `/dev/dri` Device
I have a permissions problem. I am running Photoprism inside a Docker container on Ubuntu 22.04. I want to use Intel QuickSync hardware transcoding. To do this, the app needs to access the `/dev/dri` device. I am attempting to get the app running without using `privileged: true` in the docker-compose file. It works (i.e., the app is able to use `/dev/dri`) when using `privileged: true`. When I remove `privileged: true`, the app reports this, even though a `devices:` entry with `- /dev/dri` is in the `docker-compose.yml` file:

```
$ docker-compose up -d
...
 ⠿ Container photoprism-photoprism-1  Starting  1.9s
Error response from daemon: error gathering device information while adding custom device "/dev/dri": no such file or directory
```
Currently I have Plex installed natively (ie, not using Docker), and it works with /dev/dri
just fine.
Here are the permissions on /dev/dri
:
$ ls -al /dev/dri
total 0
drwxr-xr-x 3 root root 100 May 29 14:09 .
drwxr-xr-x 19 root root 5200 May 29 14:09 ..
drwxr-xr-x 2 root root 80 May 29 14:09 by-path
crw-rw----+ 1 root render 226, 0 May 29 14:09 card0
crw-rw----+ 1 root render 226, 128 May 29 14:09 renderD128
(render
is a group name, but what is the meaning of 226,
in the listing output?)
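Incidentally, the 226, 0 in that ls -al line is not part of the permissions: it is the character device's major and minor device number (major 226 belongs to the DRM subsystem behind /dev/dri; 0 and 128 are the minors for card0 and renderD128). A device number can be packed and unpacked programmatically, for example in Python:

```python
import os

# Encode a device number from the major/minor pair shown by ls:
# 226 is the DRM (Direct Rendering) major, 0 is the minor for card0.
dev = os.makedev(226, 0)
print(os.major(dev), os.minor(dev))  # 226 0

# On a real system you could read it off the device node itself:
# st = os.stat("/dev/dri/card0"); os.major(st.st_rdev) -> 226
```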
Here are the details of the plex
user (which is the user which is running plexmediaserver
, which is working with /dev/dri
):
$ id plex
uid=998(plex) gid=998(plex) groups=998(plex),44(video),109(render)
...and the user which is running docker-compose up -d
$ id myuser
uid=1000(myuser) gid=1000(myuser) groups=1000(myuser),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),120(lpadmin),131(lxd),132(sambashare)
I thought that since the plex
user is working fine with /dev/dri
, maybe I could use the plex
user with Photoprism as well. But I was unable to get it working:
# inside docker compose
user: "998:998"
The relevant parts of the docker-compose.yml
file:
services:
photoprism:
image: photoprism/photoprism:latest
...
## Start as non-root user before initialization (supported: 0, 33, 50-99, 500-600, and 900-1200):
user: "998:998"
## Share hardware devices with FFmpeg and TensorFlow (optional):
devices:
- "/dev/dri:/dev/dri" # Intel QSV
...
TL;DR:
Works when I use Docker privileged mode; doesn't work otherwise. Also, I could use some help understanding the ls
output, which includes an unexplained 226,
in it (see the ls
output above).
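One avenue worth trying (an assumption on my part, not something the question confirms works): since card0 and renderD128 are group-readable by the render group (gid 109 in the id output above), adding that group to the container process with Compose's group_add option may grant access to /dev/dri without privileged mode:

```yaml
services:
  photoprism:
    image: photoprism/photoprism:latest
    devices:
      - "/dev/dri:/dev/dri"   # Intel QSV
    # Supplementary group for the container process; 109 is the
    # render gid taken from the host's `id plex` output above.
    group_add:
      - "109"
```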
Eddified
(111 rep)
May 29, 2023, 11:37 PM
• Last activity: Oct 7, 2024, 10:34 AM
-1
votes
1
answers
367
views
syslog logging driver giving the error protocol wrong type for socket
I have a service defined via docker compose (see definition below). When I tried to start this service via docker-compose -f up --wait -d my_service, I get the error ``` Error response from daemon: failed to create task for container: failed to initialize logging driver: dial unix /dev/log: connect:...
I have a service defined via docker compose (see definition below). When I try to start this service via docker-compose -f up --wait -d my_service, I get the error
Error response from daemon: failed to create task for container: failed to initialize logging driver: dial unix /dev/log: connect: protocol wrong type for socket
On the host server where I'm executing the docker compose command, I can see that the socket exists and my user has write permissions:
srw-rw-rw-. 1 root root 0 Aug 29 2023 /dev/log
service definition:
my_service:
command:
image:
volumes:
- "/dev/log:/dev/log"
logging:
driver: "syslog"
options:
syslog-address: "unix:///dev/log"
tag: "my_service"
Does anyone know what could be causing this error?
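That error string is errno EPROTOTYPE, which on Linux is what a stream connect() returns when it hits a Unix socket path bound as a datagram socket. Journald's /dev/log is a datagram socket, while unix:// dials a stream socket; Docker's syslog driver also accepts a unixgram:// scheme, which may be what is needed here. A sketch reproducing the type mismatch outside Docker:

```python
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log")

# Bind a datagram socket at the path, like journald does for /dev/log.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
srv.bind(path)

# Dial it as a stream socket, like "unix://" does.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
except OSError as e:
    print(e.strerror)  # on Linux: "Protocol wrong type for socket"
finally:
    cli.close()
    srv.close()
```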
atl123
(3 rep)
Sep 20, 2024, 08:50 PM
• Last activity: Sep 21, 2024, 02:06 AM
0
votes
0
answers
545
views
Docker Compose not synchronising file changes in volume
Reposting from [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) as I don't quite understand how the "solution" works. **Symptom:** As reported [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177): I mount m...
Reposting from [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) as I don't quite understand how the "solution" works.
**Symptom:**
As reported [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) :
I mount my local files into the container for development. My
docker-compose.yml
file is as follows:
version: '3.7'
services:
node:
build: .
command: npm run dev
image: node
ports:
- "4000:3000"
volumes:
- .:/code
working_dir: /code
when I run the Next.js server in a container, it initially loads fine, but any changes made afterwards are not shown.
This matches my observation as well.
**Answers:**
There has been discussion of issues with running Docker on Windows 10 Pro, but I'm hosting on Linux for Linux, and the "solution" mentions nothing about Windows either.
**Solution:**
The reported working "solution" is:
version: '3.7'
services:
node:
build: .
command: npm run dev
image: node
ports:
- "4000:3000"
volumes:
- type: bind
source: .
target: /code
working_dir: /code
This, to my knowledge, is the same as what the OP used, i.e.:
volumes:
- .:/code
That is, I think the question of why _"Docker Compose not synchronising file changes in volume"_ was never clearly answered, and neither do any of the following pages answer it:
- https://stackoverflow.com/questions/44678042/named-docker-volume-not-updating-using-docker-compose
- https://stackoverflow.com/questions/44251094/i-want-to-share-code-content-across-several-containers-using-docker-compose-volu/44265470
- https://stackoverflow.com/questions/42958573/docker-compose-recommended-way-to-use-data-containers
My case:
I have a php site hosted on nginx:
version: '3.7'
services:
web:
image: nginx:latest
ports:
- "80:8081"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:delegated
- ./app:/app:delegated
php:
build:
context: .
dockerfile: PHP.Dockerfile
volumes:
- ./app:/app:delegated
restart: always
This is my observation:
- I do docker compose up
- Then press ^C to stop it
- Update the code on the host
- Re-run docker compose up
- Check the modified volume-mounted file inside the container
- The file stays the same as before the change.
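For what it's worth, the behaviour being debugged can be modelled without Docker (a rough analogy only, not the container mechanism itself): a bind mount such as ./app:/app is effectively a second name for the same host directory, so host edits must be visible, whereas files baked into an image at build time are a frozen snapshot:

```python
import os
import shutil
import tempfile

host = tempfile.mkdtemp()              # the "host" source directory
with open(os.path.join(host, "app.php"), "w") as f:
    f.write("v1")

# Image build: a copy frozen at build time (like COPY in a Dockerfile).
image = tempfile.mkdtemp()
shutil.copytree(host, image, dirs_exist_ok=True)

# Bind mount: just another name for the very same directory.
bind = host

# The host edits the file after the "build".
with open(os.path.join(host, "app.php"), "w") as f:
    f.write("v2")

print(open(os.path.join(image, "app.php")).read())  # v1 - stale snapshot
print(open(os.path.join(bind, "app.php")).read())   # v2 - change visible
```

If the container sees the stale version, the path inside the container is likely coming from the image (or a named volume) rather than from the bind mount.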
xpt
(1858 rep)
Aug 15, 2024, 02:01 PM
• Last activity: Aug 15, 2024, 02:06 PM