
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
1 answer
184 views
Where is the Systemd ordering cycle occurring?
I have two systemd units, a service and a socket. The desired behavior is that when I start the socket, the service starts first, followed by the socket, and when either is taken down, they should both go down. The service binds to the socket before the socket unit starts, which is why I made it a notify service: it doesn't report as started until it has bound the socket. Here are the files for the two units.
#rustyvxcan.service
[Unit]
Description=Docker VXCAN plugin Service
#PartOf=rustyvxcan.socket
    
Before=docker.service
After=network.target
    
[Service]
Type=notify
ExecStartPre=/usr/bin/mkdir -p /run/docker/plugins
ExecStart=/home/braedon/.cargo/bin/rustycan4docker
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
#rustyvxcan.socket
[Unit]
Description=A network plugin for vxcan
Before=docker.service
AssertPathExists=/run/docker/plugins
Requires=rustyvxcan.service
After=rustyvxcan.service

[Socket]
ListenStream=/run/docker/plugins/rustyvxcan.sock
RemoveOnStop=True

[Install]
WantedBy=sockets.target
The error message is as follows.
Sep 19 16:50:38 localhost systemd: rustyvxcan.service: Found ordering cycle on rustyvxcan.socket/start
Sep 19 16:50:38 localhost systemd: rustyvxcan.service: Found dependency on rustyvxcan.service/start
Sep 19 16:50:38 localhost systemd: rustyvxcan.service: Unable to break cycle starting with rustyvxcan.service/start
Sep 19 16:55:09 localhost systemd: rustyvxcan.socket: Found ordering cycle on rustyvxcan.service/start
Sep 19 16:55:09 localhost systemd: rustyvxcan.socket: Found dependency on rustyvxcan.socket/start
Sep 19 16:55:09 localhost systemd: rustyvxcan.socket: Unable to break cycle starting with rustyvxcan.socket/start
From the resources I've seen, it means the cycle involves only these two files, and I can clear the error by removing the After= line from the .socket file, but then they boot simultaneously and the executable for the service fails. I could probably work around this by adding a sleep to the socket, but I don't see enough dependency information here to cause a cycle. I have tried clearing /{lib,etc}/systemd/system of the service and socket files and reinstalling the ones pasted here, just to make sure older versions weren't running, and of course I ran systemctl daemon-reload regularly while modifying them. But I can't figure out why specifying After= breaks the dependency graph, even after stripping dependencies out of the service file. Where is the loop happening that I am missing?
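A detail that may explain the cycle (an assumption based on systemd's documented defaults, not verified against this setup): a socket unit gains an implicit Before= ordering on the service it activates, so the explicit After=rustyvxcan.service inside the socket contradicts that implicit edge all by itself. A sketch of the socket without the conflicting line, leaning on the notify handshake for sequencing:

```ini
# rustyvxcan.socket -- sketch; assumes systemd's implicit ordering
# (a socket unit is ordered Before= the service it triggers), so an
# explicit After=rustyvxcan.service would close a cycle on its own
[Unit]
Description=A network plugin for vxcan
Before=docker.service
AssertPathExists=/run/docker/plugins
Requires=rustyvxcan.service
# no After=rustyvxcan.service: Type=notify already delays "started"
# until the service has bound the socket path

[Socket]
ListenStream=/run/docker/plugins/rustyvxcan.sock
RemoveOnStop=true

[Install]
WantedBy=sockets.target
```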
Braedon (13 rep)
Sep 20, 2024, 12:17 AM • Last activity: Sep 22, 2024, 08:45 AM
0 votes
2 answers
332 views
First ssh-agent request fails on WSL with systemd socket-activation
Running Fedora on WSL2, I find that the socket activation on ssh-agent doesn't quite work properly: the first request that triggers the actual service starting fails. This may be a git fetch or git pull request, or else an ssh-add call. This shows up as a long timeout on the client call rather than as an immediate failure. Because the systemd config contains both ssh-agent.socket *and* ssh-agent.service, attempting to disable ssh-agent.socket and enable ssh-agent.service directly doesn't work, as it just turns the socket activation back on rather than configuring the service to start automatically:
~$ systemctl --user is-enabled ssh-agent.socket
enabled
~$ systemctl --user is-enabled ssh-agent.service
indirect
~$ systemctl --user enable ssh-agent.service
~$ systemctl --user is-enabled ssh-agent.service
indirect
~$ systemctl --user disable ssh-agent.socket
Removed "/home/acoghlan/.config/systemd/user/sockets.target.wants/ssh-agent.socket".
~$ systemctl --user enable ssh-agent.service
Created symlink /home/acoghlan/.config/systemd/user/sockets.target.wants/ssh-agent.socket → /usr/lib/systemd/user/ssh-agent.socket.
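One workaround sketch (assumption on my part: the packaged ssh-agent.service carries Also=ssh-agent.socket in its [Install] section, which is why enabling it reports "indirect" and re-creates the socket symlink): bypass the [Install] logic and wire the service into default.target directly, e.g. with systemctl --user add-wants default.target ssh-agent.service. The helper below creates the equivalent symlink by hand:

```shell
# Sketch: manually create the Wants= symlink that
# `systemctl --user add-wants default.target ssh-agent.service`
# would create, bypassing the unit's [Install] section entirely.
wire_service() {
    wants_dir=$1   # e.g. ~/.config/systemd/user/default.target.wants
    unit=$2        # e.g. /usr/lib/systemd/user/ssh-agent.service
    mkdir -p "$wants_dir"
    ln -sf "$unit" "$wants_dir/$(basename "$unit")"
}
```

A systemctl --user daemon-reload afterwards makes the new Wants= link take effect at the next login.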
ncoghlan (1071 rep)
Aug 29, 2024, 04:24 AM • Last activity: Aug 29, 2024, 06:58 AM
0 votes
1 answer
94 views
Automatically turn on and off rarely used services
I have a minimal-memory, lowest-budget VM on which I would like to run a rarely used, big service. Things would look much better if I could simply turn it off when I don't need it (about 99% of the work day). Fortunately, the tool starts relatively quickly. It is accessed over a TCP socket. I would put some socket in front of it, which could auto-start it when a request arrives. It could also auto-stop it when there has been no access for a while (some minutes or so). Can I somehow do that, maybe with some tricky systemd socket configuration?
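The pieces systemd provides for this can be sketched like so (hedged: all unit names and ports are invented, and --exit-idle-time needs a reasonably recent systemd-socket-proxyd): systemd listens on the public port, a proxy forwards to the real service and exits when idle, and StopWhenUnneeded= lets the big service stop once the proxy is gone:

```ini
# bigservice-proxy.socket -- sketch, public listener (hypothetical port)
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# bigservice-proxy.service -- forwards to the real service on a private
# port and exits after 10 idle minutes
[Unit]
Requires=bigservice.service
After=bigservice.service

[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd --exit-idle-time=10min 127.0.0.1:8081

# drop-in for bigservice.service, so it stops once the proxy exits
[Unit]
StopWhenUnneeded=yes
```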
peterh (10448 rep)
Aug 24, 2024, 09:11 PM • Last activity: Aug 24, 2024, 10:06 PM
0 votes
1 answer
1259 views
Simple systemd socket activation from HTTP request
I want to use systemd socket activation to start a service whenever an HTTP request is received on the listening socket. The incoming message is not important and can be discarded by systemd or the service it starts. I have been able to do this, but not without the HTTP sender crashing with errors of Connection reset by peer. I would like to satisfy the sending HTTP client, closing sockets or HTTP exchanges properly. First, I have a question of whether to use Accept=yes or no in the .socket file. For example: echo.socket:
[Unit]
Description=Example echo socket

[Socket]
ListenStream=127.0.0.1:33300
Accept=yes

[Install]
WantedBy=sockets.target
In man systemd.socket, it is suggested that

> For performance reasons, it is recommended to write new daemons only in a way that is suitable for Accept=no.

However, I take the name Accept= to indicate whether the incoming connection on the socket has already been accepted (presumably via accept() at the kernel level), because with Accept=no the burden is on the started service to accept and handle connections on the socket. See, for example, [this question](https://unix.stackexchange.com/questions/573767/systemd-socket-based-activation-service-fails-due-to-start-request-repeated-too) and its answer. I would really like to follow the man page's recommendation and use Accept=no, but the answer in the aforementioned question implies I would need to invoke some kind of TCP server, which is an additional learning curve and layer of complexity I'd like to avoid if possible. (Any answers along these lines would certainly be acceptable though.) So, assume then that I use Accept=yes, and I also have echo@.service:
[Unit]
Description=echo service

[Service]
ExecStart=sh -c 'echo -e "HTTP/1.1 200 OK\nContent-Type: text/plain; charset=utf-8\nConnection: close\n\nOK"'
StandardInput=socket
StandardOutput=socket
Sending HTTP requests works to activate the service, but the client doesn't like it:
$ curl 'http://127.0.0.1:33300'
OK
curl: (56) Recv failure: Connection reset by peer

$ journalctl --user -e
Jun 13 14:02:59 myhost systemd: Started echo service (127.0.0.1:39908).
(Try also, for example, python -c "import requests; r = requests.get('http://127.0.0.1:33300'); print(r.status_code)".) A simple socat without any HTTP seems to exit without issue though.
$ socat - TCP:127.0.0.1:33300
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Connection: close

OK
I don't know what the issue is here, if systemd closes the connection too abruptly for the HTTP exchange, if the HTTP exchange is missing some extra closing message, or something else. And, as mentioned above, I'm still curious about Accept=no solutions too.
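A possible explanation (an assumption, not verified against systemd internals): the shell exits and the connection closes while curl's request bytes are still unread in the socket buffer, and closing with pending unread data makes TCP send an RST, which curl reports as a reset. A sketch of a responder that drains the request headers first and advertises a Content-Length:

```shell
# Sketch of an Accept=yes responder: read the request headers up to the
# blank line before answering, so no unread data is pending at close.
http_respond() {
    cr=$(printf '\r')
    while IFS= read -r line; do
        # headers end at the first empty line ("\r\n" or a bare "\n")
        case "$line" in ""|"$cr") break ;; esac
    done
    body='OK'
    printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' \
        "${#body}" "$body"
}
```

Hooked into the template unit, this function body would replace the echo one-liner in ExecStart, with StandardInput=socket and StandardOutput=socket unchanged.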
tsj (207 rep)
Jun 13, 2024, 06:16 PM • Last activity: Jun 14, 2024, 03:31 PM
0 votes
0 answers
81 views
Why getting access denied when starting a port
I have a RHEL 8 Linux system running Db2. When starting the db2 hadr service, we get an error in the diag log like the one below. The command to start the hadr service does not return an error, but the remote machine cannot connect to this port either. The firewall and SELinux have been disabled. The sysadmin has no idea about it. Any help is welcome.

2023-12-15-08.07.33.324997+060 E46646E616 LEVEL: Error (OS)
PID     : 10565                TID : 140583339288320 PROC : db2sysc 0
INSTANCE: dbadma               NODE : 000            DB   : ACC1
HOSTNAME: uhdb2accdr.uhasselt.be
EDUID   : 95                   EDUNAME: db2hadrs.0.0 (ACC1) 0
FUNCTION: DB2 UDB, oper system services, sqloPdbQuerySocketErrorStatus, probe:15
MESSAGE : ZRC=0x840F0001=-2079391743=SQLO_ACCD "Access Denied"
          DIA8701C Access denied for resource "", operating system return code was "".
CALLED  : OS, -, getsockopt                          OSERR: EACCES (13)

A suggestion was to strace the startup, but the start command ends successfully and spawns a background/async process to listen on the port, so the strace ends when the start command returns to the shell prompt. Thanks for all help/info. Best regards, Guy
Guy Przytula (1 rep)
Dec 19, 2023, 07:02 AM • Last activity: Dec 20, 2023, 12:59 PM
7 votes
4 answers
7176 views
How do I make systemd sockets close when service is stopped?
I'm currently trying to make a systemd service with two Fifo sockets. These sockets map to stdout and stdin of the application. I'm currently using the following configuration files. *foo.service*
[Unit]
Description=foo Fifo test
After=network.target foo-in.socket foo-out.socket
Requires=foo-in.socket foo-out.socket

[Service]
Sockets=foo-out.socket
Sockets=foo-in.socket
StandardOutput=fd:foo-out.socket
StandardInput=fd:foo-in.socket
StandardError=journal
ExecStart=/path/to/foo/exec
*foo-out.socket*
[Unit]
Description=foo Task Writes to this

[Socket]
Service=foo.service
ListenFIFO=%t/foo/out
*foo-in.socket*
[Unit]
Description=foo Task reads commands from this

[Socket]
Service=foo.service
ListenFIFO=/run/user/1000/foo/in
I can start the service using the commands systemctl --user daemon-reload and systemctl --user start foo. The problem comes when I try stopping foo.service. I receive this message:
Warning: Stopping foo.service, but it can still be activated by:
  foo-in.socket
  foo-out.socket
Is there a way to stop the sockets automatically when the service is stopped?
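One mechanism that may apply here (the drop-in below is a sketch, not taken from the question): PartOf= propagates stop and restart from the named unit, so adding it to each socket's [Unit] section should make stopping foo.service stop the FIFO sockets too:

```ini
# Drop-in sketch for foo-in.socket and foo-out.socket
# (e.g. via systemctl --user edit foo-in.socket):
[Unit]
PartOf=foo.service
```

With that in place, systemctl --user stop foo should take both sockets down, while starting either socket still activates the service as before.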
Rene (171 rep)
Feb 5, 2019, 08:06 PM • Last activity: Oct 14, 2023, 06:02 PM
0 votes
2 answers
546 views
podman Error: wrong number of file descriptors for socket activation protocol (2 != 1)
I'm encountering what seems like a bug with the socket activation mechanism for podman, though I'm not sure if the issue is podman or systemd. I created an alternative managed socket unit for the podman service in order to expose the standard /run/docker.sock path expected by default by docker tooling:
# systemctl cat docker.socket
# /etc/systemd/system/docker.socket
[Unit]
Description=Docker API Socket
Documentation=man:podman-system-service(1)

[Socket]
ListenStream=%t/docker.sock
SocketMode=0660
Service=podman.service

[Install]
WantedBy=sockets.target

[Socket]
SocketGroup=wheel
Basically the same thing as the default podman.socket unit. Now, I'm not sure if having multiple sockets activating the same service is problematic; it didn't seem to be the case up until now, assuming the default podman.socket unit is properly disabled. Now if I try to connect to the socket (e.g. nc -D -U /run/docker.sock), thus activating the podman service, podman gets thrown into a failure loop:
Mar 10 14:38:17 drpyser-workstation podman: time="2023-03-10T14:38:17-05:00" level=info msg="/usr/bin/podman filtering at log level info"
Mar 10 14:38:17 drpyser-workstation podman: time="2023-03-10T14:38:17-05:00" level=info msg="Setting parallel job count to 49"
Mar 10 14:38:17 drpyser-workstation podman: time="2023-03-10T14:38:17-05:00" level=info msg="Using systemd socket activation to determine API endpoint"
Mar 10 14:38:17 drpyser-workstation podman: Error: wrong number of file descriptors for socket activation protocol (2 != 1)
Mar 10 14:38:17 drpyser-workstation systemd: podman.service: Main process exited, code=exited, status=125/n/a
Mar 10 14:38:17 drpyser-workstation systemd: podman.service: Failed with result 'exit-code'
(This repeats for a while until it gives up.) I believe I can observe the condition podman is complaining about by looking at listeners on /run/docker.sock when I activate it. Before I activate the socket, lsof /run/docker.sock shows:
COMMAND PID USER   FD   TYPE             DEVICE SIZE/OFF   NODE NAME
systemd   1 root   47u  unix 0x00000000bad2c1a8      0t0 776246 /run/docker.sock type=STREAM (LISTEN)
So far so good, systemd is doing its job of listening on the socket waiting for incoming connections to pass on to podman. When I activate the socket:
COMMAND PID USER   FD   TYPE             DEVICE SIZE/OFF   NODE NAME
systemd   1 root   47u  unix 0x00000000bad2c1a8      0t0 776246 /run/docker.sock type=STREAM (LISTEN)
systemd   1 root   49u  unix 0x00000000dec938bb      0t0 802883 /run/docker.sock type=STREAM (LISTEN)
Now is this behavior normal? Is this systemd spawning a new file descriptor on the socket to pass on to podman, while still listening to incoming connections, in which case podman has no business complaining, and I should file a bug report to the podman team? Thanks. EDIT: actually, there seems to be some weird circular dependency going on because of my two socket units. docker.socket won't work if I mask podman.socket:
Mar 10 16:03:49 drpyser-workstation systemd: docker.socket: Failed to queue service startup job (Maybe the service file is missing or not a non-template unit?): Unit podman.socket is masked.
Mar 10 16:03:49 drpyser-workstation systemd: docker.socket: Failed with result 'resources'.
I'm failing to find a way to cut the dependency between podman.service and podman.socket:
podman.service
× ├─docker.socket
○ ├─podman.socket
● ├─system.slice
● └─sysinit.target
●   [...]
Despite playing around with overrides:
# /usr/lib/systemd/system/podman.service
[Unit]
Description=Podman API Service
Requires=podman.socket
After=podman.socket
Documentation=man:podman-system-service(1)
StartLimitIntervalSec=0

[Service]
Delegate=true
Type=exec
KillMode=process
Environment=LOGGING="--log-level=info"
ExecStart=/usr/bin/podman $LOGGING system service

[Install]
WantedBy=default.target

# /etc/systemd/system/podman.service.d/override.conf
[Unit]
Requires=
After=
Requires=docker.socket
After=docker.socket
Is there a way to make systemd understand I want to cut podman.socket out?
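One angle that might be worth checking (an assumption on my part, not a verified diagnosis): the number of fds systemd passes follows the service's Sockets= setting, which by default collects every active socket unit that names the service, so with both podman.socket and docker.socket alive, the service receives two fds regardless of the Requires=/After= overrides. A drop-in that resets the list to docker.socket only might look like:

```ini
# /etc/systemd/system/podman.service.d/override.conf -- sketch
[Unit]
Requires=
After=
Requires=docker.socket
After=docker.socket

[Service]
# reset the fd-inheritance list, then pass only docker.socket's fd
Sockets=
Sockets=docker.socket
```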
Charles Langlois (201 rep)
Mar 13, 2023, 01:48 PM • Last activity: Jul 2, 2023, 05:55 AM
1 vote
0 answers
245 views
How to get different UID using Dynamic Users with socket activation?
I'm following the [Dynamic Users with systemd](https://0pointer.net/blog/dynamic-users-with-systemd.html) post and creating waldo.socket and waldo@.service. Here is my waldo.socket:

[Socket]
ListenStream=2048
Accept=yes

And the corresponding waldo@.service:

[Service]
ExecStart=-sleep 300
DynamicUser=yes

It works nicely, but I discovered that all the sleep 300 instances are launched with the same UID:

$ ps fax -o uid,pid,cmd | grep sleep
61647 87279 sleep 300
61647 87282 sleep 300
61647 87285 sleep 300

I'd like each instance of the service to use a distinct UID, as is implied in that article:

> By combining dynamic user IDs with socket activation you may easily implement a system where each incoming connection is served by a process instance running as a different, fresh, newly allocated UID within its own sandbox.

**What am I doing wrong?**
Steve Schnepp (111 rep)
Jun 2, 2023, 01:44 PM
25 votes
4 answers
6750 views
On-demand SSH Socks proxy through systemd user units with socket-activation doesn't restart as wished
To reach an isolated network I use an [tag:SSH] -D [tag:socks] [tag:proxy]. In order to avoid having to type the details every time, I added them to ~/.ssh/config:

$ awk '/Host socks-proxy/' RS= ~/.ssh/config
Host socks-proxy
  Hostname pcit
  BatchMode yes
  RequestTTY no
  Compression yes
  DynamicForward localhost:9118

Then I created a [tag:systemd-user] service unit definition file:

$ cat ~/.config/systemd/user/SocksProxy.service
[Unit]
Description=SocksProxy Over Bridge Host

[Service]
ExecStart=/usr/bin/ssh -Nk socks-proxy

[Install]
WantedBy=default.target

I let the daemon reload the new service definitions, enabled the new service, started it, checked its status, and verified that it is listening:

$ systemctl --user daemon-reload
$ systemctl --user list-unit-files | grep SocksP
SocksProxy.service disabled
$ systemctl --user enable SocksProxy.service
Created symlink from ~/.config/systemd/user/default.target.wants/SocksProxy.service to ~/.config/systemd/user/SocksProxy.service.
$ systemctl --user start SocksProxy.service
$ systemctl --user status SocksProxy.service
● SocksProxy.service - SocksProxy Over Bridge Host
   Loaded: loaded (/home/alex/.config/systemd/user/SocksProxy.service; enabled)
   Active: active (running) since Thu 2017-08-03 10:45:29 CEST; 2s ago
 Main PID: 26490 (ssh)
   CGroup: /user.slice/user-1000.slice/user@1000.service/SocksProxy.service
           └─26490 /usr/bin/ssh -Nk socks-proxy
$ netstat -tnlp | grep 118
tcp   0  0 127.0.0.1:9118  0.0.0.0:*  LISTEN
tcp6  0  0 ::1:9118        :::*       LISTEN

This works as intended. Then I wanted to avoid having to start the service manually, or running it permanently with [tag:autossh], by using [tag:systemd] [tag:socket-activation] for on-demand (re-)spawning. That didn't work; I think (my version of) ssh cannot receive socket file descriptors.

I found the documentation ((http://0pointer.de/blog/projects/socket-activation.html), (http://0pointer.de/blog/projects/socket-activation2.html)), and [an example](https://unix.stackexchange.com/questions/352495/systemd-on-demand-start-of-services-like-postgresql-and-mysql-that-do-not-yet-s) of using the [systemd-socket-proxyd](https://www.freedesktop.org/software/systemd/man/systemd-socket-proxyd.html) tool to create two "wrapper" units, a "service" and a "socket":

$ cat ~/.config/systemd/user/SocksProxyHelper.socket
[Unit]
Description=On Demand Socks proxy into Work

[Socket]
ListenStream=8118
#BindToDevice=lo
#Accept=yes

[Install]
WantedBy=sockets.target

$ cat ~/.config/systemd/user/SocksProxyHelper.service
[Unit]
Description=On demand Work Socks tunnel
After=network.target SocksProxyHelper.socket
Requires=SocksProxyHelper.socket SocksProxy.service
After=SocksProxy.service

[Service]
#Type=simple
#Accept=false
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:9118
TimeoutStopSec=5

[Install]
WantedBy=multi-user.target

$ systemctl --user daemon-reload

This *seems* to work, until ssh dies or gets killed. Then it won't re-spawn at the next connection attempt when it should.

### Questions:

1. Can /usr/bin/ssh really not accept systemd-passed sockets? Or only newer versions? Mine is the [one from up2date Debian 8.9](https://packages.debian.org/jessie/openssh-client).
2. Can only units of root use the BindToDevice option?
3. Why is my proxy service not respawning correctly on the first new connection after the old tunnel dies?
4. Is this the right way to set up an "on-demand ssh socks proxy"? If not, how do you do it?
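On question 3, a low-tech mitigation sketch (not verified against this exact failure mode): have systemd respawn ssh itself whenever it dies, so the proxy always has a live tunnel to forward to:

```ini
# Drop-in sketch for SocksProxy.service
# (e.g. via systemctl --user edit SocksProxy.service):
[Service]
Restart=on-failure
RestartSec=5
```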
Alex Stragies (6144 rep)
Aug 3, 2017, 12:38 PM • Last activity: Feb 18, 2023, 06:44 PM
0 votes
1 answer
316 views
How to Activate a Mount of a Remote Share When Its Machine Connects?
How can one activate a mount of a remote SMB share when the remote machine connects? This is more about discerning a local event triggered by the connection of a particular remote machine, than it is about the action taken on that event. What can be determined is the port and protocol, of course, probably the source IP, and perhaps its MAC. To illustrate, imagine two Windows laptops named Blue and Green, each with a share named Data, that occasionally connect to a Linux Samba server named Martini. The objective is for Martini to mount \\Blue\Data to /srv/blue (or wherever)(and do other things) when Blue connects, and mount \\Green\Data to /srv/green (or wherever)(and do other things) when Green connects. Perhaps I'm too deep in the weeds but this seems harder than it looks. It's straightforward to mount a remote share when localhost connects to it, *e.g.*, when Martini boots, does its thing, finds Blue and Green running, and mounts their shares. I even have figured out how to activate a host mount of a share on a virtual machine when it fires up (create a systemd.path unit that monitors the VM's log file, then x-systemd.requires=foo.path in fstab). For a fully remote machine, however, I'm drawing a blank. There is a roundabout / Rube Goldberg way via the iptables LOG target and rsyslog (directly or via a systemd.path unit) but that has too many moving pieces and seems like a kludge. The hope is that something more direct exists. Socket activation can mind a port but (and I easily could be wrong) isn't obviously capable of discerning the connecting machine. Udev activation seems focused only on localhost's hardware. I haven't figured out a client-wise /dev, /proc, or other path to inspect, although I easily could have missed something. Perhaps there is something in /etc/samba/smb.conf. Pending further tail-chasing, I thought I'd post to see what ideas the community might have. Any input would be most appreciated.
ebsf (399 rep)
May 17, 2022, 04:43 AM • Last activity: May 18, 2022, 05:12 PM
15 votes
3 answers
5339 views
Deactivate a systemd service after idle time
I want a service to start on demand rather than on boot. To do that I could use systemd socket activation (with the service and socket files). But this is a resource limited server, so after some time (e.g. 1 hour) of inactivity, I want to stop the service (until it is triggered again). How can I do that? I looked through some of [the documentation](https://www.freedesktop.org/software/systemd/man/systemd.socket.html) but I can't figure out if this is supported. --- Update: Assuming this is unsupported, the use case is still probably quite common. What would be a good way / workaround to achieve this?
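One workaround sketch for the idle-stop half (everything here is hypothetical: the unit names and the check-idle.sh script would have to be written for the service in question): a timer that periodically runs an idle check and stops the service when it reports no activity:

```ini
# myservice-idle.timer -- sketch: fire every 15 minutes
[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target

# myservice-idle.service -- oneshot that stops the main unit when the
# hypothetical check-idle.sh exits 0 (meaning: no recent activity)
[Service]
Type=oneshot
ExecStart=/bin/sh -c '/usr/local/bin/check-idle.sh && systemctl stop myservice.service'
```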
lonix (1965 rep)
Sep 23, 2018, 02:23 PM • Last activity: May 3, 2022, 10:21 AM
0 votes
1 answer
770 views
How to restrict the access/activation times for a service/socket with systemd?
I have a simple systemd service that is activated by system socket. It's as simple as that (a little simplified):
$ systemctl cat example.socket 
# /usr/lib/systemd/system/example.socket
[Unit]
Description=Example Server socket

[Socket]
ListenStream=80
Accept=true

[Install]
WantedBy=sockets.target
$ systemctl cat example@.service 
# /usr/lib/systemd/system/example@.service
[Unit]
Description=Example Server

[Service]
StandardInput=socket
StandardOutput=socket
StandardError=journal
ExecStart=/usr/libexec/example
User=example
Now what I want is to implement a basic **access restriction** by time. I.e., I want to limit the time of day the socket/service can be activated/reached from the outside, so it's only available at certain times of day. I know I can use systemctl edit to override the options, but I did not find an option to set, actually. [I looked through the man page regarding systemd sockets](https://manpages.debian.org/stretch/systemd/systemd.socket.5.en.html) and the only options regarding times are TriggerLimitIntervalSec and the like, which do not do what I want. For comparison, the oldish tool xinetd, which can do the same thing, i.e. listen on a socket and start a process (server) on demand, [has an option](https://www-uxsup.csx.cam.ac.uk/pub/doc/redhat/redhat8/rhl-rg-en-8.0/s1-tcpwrappers-xinetd.html) called access_times, which can be used to specify when a service should be available. But bringing in another tool (/dependency) is not something I'd like; I'd aim for a way integrated into systemd.
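A systemd-native sketch of an access_times equivalent (hypothetical unit names; the window here is 08:00-18:00): two timers that start and stop the socket on a schedule:

```ini
# example-open.timer / example-open.service -- open the window at 08:00
[Timer]
OnCalendar=*-*-* 08:00:00
Persistent=true

[Install]
WantedBy=timers.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl start example.socket

# example-close.timer / example-close.service -- close it at 18:00
[Timer]
OnCalendar=*-*-* 18:00:00
Persistent=true

[Install]
WantedBy=timers.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl stop example.socket
```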
rugk (3496 rep)
May 14, 2021, 03:54 PM • Last activity: May 19, 2021, 12:33 PM
2 votes
0 answers
749 views
Can systemd be used as an inetd/xinetd replacement without being an init?
Can systemd be configured (in runtime or compile time) to serve as a simple process supervisor, not as /sbin/init? If yes, are there tutorials and other documentation to follow to make customized non-init-systemd setups? If no, what other thing should be used for UNIX-socket-activated services apart from plain openbsd-inetd (which, for example, cannot chown/chmod the sockets)?
Vi. (5985 rep)
Sep 24, 2020, 11:50 PM • Last activity: Sep 25, 2020, 01:55 AM
0 votes
2 answers
1364 views
Manually stopped service is not properly stopped
I have a server that is supposed to activate when someone tries to connect to it. For this I created a systemd socket and service that builds a proxy for my server and starts it. Thanks to this Tutorial it wasn't too hard. I made a FIFO pipe to communicate with the running server, and if nobody is active on it I want the server to stop. If I stop the server through the pipe, server.service stays in a loaded/deactivating/stop state while proxy.service stays running (state from systemctl list-units). I want the service to restart again if someone tries to connect again, but this only works if I manually run systemctl stop server.service.

----------

proxy.socket

[Socket]
ListenStream=25565

[Install]
WantedBy=sockets.target

proxy.service

[Unit]
Requires=server.service
After=server.service

[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:25555

server.service

[Unit]
Description=my Server

[Service]
User=nonRootUser
ExecStart=/home/nonRootUser/server/startup-fifo.sh
ExecStop=/home/nonRootUser/server/cmd stop
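A guess at what may be missing (sketch only): once the server shuts itself down through the pipe, the proxy keeps running and keeps the port occupied, so the next connection never re-triggers activation. BindsTo= ties the proxy's lifetime to the server, stopping the proxy (and handing listening duty back to proxy.socket) when the server exits:

```ini
# proxy.service -- sketch with BindsTo= added
[Unit]
Requires=server.service
After=server.service
BindsTo=server.service

[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:25555
```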
Rennsemmel01 (1 rep)
Feb 29, 2020, 06:40 PM • Last activity: Aug 14, 2020, 04:44 AM
1 vote
1 answer
2095 views
Systemd socket-based-activation service fails due to "start request repeated too quickly"
I have the following units:

test_socket_activation.socket
[Unit]
Description="************** MY TEST SOCKET ***************"
PartOf=test_socket_activation.service

[Socket]
ListenStream=127.0.0.1:9991

[Install]
WantedBy=sockets.target

test_socket_activation.service
[Unit]
Description="********** MY TEST SERVICE *****************"

[Service]
ExecStart=/home/xxx/sysadmin/systemd_units/socket_based_activation/testservice.sh

[Install]
WantedBy=multi-user.target

testservice.sh
#!/bin/bash
echo "Socket Service Triggered" > output.txt

In theory systemd should be listening on 127.0.0.1 port 9991 (via test_socket_activation.socket). When the socket is accessed, systemd should start the parent unit (test_socket_activation.service), which in turn should execute the script listed in the ExecStart directive (testservice.sh), which then creates a text file named output.txt to show that the socket has been accessed. This works as expected (i.e. output.txt is created when the socket is accessed), except that the service unit fails afterwards with "start request repeated too quickly, refusing to start", even though the socket is only accessed once. The failure of the service then triggers a failure of the associated socket, which stops listening. I did some testing; here are my steps and log outputs:

❯ sudo systemctl start test_socket_activation.socket
❯ sudo systemctl status test_socket_activation.socket
● test_socket_activation.socket - "************** MY TEST SOCKET ***************"
   Loaded: loaded (/etc/systemd/system/test_socket_activation.socket; disabled)
   Active: active (listening) since Thu 2020-03-19 14:08:40 +01; 17s ago
   Listen: 127.0.0.1:9991 (Stream)

Mar 19 14:08:40 toshi systemd: Starting "************** MY TEST SOCKET ***************".
Mar 19 14:08:40 toshi systemd: Listening on "************** MY TEST SOCKET ***************".

❯ echo "hello" | netcat 127.0.0.1 9991
❯ ls
output.txt  testservice.sh
❯ sudo systemctl status test_socket_activation.service
● test_socket_activation.service - "********** MY TEST SERVICE *****************"
   Loaded: loaded (/etc/systemd/system/test_socket_activation.service; disabled)
   Active: failed (Result: start-limit) since Thu 2020-03-19 14:10:42 +01; 9min ago
  Process: 2842 ExecStart=/home/mkr/sysadmin/systemd_units/socket_based_activation/testservice.sh (code=exited, status=0/SUCCESS)
 Main PID: 2842 (code=exited, status=0/SUCCESS)

Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: test_socket_activation.service start request repeated too quickly, refusing to start.
Mar 19 14:10:42 toshi systemd: Failed to start "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Unit test_socket_activation.service entered failed state.

❯ sudo systemctl status test_socket_activation.socket
● test_socket_activation.socket - "************** MY TEST SOCKET ***************"
   Loaded: loaded (/etc/systemd/system/test_socket_activation.socket; disabled)
   Active: failed (Result: service-failed-permanent) since Thu 2020-03-19 14:10:42 +01; 41s ago
   Listen: 127.0.0.1:9991 (Stream)

Mar 19 14:08:40 toshi systemd: Starting "************** MY TEST SOCKET ***************".
Mar 19 14:08:40 toshi systemd: Listening on "************** MY TEST SOCKET ***************".
Mar 19 14:10:42 toshi systemd: Unit test_socket_activation.socket entered failed state.

❯ sudo journalctl -u test_socket_activation.service
-- Logs begin at Thu 2020-03-19 14:05:23 +01, end at Thu 2020-03-19 14:17:29 +01. --
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: Started "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Starting "********** MY TEST SERVICE *****************"...
Mar 19 14:10:42 toshi systemd: test_socket_activation.service start request repeated too quickly, refusing to start.
Mar 19 14:10:42 toshi systemd: Failed to start "********** MY TEST SERVICE *****************".
Mar 19 14:10:42 toshi systemd: Unit test_socket_activation.service entered failed state.

As you can see above, the socket is accessed ONCE only; the service is triggered and it does call the script to create the output.txt file. But shortly afterwards the service fails as it tries to start multiple times. My questions are as follows:

1. Why is the service starting more than once, when the socket is only accessed once (via echo "hello" | netcat 127.0.0.1 9991)?
2. How can I avoid having the service start multiple times when the associated socket is only accessed once?

All answers / input / insights would be greatly appreciated, thanks.
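One plausible reading, sketched here for context rather than as a confirmed answer: with Accept=no (the default), the activated service is handed the listening socket itself, and since testservice.sh never accepts the pending connection, the socket remains readable after the script exits, so systemd activates the service again and again until the start rate limit trips. A hedged sketch of a per-connection (Accept=yes) variant; unit names mirror the question, and the changes are illustrative only:

```ini
# test_socket_activation.socket — hypothetical per-connection variant
[Socket]
ListenStream=127.0.0.1:9991
# systemd accept()s each connection itself and spawns one instance of
# test_socket_activation@.service per connection, so an unconsumed
# connection can no longer re-trigger the same unit in a loop
Accept=yes

[Install]
WantedBy=sockets.target

# test_socket_activation@.service — note the template "@" in the name
[Service]
ExecStart=/home/xxx/sysadmin/systemd_units/socket_based_activation/testservice.sh
# hand the connection to the script as stdin, inetd-style
StandardInput=socket
```

Whether this fits depends on whether one short-lived run per connection is what the poster actually wants.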
Lawless Leopard (107 rep)
Mar 19, 2020, 01:46 PM • Last activity: Mar 19, 2020, 02:32 PM
1 votes
1 answers
658 views
How to specify the service template name when using socket activation with Accept=yes
I have multiple .socket files; they listen with Accept=yes. They should all use the same service template to process connections. By default systemd looks for a service template with the same name as the socket, but since I have multiple socket files I would like to point them all at the same service template. There is a Service= setting, but it only accepts non-template services and requires Accept=no. Is there any way to specify the service template to invoke from the .socket unit?
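For context, a hedged sketch of one commonly suggested workaround: since an Accept=yes socket named NAME.socket activates the template NAME@.service, several sockets can share one handler by symlinking each expected template name to a single real template. All unit names below are made up, and whether your systemd version accepts template aliases created this way should be verified:

```shell
# Sketch: alias per-socket template names to one shared template.
# shared-handler@.service is the only real unit; alpha.socket and
# bravo.socket would then both resolve to it via the symlinks.
unit_dir=$(mktemp -d)    # stand-in for /etc/systemd/system

# the one real template containing the connection-handling logic
printf '[Service]\nExecStart=/usr/local/bin/handler\n' \
    > "$unit_dir/shared-handler@.service"

# symlink the template name each Accept=yes socket will look for
for sock in alpha bravo; do
    ln -s "shared-handler@.service" "$unit_dir/${sock}@.service"
done

readlink "$unit_dir/alpha@.service"   # prints shared-handler@.service
```

After creating the links under /etc/systemd/system, a `systemctl daemon-reload` would be needed for systemd to pick them up.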
JanKanis (1421 rep)
Feb 12, 2020, 12:10 PM • Last activity: Feb 12, 2020, 03:54 PM
2 votes
1 answers
2462 views
Systemd socket activation stdin
I have to transfer a legacy xinetd config to systemd. The requirement is to open a TCP port and listen for incoming transmissions. An application transfers one file per connection simply by netcatting it to the port. When a transmission is registered, a shell script is invoked, which saves the incoming data to a file by redirecting stdin into said file. This construct worked for years with xinetd. Here is what I have:

[root@localhost ~]# cat /etc/systemd/system/foo.socket
[Unit]
Description=Foo Socket
PartOf=foo.service

[Socket]
ListenStream=127.0.0.1:9999
Accept=yes

[Install]
WantedBy=sockets.target

[root@localhost ~]# cat /etc/systemd/system/foo@.service
[Unit]
Description=Foo Service
After=network.target foo.socket
Requires=foo.socket

[Service]
Type=simple
ExecStart=/usr/local/bin/foo.sh
TimeoutStopSec=5

[Install]
WantedBy=default.target

[root@localhost ~]# cat /usr/local/bin/foo.sh
#!/bin/bash
cat > /tmp/foo.$$

[root@localhost ~]# systemctl start foo.socket
[root@localhost ~]# echo "Hello World" > testfile
[root@localhost ~]# socat -u FILE:testfile TCP:127.0.0.1:9999
[root@localhost ~]# ls -al /tmp/foo.*
-rw-r--r--. 1 root root 0 Nov 7 21:20 /tmp/foo.19820
[root@localhost ~]#

The TCP port is open, the service is invoked, and the shell script is executed, but the output file size is zero. If I stop the socket and use this command instead:

[root@localhost system]# systemctl stop foo.socket
[root@localhost system]# /usr/lib/systemd/systemd-activate -l 127.0.0.1:9999 -a /usr/local/bin/foo.sh &
19833
[root@localhost system]# Listening on 127.0.0.1:9999 as 3.
[root@localhost ~]# socat -u FILE:testfile TCP:127.0.0.1:9999
Communication attempt on fd 3.
Connection from 127.0.0.1:39924 to 127.0.0.1:9999
Spawned /usr/local/bin/foo.sh (/usr/local/bin/foo.sh) as PID 19840
[root@localhost ~]# Child 19840 died with code 0
[root@localhost ~]# ls -al /tmp/foo*
-rw-r--r--. 1 root root 12 Nov 7 21:26 /tmp/foo.19840
[root@localhost ~]# cat /tmp/foo.19840
Hello World
[root@localhost ~]#

it works as expected. What am I missing?
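For context, one frequently cited difference in this situation: a per-connection (Accept=yes) service does not receive the connection on stdin unless the unit asks for it, so `cat > /tmp/foo.$$` reads from /dev/null and writes an empty file. A hedged sketch of the template with the inetd-style wiring added (untested against the poster's setup):

```ini
# foo@.service — sketch: same unit plus inetd-style stdin/stdout wiring
[Unit]
Description=Foo Service
After=network.target foo.socket
Requires=foo.socket

[Service]
Type=simple
ExecStart=/usr/local/bin/foo.sh
# connect the accepted socket to the script's stdin/stdout, as the
# systemd-activate run above appears to do; without these, stdin is
# /dev/null and the redirect produces a zero-byte file
StandardInput=socket
StandardOutput=socket
TimeoutStopSec=5

[Install]
WantedBy=default.target
```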
baneus (51 rep)
Nov 7, 2019, 08:32 PM • Last activity: Nov 8, 2019, 06:46 PM
3 votes
1 answers
866 views
Systemd socket activation: kill bash script when closing socket
Assuming a minimal example like in [this question](/questions/454731), except for another shell script. systemfoo@.service:
[Unit]
Description=Foo Service
After=network.target systemfoo.socket
Requires=systemfoo.socket

[Service]
Type=oneshot
ExecStart=/bin/bash /opt/foo/foo.sh
TimeoutStopSec=5

[Install]
WantedBy=multi-user.target
systemfoo.socket:
[Unit]
Description=Foo Socket
PartOf=systemfoo@.service

[Socket]
ListenStream=127.0.0.1:7780
Accept=Yes

[Install]
WantedBy=sockets.target
/opt/foo/foo.sh:
#!/bin/bash

while true; do
    logger -t FOO "Connection received: $REMOTE_ADDR $REMOTE_PORT"
done
When I connect via
nc 127.0.0.1 7780
the script is invoked correctly. But when I quit nc with CTRL-C, the script runs forever. Is there a mechanism to send a SIGTERM to the script process when the socket is closed (I assume nc closes it when quitting)?
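Not an authoritative answer, but one likely culprit in the script itself: the `while true` loop never reads from the connection, so it can never observe EOF when nc exits. A sketch of a variant that ends on its own when the client disconnects; the handler body is illustrative (echo stands in for the logger call), and it assumes the connection is wired to stdin via StandardInput=socket:

```shell
# Sketch: read the connection until EOF instead of spinning forever.
# With Accept=yes and StandardInput=socket, stdin is the TCP connection,
# so read returns non-zero once the client closes it and the loop ends.
handle_connection() {
    while IFS= read -r line; do
        echo "got: $line"          # stand-in for: logger -t FOO "..."
    done
    echo "client disconnected"
}

# simulate a client sending one line and then closing the connection
printf 'ping\n' | handle_connection
```

This avoids needing an explicit SIGTERM at all: the process exits normally when its input side goes away.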
gerion (133 rep)
Jun 4, 2019, 10:03 PM • Last activity: Jun 5, 2019, 06:46 AM
8 votes
2 answers
5015 views
systemd "socket activation" vs xinetd
I use xinetd and it works for my purposes. However, I recently discovered that systemd has something built in called "socket activation". The two seem very similar, but systemd is "official" and seems like the better choice. *However, before using it: are they really the same? Are there differences I should be aware of?* For example, I want to start some dockerised services only when they are first requested; my first thought would be to use xinetd. But is socket activation better / faster / more stable / whatever?
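For orientation, the two cover much of the same ground (listen on a port, start the handler on first demand); socket activation just splits an xinetd entry into two units. A hypothetical translation of a simple xinetd-style service, with placeholder names and paths:

```ini
# myapp.socket — the on-demand listener, roughly the xinetd "port =" line
[Socket]
ListenStream=0.0.0.0:8080
# Accept=no (the default): the service is started once and inherits the
# listening socket; Accept=yes would instead spawn one instance per
# connection, closest to classic xinetd behavior
Accept=no

[Install]
WantedBy=sockets.target

# myapp.service — started on the first connection; must be written to
# take over the passed listening socket (e.g. via sd_listen_fds)
[Service]
ExecStart=/usr/local/bin/myapp
```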
lonix (1965 rep)
Sep 23, 2018, 01:39 PM • Last activity: Sep 23, 2018, 02:05 PM
3 votes
1 answers
675 views
Systemd socket activation proxy?
I have systemd set up, and it runs:

- nginx.service on :80
- wikiname.socket on :8080
- wikiname.service on :9094

So here is what I do:

1. I check if wikiname.service is running... and it is not.
2. I start the wikiname.socket unit.
3. I visit http://wikiserver:8080/ ; my browser does nothing...
4. I check again to see if wikiname.service is running, and now it is: it was started by my browser activating the socket!

This is great and all, but how can I have wikiname.socket on the same socket as wikiname.service? P.S. Before I found out about socket activation in systemd, I was starting the wikiname.service process manually, and then when the correct path was called on nginx :80 the request would be proxied to wikiname.service. But I'd rather use socket activation, so how do I keep both wikiname.socket and wikiname.service on the same port?
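For context, systemd ships a small helper, systemd-socket-proxyd, built for exactly this front-socket-to-backend pattern; the activatable socket stays on :8080 while the app keeps its own port. A hypothetical sketch with unit names invented here (ports follow the question):

```ini
# wikiname-proxy.socket — hypothetical activatable front-end on :8080
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target

# wikiname-proxy.service — pulls in the app, then forwards traffic to it
[Unit]
Requires=wikiname.service
After=wikiname.service

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:9094
```

Putting socket and service literally on the same port instead would require the app itself to accept the passed listening fd (sd_listen_fds) rather than binding its own.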
leeand00 (4937 rep)
Nov 24, 2016, 08:25 PM • Last activity: Aug 16, 2017, 02:18 PM
Showing page 1 of 20 total questions