Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
1
answers
5308
views
How to list all object paths under a dbus service using only the dbus command-line utility?
How can I list the object paths under a D-Bus service using ONLY the `dbus-send` command-line utility?
For now, I can only list services:
dbus-send --system --dest=org.freedesktop.DBus --type=method_call --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames
or interfaces:
dbus-send --system --dest=org.freedesktop.DBus --type=method_call --print-reply /org/freedesktop/DBus org.freedesktop.DBus.Introspectable.Introspect
This question is very similar to:
https://unix.stackexchange.com/questions/203410/how-to-list-all-object-paths-under-a-dbus-service
but the answers there require utilities that are not available to me.
I use a closed embedded system and I cannot install anything, so I cannot use any of the following utilities:
- qdbusviewer
- qdbus
- d-feet
- python
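For the record, one workable approach is recursive introspection: each object's Introspect XML lists its children as `<node name="..."/>` elements, so the tree can be walked with `dbus-send` plus grep/sed alone. A minimal sketch, assuming the service implements Introspectable on every node (the service name `org.freedesktop.UPower` is just an example):

```bash
#!/bin/bash
# Walk a service's object tree using nothing but dbus-send and grep/sed.
# Child objects appear as <node name="..."/> entries in the Introspect XML.
list_paths() {
    local service="$1" path="$2"
    echo "$path"
    dbus-send --system --dest="$service" --type=method_call --print-reply \
        "$path" org.freedesktop.DBus.Introspectable.Introspect 2>/dev/null |
      grep -o '<node name="[^"]*"' |
      sed 's/<node name="//;s/"$//' |
      while read -r child; do
          if [ "$path" = / ]; then
              list_paths "$service" "/$child"
          else
              list_paths "$service" "$path/$child"
          fi
      done
}

list_paths org.freedesktop.UPower /   # example service
```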
Luis
(21 rep)
May 12, 2022, 07:31 AM
• Last activity: Jun 9, 2025, 06:04 AM
0
votes
2
answers
115
views
Split named pipe into two ends: One read only, one write only
I'm trying to set up some inter-process communication, and as a first idea I want to use a named pipe to forward messages from one process to the other. The thing I'm getting stuck on is ensuring that the first process only ever writes to the pipe, and the second process only ever reads from the pipe. Now, normally I could just write the program for the first process to only ever write to the pipe, and vice versa for the second process, but I'm somewhat paranoid and want the program to throw an error if it ever even tries to open the pipe in the wrong mode.
Essentially, what I'm trying to do is to "split" the pipe in half, so that the write-end is with my first program:
program1/
├── write-end-of-pipe
└── program1.c
and so that my read-end is with my second program:
program2/
├── read-end-of-pipe
└── program2.c
Ideally there could be actual files on the file system for the read/write ends of the pipe.
-----
Right now, my best solution is the following:
1. Create a named pipe in `program1` and change the file permissions to be write-only for this user:
cd ~/program1
mkfifo write-end-of-pipe
chmod 200 write-end-of-pipe
2. Create a named pipe in `program2` and change the file permissions to be read-only for this user:
cd ~/program2
mkfifo read-end-of-pipe
chmod 400 read-end-of-pipe
3. Create a new user for forwarding, and have it run a daemon that forwards from the first named pipe to the second:
(Assuming the user is already created:)
su - forwarding-daemon-user
chmod 400 ~/program1/write-end-of-pipe
chmod 200 ~/program2/read-end-of-pipe
cat write-end-of-pipe > read-end-of-pipe &
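For what it's worth, a slightly more robust forwarding loop might look like the sketch below (same two-pipe layout and paths as above; the only addition is reopening both ends whenever either side closes):

```bash
#!/bin/bash
# Hypothetical forwarding daemon: relay program1's write-only FIFO into
# program2's read-only FIFO, restarting whenever a peer disappears
# (cat exits on EOF once the writer goes away).
WRITE_END=~/program1/write-end-of-pipe
READ_END=~/program2/read-end-of-pipe

while true; do
    # Each open blocks until the matching peer (writer/reader) appears.
    cat "$WRITE_END" > "$READ_END"
    sleep 0.1   # avoid a busy loop if an open keeps failing
done
```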
-----
The problem I have with this is twofold:
1. It requires two pipes instead of just one.
2. It requires a daemon. (My `cat` command earlier is not robust enough on its own, though the reopening loop sketched above helps; it gets the point across.)
Is there a better way to create two files for a pipe, one of which can only be written to, and the other only read?
-----
Edit 1: For some context on what I want to use this inter-process communication for: I think it would be cool if I could have a "screen-share" where one process written in one language creates a screen (essentially a video stream), and another process in an entirely different language displays the screen it receives. (Example use case: One language makes it easy to render a game, but you want to write your GUI in another language.) The reason for my paranoia about reading/writing is that I want to be able to swap out different programs that create screens or view screens, and I don't want to worry about connecting two screen creators together (if I provide a read end and a write end, then one program gets the wrong end if I make such a mistake, and the error is caught immediately instead of later when a pipe overflows) or otherwise broken programs causing silent problems.
Edit 2: I feel like I should give an explanation why having two separate users for the reading/writing is not an ideal solution for me. The main reason is that it is a significant step on the path towards containerization. Once you say, "I'll just control permissions via separate users/namespaces," you need to figure out how you are going to *set up those users/namespaces*, and that essentially means putting the program in a container. I want to avoid this because it's not as flexible; suddenly every program now needs a standardized way of being invoked, the named pipe connecting the two programs needs to be created by a special user that can peer into these containers, etc.
Hence, I would prefer to avoid multiple users and similar techniques that require wrapping the program to be called. A solution that *would* work well is if there were some way to "mount" the named pipe so that it has two different file permissions at the two different locations it is written to and read from, but I don't think this is possible on Linux.
Joseph Camacho
(109 rep)
Feb 28, 2025, 06:16 AM
• Last activity: Mar 10, 2025, 01:52 PM
29
votes
8
answers
37360
views
How to list all object paths under a dbus service?
This is a follow-up question to https://unix.stackexchange.com/questions/46301/a-list-of-available-dbus-services.
The following Python code will list all available D-Bus services.
import dbus
for service in dbus.SystemBus().list_names():
    print(service)
How do we list the object paths under the services in Python? It is OK if the answer does not involve Python, although that is preferred.
I am using Ubuntu 14.04.
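One non-Python route worth noting (assuming a systemd-based system, which stock Ubuntu 14.04 is not): systemd's `busctl` can print a service's whole object tree in one command.

```bash
# List every object path exported by a service (example service name):
busctl tree org.freedesktop.UPower
```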
user768421
(483 rep)
May 14, 2015, 03:04 PM
• Last activity: Jan 15, 2025, 05:19 PM
0
votes
1
answers
509
views
Communicate with UNIX sockets opened by dbus-daemon
I'm trying to learn how to interact with message buses under Linux, using shell commands that don't involve the utilities packaged under dbus-*. In this instance, I want to understand how the socket opened by dbus-daemon is interpreted, and how the socket behaves in response to user input (e.g., send commands and receive a response).
For the "reference implementation", namely
dbus-daemon
, looking at its user configuration file (session.conf
) under /usr/share/dbus-1
, the daemon opens an UNIX domain socket and listens on in based on transport names and subsequent options in the ` directive. By default, it is set to the transport name
unix with a
tmpdir set to
/tmp. Modifying the line with
path in place of
tmpdir to where the socket must reside in disk, a socket file by the respective path is created by running
dbus-deamon` as:
$ sudo dbus-daemon --session
By querying `lsof` to list open sockets:
sudo lsof -c dbus-daemon -aU -a +E
it indeed lists the path that was written in the configuration file. Now, I want to interact with the socket by sending it some data and inspecting its response. So far, I've tried netcat and socat.
$ echo | nc -U /tmp/socket -v
$ echo | socat - /tmp/socket
2024/12/12 22:55:47 socat E read(5, 0x61fb9e544000, 8192): Connection reset by peer
While netcat returns nothing, I get a "connection reset by peer" error from socat when the piped input is a string of words. If instead the input is a newline or blank, socat exits successfully (though with no output). This makes me suspect the socket expects a particular input format.
I also tried configuring the daemon to open a TCP connection on the loopback address, with a port randomized by the kernel, by setting `port=0` (as mentioned in the manpage). Going by the output of lsof, I resolved the port it's listening on.
$ echo | nc localhost 39231
nc: connect to localhost (::1) port 39231 (tcp) failed: Connection refused
Connection to localhost (127.0.0.1) 39231 port [tcp/*] succeeded!
Returns with connection refused.
It would be of help if anyone can explain:
1. The command interface for interacting with listening sockets
2. The right approach to send data to the socket
3. If it's possible to get a reply from the socket.
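A hint about the observed resets: the D-Bus wire protocol requires an SASL handshake (a single NUL byte followed by an `AUTH` line) before any message is accepted, so arbitrary text gets the connection dropped. A hedged probe, assuming the socket was configured at `/tmp/socket`:

```bash
# D-Bus SASL handshake: NUL byte, then AUTH EXTERNAL <hex-encoded uid>.
# "31303030" is uid 1000 in ASCII hex; compute yours with:
#   printf %s "$(id -u)" | xxd -p
{
    printf '\0'
    printf 'AUTH EXTERNAL 31303030\r\n'
    sleep 1     # give the daemon time to answer
} | socat - UNIX-CONNECT:/tmp/socket
# A successful reply looks like: OK <server-guid>
```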
PatXio
(1 rep)
Dec 12, 2024, 07:26 PM
• Last activity: Dec 13, 2024, 06:16 AM
1
votes
0
answers
34
views
What is needed to enable RPMsg IPC on SoC
I am working with an Intel SoC with a Hard Processor System (HPS), an Arm 64 core, and an FPGA fabric with a Nios soft processor. I would like to implement message passing between these two processors using RPMsg. Intel has a hardware mailbox IP which we have connected appropriately, and dual-port on-chip RAM has been instantiated in the hardware design and connected to both the HPS and the Nios. My understanding is that I need to develop a *remoteproc* driver which incorporates the mailbox notification setup and, more importantly, creates the resource table and instantiates the appropriate Virtio structures to enable the IPC. Overall, I would like to:
- Statically allocate the necessary virtio resources (vrings, message buffers) in the shared OCRAM.
- Incorporate the platform specific mailbox driver (drivers/mailbox/mailbox-altera.c).
- Create an interface for RPMsg (probably a char device, expose a device file to userspace for reading and writing).
I have been digging in the kernel source, and I can't figure out what parts of this framework are platform agnostic and available for my use, and which components are board specific. My questions are:
1. Since I don't actually need to perform any life-cycle management of the remote processor (Nios), I just need remoteproc to handle the virtio resources. Which `rproc_ops` do I need to implement?
2. How would I go about allocating my virtio resources statically in specific physical memory regions within my remoteproc driver? Do I need to make carveouts in the resource table, or is there another way to just force the virtio resources to be in the OCRAM? How is this communicated between the processors?
3. Assuming I have my remoteproc set up sufficiently, can I use /drivers/rpmsg/rpmsg_char.c off the shelf? Or do I need to create a different RPMsg client?
4. In general, what kernel source files in this whole framework are platform agnostic (and available for me to use)? I can't tell.
The Nios will be running an RTOS with OpenAMP or rpmsg-lite, but I'll cross that bridge after I deal with the kernel side.
Any guidance would be greatly appreciated!
user667370
(11 rep)
Oct 30, 2024, 09:35 PM
7
votes
1
answers
697
views
FIFO capture using cat not working as intended?
Hi, I am trying to use a Unix FIFO to communicate between a Python script and a shell script. The intention is for the shell script to capture all output of the Python script. So far I have the following:
#!/bin/bash
# Test IPC using FIFOs.
## Create a named pipe (FIFO).
mkfifo ./myfifo
## Launch the Python script asynchronously and re-direct its output to the FIFO.
python3 helloworld.py > ./myfifo &
PID_PY=$!
echo "Python script (PID=$PID_PY) launched."
## Read from the FIFO using cat asynchronously.
## Note that running asynchronously using & runs the program (in this case cat)
## in a child shell ("subshell"), so I will collect the output in a file.
echo "Reading FIFO."
>output.log cat ./myfifo &
PID_CAT=$!
## Sleep for 10 seconds.
sleep 10
## Kill the Python script.
kill -15 $PID_PY && echo "Python script (PID=$PID_PY) killed."
## Kill the cat!
kill -15 $PID_CAT
## Remove the pipe when done.
rm -fv ./myfifo
## Check for the existence of the output log file and print it.
[[ -f output.log ]] && cat output.log || echo "No logfile found!." 1>&2
However, when I open the log file `output.log`, it is empty, which is why the last command prints nothing. Is there something I am doing wrong? I understand the above might be easily accomplished using an anonymous pipe like so: `python3 helloworld.py | cat >output.log` (or even `python3 helloworld.py > output.log` for that matter), but my intention is to understand the use of named pipes in Unix/Linux.
The Python script just prints something to `stdout` every second:
if __name__ == "__main__":
    import time
    try:
        while True:
            print("Hello, World")
            time.sleep(1)
    except KeyboardInterrupt:
        print('Exiting.')
    finally:
        pass
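A likely explanation for the empty log (an assumption, but consistent with the symptoms): Python block-buffers stdout when it is not a terminal, so nothing reaches the FIFO before the script is killed. A minimal fix sketch:

```bash
## Run Python unbuffered so each print() reaches the FIFO immediately:
python3 -u helloworld.py > ./myfifo &
## Equivalent in-script fixes: print("Hello, World", flush=True),
## or exporting PYTHONUNBUFFERED=1 before launching.
```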
First User
(345 rep)
Sep 25, 2024, 08:58 AM
• Last activity: Sep 25, 2024, 09:23 AM
2
votes
1
answers
422
views
Can a FIFO (or something else) not block on the writer's access, and instead just drop data?
A FIFO is problematic in use because both the reader and the writer have to open it; if one of them is late, the other is blocked inside the operating system.
I have to implement a publishing mechanism: a program publishes its logs, and if anyone "cares" to listen, i.e. opens the publishing channel, they receive the messages. If no one "cares", the messages vanish; no problem. Supporting no more than a single listener is also fine. What can I use?
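In C, the cleaner primitive is `open(fifo, O_WRONLY|O_NONBLOCK)`, which fails immediately with ENXIO when no reader is present. From the shell, `timeout` can approximate the same drop behaviour; a sketch (the path and the 0.1 s grace period are arbitrary choices):

```bash
#!/bin/bash
# Publish a line to a FIFO, dropping it when no reader appears in time.
FIFO=/tmp/log.fifo               # example path
mkfifo "$FIFO" 2>/dev/null || true

publish() {
    # The redirection's open() blocks until a reader opens the FIFO;
    # timeout kills the helper after 0.1 s, silently dropping the line.
    timeout 0.1 sh -c 'printf "%s\n" "$1" > "$2"' _ "$1" "$FIFO" \
        2>/dev/null || true
}

publish "service started"        # vanishes if nobody is listening
```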
Digger
(23 rep)
May 20, 2018, 12:56 PM
• Last activity: Jul 3, 2024, 04:39 PM
0
votes
1
answers
200
views
How to increase kernel parameter (`msgmnb`) for a systemd-nspawn container
I have a `systemd-nspawn` container in which I am trying to change the kernel parameter for `msgmnb`. When I try to change the kernel parameter by directly writing to the `/proc` filesystem or using `sysctl` inside the systemd-nspawn container, I get an error that the `/proc` file system is read-only.
[From the arch wiki I see this relevant documentation](https://wiki.archlinux.org/title/systemd-nspawn#:~:text=systemd%2Dnspawn%20limits%20access%20to,nodes%20may%20not%20be%20created.)
systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys, /proc/sys or /sys/fs/selinux. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container.
I thought the container would inherit some properties of `/proc` from the host, including the kernel parameter value for `msgmnb`, but this does not appear to be the case, as the host and container have different values for `msgmnb`.
The kernel parameter value in the container:
cat /proc/sys/kernel/msgmnb
16384
Writing to the proc filesystem inside the container:
$ bash -c 'echo 2621440 > /proc/sys/kernel/msgmnb'
bash: /proc/sys/kernel/msgmnb: Read-only file system
For completeness, I also tried sysctl in the container:
# sysctl -w kernel.msgmnb=2621440
sysctl: setting key "kernel.msgmnb": Read-only file system
I thought this value would be inherited from the host system. I set the value on the host, rebooted, and re-created my container. The container (even a new one) maintains the value of `16384`.
# On the host
$ cat /proc/sys/kernel/msgmnb
2621440
I've also tried using the unprivileged `-U` flag when booting the systemd-nspawn container, but I get the same results.
I've also tried editing `/etc/sysctl.conf` in the container tree to include this line before booting the container:
kernel.msgmnb=2621440
I also looked into https://man7.org/linux/man-pages/man7/capabilities.7.html and noticed `CAP_SYS_RESOURCE`, which has a line that reads:
CAP_SYS_RESOURCE
...
raise msg_qbytes limit for a System V message queue
above the limit in /proc/sys/kernel/msgmnb (see
msgop(2) and msgctl(2));
Using `sudo systemd-nspawn --capability=CAP_SYS_RESOURCE -D /path/to/container`, and then inside the container, when I use `msgctl` with `IPC_SET` and pass `msqid_ds->msg_qbytes` with a value higher than what is in `/proc/sys/kernel/msgmnb`, the syscall returns an error code. It seemed like passing `CAP_SYS_RESOURCE` should work here?
Nothing I've tried here has changed the value for `msgmnb` in the container. I can't seem to find documentation on how to achieve my goal.
I'd appreciate any help - thank you!
EDIT:
Trying to determine if the process calling `msgctl` has the capability, here is what I found:
$ cat /proc/6211/status | grep -i Cap
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 00000000fdecafff
CapAmb: 0000000000000000
$ capsh --decode=00000000fdecafff
0x00000000fdecafff=cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_raw,cap_ipc_owner,cap_sys_chroot,cap_sys_ptrace,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap
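One avenue that may be worth testing (hedged; the container name `mycontainer` is an assumption): `kernel.msgmnb` is per-IPC-namespace, and systemd-nspawn only makes the container's view of `/proc/sys` read-only. From the host, where `/proc` is writable, one can enter the container's IPC namespace and set the value there:

```bash
# Find the container's leader PID, then write the sysctl from the host
# while inside the container's IPC namespace (the msgmnb handler uses
# the calling task's IPC namespace, not the proc mount being touched).
LEADER=$(machinectl show mycontainer -p Leader --value)
sudo nsenter --target "$LEADER" --ipc sysctl -w kernel.msgmnb=2621440
```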
Yeow_Meng
(419 rep)
Jun 5, 2024, 02:51 PM
• Last activity: Jun 5, 2024, 07:10 PM
5
votes
2
answers
9441
views
Creating a terminal device for interprocess communication
I'd like to know how to create a terminal device to simulate a piece of hardware connected through a serial port: basically, a tty device with a certain baud rate that can be read from and written to between two processes. From what I understand, a pseudo-terminal is what I'm looking for, and `makedev` can apparently make one.
I've also found the following set of instructions:
su to root
cd /dev
mkdir pty
mknod pty/m0 c 2 0
mknod pty/s0 c 3 0
ln -s pty/m0 ttyp0
ln -s pty/s0 ptyp0
chmod a+w pty/m0 pty/s0
Is there a better way of making a pseudo-terminal, or is this pretty much the standard way of making one in the shell?
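For comparison, the `mknod` recipe above is the old BSD-style pty scheme; on modern systems pseudo-terminals come from `/dev/ptmx` and devpts, and a linked pair simulating a null-modem serial cable can be had in one line with `socat`:

```bash
# socat allocates two ptys and relays between them; with -d -d it
# prints the slave names (e.g. /dev/pts/3 and /dev/pts/4), which the
# two processes then open like serial ports.
socat -d -d pty,raw,echo=0 pty,raw,echo=0
```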
sj755
(1135 rep)
May 30, 2013, 04:20 PM
• Last activity: May 6, 2024, 08:18 PM
0
votes
1
answers
174
views
What is the a{sv}(oayays)b D-Bus signature?
I'm trying to invoke the `CreateItem` method on the `org.freedesktop.secrets` D-Bus service.
busctl --user call org.freedesktop.secrets /org/freedesktop/secrets/collection/login org.freedesktop.Secret.Collection CreateItem "a{sv}(oayays)b"
How can I figure out what kind of arguments to pass for the `a{sv}(oayays)b` signature?
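Decoded with the D-Bus type system: `a{sv}` is an array of string-to-variant dict entries (the item properties), `(oayays)` is a struct of an object path, two byte arrays, and a string (in the Secret Service API: session path, parameters, secret value, content type), and `b` is a boolean (replace). A hedged sketch of how `busctl` flattens that (the session path and secret bytes are made up; a real call needs `OpenSession` first):

```bash
busctl --user call org.freedesktop.secrets \
    /org/freedesktop/secrets/collection/login \
    org.freedesktop.Secret.Collection CreateItem \
    'a{sv}(oayays)b' \
    1 org.freedesktop.Secret.Item.Label s MySecret \
    /org/freedesktop/secrets/session/s0 \
    0 \
    5 104 101 108 108 111 \
    text/plain \
    false
# a{sv}: 1 entry -> key, variant signature, value
# (oayays): session path, empty params (0), 5 bytes "hello", MIME type
# b: false = do not replace an existing item
```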
a001
(3 rep)
Jan 4, 2024, 06:51 AM
• Last activity: Jan 4, 2024, 04:42 PM
1
votes
4
answers
10497
views
What happens after exec() in the ls command: does the parent process print the output to the console, or the child?
I have a simple doubt about the execution of the command `ls`. From the research I have done on the internet, I understand the points below.
1. When we type the `ls` command, the shell interprets that command.
2. Then the shell process forks to create the child process, and the parent (shell) executes the `wait()` system call, effectively putting itself to sleep until the child exits.
3. The child process inherits all the open file descriptors and the environment.
4. The child process executes an `exec()` of the `ls` program, causing the `ls` binary to be loaded from disk (the filesystem) and executed in the same process.
5. When the `ls` program runs to completion, it calls `exit()`, and the kernel sends a signal to its parent indicating the child has terminated.
My doubt starts here: as soon as `ls` finishes its task, does it send the result back to the parent process, or does it display the output to the screen itself?
If it sends the output back to the parent, is it using `pipe()` implicitly?
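The short answer, verifiable from the shell: `ls` inherits the shell's file descriptor 1 (the terminal) across `fork()`/`exec()` and writes to it directly; nothing is piped back to the parent. A quick check:

```bash
# fd 1 of a child process points straight at the terminal device:
ls -l /proc/self/fd/1        # shows something like ... 1 -> /dev/pts/0

# And the write(2) calls come from the ls process itself:
strace -e trace=write -o /tmp/ls.trace ls
grep 'write(1' /tmp/ls.trace
```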
Subi Suresh
(513 rep)
Mar 14, 2013, 06:28 PM
• Last activity: Nov 28, 2023, 04:46 PM
109
votes
6
answers
118608
views
A list of available D-Bus services
Is there such a thing as a list of available D-Bus services?
I've stumbled upon a few, like those provided by NetworkManager, Rhythmbox, Skype, HAL.
I wonder if I can find a rather complete list of provided services/interfaces.
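In practice there are two complementary lists: names currently registered on the bus, and activatable services declared by `.service` files (under `/usr/share/dbus-1/services` for the session bus and `/usr/share/dbus-1/system-services` for the system bus). Both can also be queried at runtime; a sketch:

```bash
# Names currently registered on the session bus:
dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
    --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListNames

# Services the bus knows how to start on demand:
dbus-send --session --dest=org.freedesktop.DBus --type=method_call \
    --print-reply /org/freedesktop/DBus org.freedesktop.DBus.ListActivatableNames
```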
madfriend
(1193 rep)
Aug 25, 2012, 12:06 PM
• Last activity: Nov 11, 2023, 02:41 PM
-2
votes
1
answers
323
views
Are there any plans for Linux to add higher-level things like Windows' WaitForMultipleObjects?
WaitForMultipleObjects is one of several Windows kernel functions that can suspend and synchronize a calling thread with other threads until resources or etc are available, similar to flock in Linux, but handles everything but file locking.
WaitForMultipleObjects supports an array of events (can be a mixture of change notifications, console input, events, memory notifications, mutexes, processes, semaphores, threads, and timers), a timeout or polling option, and an AND/OR option and reports which fired first, and it can be used independently by multiple threads at once without knowledge of each other.
(I was looking for an IPC lock with timeout, and things like using SIGALRM with flock were suggested, which I can't risk using because SIGALRM might be in use in other multi-threaded libraries I don't have source to. I settled on polling with LOCK_NB and tiny sleeps, and I am pretty sure I am not losing any "fair lock" benefits.)
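Linux's general answer to WaitForMultipleObjects is to expose waitable objects as file descriptors (eventfd, timerfd, signalfd, pidfd) and multiplex them with poll/epoll. For the specific timeout sub-problem, util-linux `flock(1)` already offers a lock-with-timeout without touching your process's SIGALRM, since any timer it uses lives in the separate flock process; a sketch:

```bash
# Wait at most 10 s for the lock, then run the command; by default,
# exit status 1 signals a timeout. No signals land in *your* code.
flock -w 10 /tmp/my.lock -c 'echo got the lock; sleep 1' ||
    echo "timed out waiting for the lock"
```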
Codemeister
(1 rep)
Jan 9, 2022, 06:13 PM
• Last activity: Aug 19, 2023, 09:41 AM
1
votes
1
answers
1312
views
strace for troubleshooting inter process communication
I have output captured by the following command:
strace -f -e trace=process,socketpair,open,close,dup,dup2,read,write -o rsync.log rsync -avcz --progress src/ dst/
it is a bit long so I've uploaded it here. I understand the basic format of `strace` output; for example, the following line:
1399 open("/lib/x86_64-linux-gnu/libpopt.so.0", O_RDONLY|O_CLOEXEC) = 3
Means that:
1. `1399` is the PID of the process
2. `open(const char *pathname, int flags);` is the system call with its particular arguments (taken from `man 2 open`)
3. `3` is the return value, a file descriptor in this particular case (taken from `man 2 open`)
According to this thread:
> `rsync` spawns two processes/threads to do the copy, and there's one stream of data between the processes, and another from the receiving process to the target file.
>
> Using something like `strace -e trace=process,socketpair,open,read,write` would show some threads spawned off, the socket pair being created between them, and different threads opening the input and output files.
Can I somehow parse the `strace` output to confirm the statements from the mentioned thread and see what happens under the hood, even if I'm not very familiar with inter-process communication? I'm especially interested in the data passed between processes/threads (how much data was passed from process1 to process2? where did process2 write the received data?)
I've also seen lines like these in the log, but I do not know how to correctly interpret them:
1399 ) = 0
1400 ) = 0
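Those bare `) = 0` fragments are the tails of calls that strace split into `<... unfinished ...>` and `<... resumed>` halves when another traced process produced an event in between. For the how-much-data question, a rough sketch that totals `write()` return values per PID (it ignores resumed halves, so treat the numbers as approximate):

```bash
# Sum bytes successfully written by each PID in rsync.log.
# Lines look like: 1399 write(3, "..."..., 4096) = 4096
awk '/ write\(/ {
         n = split($0, a, "= ")
         if (a[n] + 0 > 0) total[$1] += a[n]
     }
     END { for (pid in total) print pid, total[pid], "bytes written" }' rsync.log
```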
Wakan Tanka
(825 rep)
Jun 22, 2016, 01:57 PM
• Last activity: Jun 13, 2023, 01:08 PM
0
votes
2
answers
1843
views
Check if a process is alive from its PID while handling recycled PIDs
From what I've seen online, you call the kill function in C++ to see if a process is alive. The issue with that is that PIDs get recycled, and the PID you're looking for may no longer refer to the same process. I have a program that has two processes that are not children of each other. The only way to communicate with them is IPC. I would like my host process to shut down when the client process shuts down. In order to do that, I have to know when the client process is no longer alive.
In Windows, there is what's called a process handle, which keeps the PID from being recycled until the process that created the handle closes it. I am wondering how to achieve this on macOS/Linux (POSIX) systems.
The problematic code, since PIDs are recycled:
if (0 == kill(pid, 0))
{
// Process exists.
}
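The modern Linux answer is a pidfd (`pidfd_open(2)`, Linux 5.3+), which behaves much like a Windows process handle. A Linux-only shell sketch of the classic fallback, pairing the PID with its start time (field 22 of `/proc/PID/stat`; the helper name is hypothetical, and naive field splitting breaks if the process name contains spaces):

```bash
#!/bin/bash
# Treat (PID, starttime) as the process identity instead of the bare PID.
pid_starttime() { awk '{print $22}' "/proc/$1/stat" 2>/dev/null; }

PID=$1
REMEMBERED=$(pid_starttime "$PID")   # record when you first learn the PID

sleep 5                              # ...time passes...

if [ -n "$REMEMBERED" ] && [ "$(pid_starttime "$PID")" = "$REMEMBERED" ]; then
    echo "PID $PID is still the original process"
else
    echo "PID $PID exited (or was recycled)"
fi
```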
joseph lake
(11 rep)
Jun 10, 2023, 09:03 AM
• Last activity: Jun 13, 2023, 04:09 AM
0
votes
1
answers
517
views
How to guarantee that only a specific process reads from a named pipe?
Suppose that, at time (1), I create a named pipe using Python with the goal that eventually this Python process would write something to that named pipe. Why? Because, at time (2), there is another process that is expected to read from that named pipe.
So, basically, it's IPC via named pipes. Why is this neat? Because the pipe looks like a file, so the other process, which can only read files, can be communicated with via this named-pipe mechanism as a convenient IPC, without needing to rewrite that other process.
**But there is a problem:** suppose that between time (1) and time (2), an evil process starts reading from the named pipe before the intended process does. This way, my Python script may end up sending data to an unintended process. In my specific risk model I am not concerned about the hijacker writing to the pipe (only about it reading from the pipe before the intended process).
**Question:** is there any mechanism to ensure that no process other than the intended one reads from the pipe?
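Two partial measures from the shell, sketched below; neither is airtight against a process running as the same UID, where only a dedicated user, or a different primitive such as a UNIX socket with peer-credential (`SO_PEERCRED`) checks, really helps:

```bash
# 1. Restrict who can open the FIFO at all: only the owner may read/write.
mkfifo -m 600 /tmp/secret.fifo      # example path

# 2. Audit who currently has it open before or while writing:
fuser -v /tmp/secret.fifo           # lists PIDs holding the FIFO open
```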
caveman
(173 rep)
Aug 12, 2020, 05:06 AM
• Last activity: Apr 19, 2023, 01:10 AM
0
votes
1
answers
1310
views
How to trace a continuously running process?
So I wanted to know how files like `.xinitrc`, `.xprofile`, and `.zprofile` are opened by `zsh`, and exactly in which order. I decided to run `strace` on the `zsh` process with a `grep` command to see how the `open` system call is invoked, so that I can eventually determine the order in which these files are loaded.
My command:
strace zsh | grep open
but as soon as I ran this it kept showing me the output of `zsh`, and the `grep` is not filtering anything. When I end the process with `ctrl+d`, nothing happens either.
So is there any way to get this `grep` output for this kind of process?
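The immediate catch: `strace` writes its trace to stderr, while the pipe only carries stdout, so `grep` never sees any syscalls. A sketch of a working variant (on modern kernels the call is usually `openat`, not `open`):

```bash
# Merge stderr into the pipe and match both open flavours; -i makes
# zsh read its startup files, and `exit` ends the session cleanly.
strace -f zsh -ic exit 2>&1 | grep -E 'open(at)?\(.*(zprofile|zshrc|zshenv)'
```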
Visrut
(137 rep)
Mar 8, 2023, 12:17 PM
• Last activity: Mar 8, 2023, 12:51 PM
0
votes
1
answers
123
views
How does local socket IPC work on a multi-CPU system?
There is the Supermicro X10DAi motherboard, and the manual is here. On page 1-11 you can see that each CPU has its own RAM.
Let's say `program A` is offering an API through a local socket `/var/run/socketapi`. This program is started on CPU 1.
Then there is `program B` connecting to this socket, and it's started on CPU 2.
When `program B` writes a command to the socket, the kernel normally copies the data from the memory space of `program B` to that of `program A`.
But because the programs run on different CPUs and the memory is not shared between the CPUs, there is a problem.
How is this solved under recent Linux? Maybe the whole memory of CPU 1 is memory mapped to CPU 2 using the QPI interface shown in the manual?
Or perhaps the program IPC won't work and an error occurs?
Please provide some reference to Linux source code or documentation.
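The key fact: on this class of machine the two sockets form one cache-coherent NUMA system, so CPU 2 can address CPU 1's RAM transparently over QPI, just with higher latency, and kernel copies for socket IPC simply work (see e.g. Documentation/vm/numa in the kernel tree). The topology is visible from userspace:

```bash
# Show NUMA nodes, which CPUs belong to each, memory per node, and the
# inter-node access-cost matrix (the QPI hop shows up as a higher cost):
numactl --hardware
numastat            # per-node allocation/miss counters
```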
zomega
(1012 rep)
Feb 25, 2023, 02:13 PM
• Last activity: Feb 25, 2023, 03:11 PM
0
votes
1
answers
1243
views
How to adjust the output of dbus-monitor?
I'm building a microservices application using the GNU tools and bash, and I decided to use `dbus-monitor` and `dbus-send` for IPC between services.
The problem is that it's hard to make use of the messages received by `dbus-monitor`, since it splits metadata and payload across different lines.
If I instantiate a listener with
dbus-monitor --system "interface='org.foo.bar',member='test'" \
| while read a; do
echo got message $a
done
and communicate to it with
dbus-send --system --type=signal / org.foo.bar.test string:"hello world"
The output comes as
got line signal time=1676042614.782238 path=; interface=org.foo.bar; member=test
got line string "hello world"
even though `string "hello world"` is the payload of the message, it was written on a different line.
I tried messing with IFS, but no success. I also tried changing parameters to `dbus-monitor`, but the `--profile` parameter omits the payload. So I don't know how to solve this.
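One workaround sketch: treat each header line (`signal ...`, `method call ...`, and so on) as the start of a record and glue the following lines onto it before processing (note a record is only flushed when the next one begins):

```bash
dbus-monitor --system "interface='org.foo.bar',member='test'" |
while IFS= read -r line; do
    case $line in
        signal\ *|method\ call*|method\ return*|error\ *)
            # New record begins: emit the previous one, if any.
            [ -n "$record" ] && echo "got message: $record"
            record=$line
            ;;
        *)
            record="$record | $line"
            ;;
    esac
done
```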
CyberWizard
(1 rep)
Feb 10, 2023, 03:37 PM
• Last activity: Feb 12, 2023, 12:32 PM
2
votes
1
answers
204
views
What IPC is used between an application and a library in Linux?
When you have a Linux application that depends on a library (dynamically-linked), how does the application communicate with the library? What inter-process communication method is used?
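The premise hides the answer: a dynamically linked library is mapped into the application's own address space by the loader, so a call into it is an ordinary function call within one process, and no IPC mechanism is involved. This is easy to see from the shell:

```bash
# Libraries the dynamic loader will map into the process:
ldd /bin/ls

# And inside a running process, the library is just part of its own
# memory map (here: grep inspecting itself via /proc/self):
grep libc /proc/self/maps
```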
Noob_Guy
(224 rep)
Jan 15, 2023, 09:05 AM
• Last activity: Jan 15, 2023, 09:21 AM
Showing page 1 of 20 total questions