Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
8957
views
rhel 7 setting stack size to unlimited
I have some old code that needs the stack to not be limited to 8192 kB in order to run.
I am used to doing this in `/etc/security/limits.conf`:
* stack hard unlimited
* stack soft unlimited
However, in RHEL 7.9, with a local account and a bash shell, `ulimit -s` still responds with 8192. So my modification of limits.conf seems to have no effect?
In my terminal window, if I do a `ulimit -s unlimited` first and then run my code, it runs fine.
What is the best way to set the stack size to unlimited, globally for all users, in RHEL 7.9?
Am I missing something? Are `ulimit` and `/etc/security/limits.conf` not the same thing?
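A minimal checklist sketch for debugging this, assuming a stock RHEL 7 layout (limits.conf is only read by pam_limits on a fresh PAM login; systemd-managed sessions and services use DefaultLimitSTACK instead):
```bash
ulimit -Ss   # soft stack limit for this shell (kB)
ulimit -Hs   # hard stack limit for this shell

# pam_limits applies limits.conf only on a new PAM login, and drop-ins in
# limits.d can override it, so check both and re-login after editing:
grep -rn stack /etc/security/limits.conf /etc/security/limits.d/

# Sessions and services started directly by systemd skip limits.conf and
# take DefaultLimitSTACK from these files instead:
grep -in limitstack /etc/systemd/system.conf /etc/systemd/user.conf
```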
ron
(8647 rep)
Dec 7, 2020, 06:55 PM
• Last activity: May 3, 2025, 09:09 PM
2
votes
1
answers
127
views
Dump bash stack trace on error with function parameters
With the function
function fail() {
    local msg="$*"
    echo "$msg at"
    for i in "${!FUNCNAME[@]}"; do
        echo "  ${FUNCNAME[$i]} ${BASH_SOURCE[$i]}:${BASH_LINENO[$i]}"
    done
    exit 1
}
I get a nice stack trace when it's called to exit. It would be even more informative if I could get the parameters of the functions on the call stack. Is that possible?
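One possible approach, as a sketch rather than a confirmed answer: with `shopt -s extdebug`, bash records each frame's argument count in BASH_ARGC and the arguments themselves (reversed, innermost frame first) in BASH_ARGV, so the trace can be extended like this (the `outer`/`middle` chain is a hypothetical usage example):
```bash
#!/usr/bin/env bash
# extdebug makes bash populate BASH_ARGC/BASH_ARGV for every frame.
shopt -s extdebug

fail() {
    local msg="$*" i j argc args offset=0
    echo "$msg at"
    for i in "${!FUNCNAME[@]}"; do
        argc=${BASH_ARGC[i]:-0}
        args=""
        for ((j = argc - 1; j >= 0; j--)); do   # un-reverse this frame's args
            args+=" ${BASH_ARGV[offset + j]}"
        done
        offset=$((offset + argc))
        echo "  ${FUNCNAME[i]}(${args# }) ${BASH_SOURCE[i]}:${BASH_LINENO[i]}"
    done
    exit 1
}

# Hypothetical call chain to exercise it:
middle() { fail "something broke"; }
outer()  { middle "arg passed to middle"; }
outer "first" "second"
```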
Harald
(1030 rep)
Apr 10, 2025, 08:59 AM
• Last activity: Apr 17, 2025, 11:54 AM
1
votes
1
answers
118
views
How does the linux kernel know where to put its heap?
When setting up dynamic memory allocation, the Linux kernel has got to choose a place to put its heap, no? How does it avoid the heap overwriting its own stack, or the stack growing and overwriting the heap later?
CocytusDEDI
(133 rep)
Apr 3, 2025, 10:16 AM
• Last activity: Apr 3, 2025, 11:58 AM
2
votes
1
answers
561
views
Is it safe to use the .bss section as a static stack?
(This is in the context of x86-64 Linux.)
I am trying to write a high-reliability userland executable, and I have total control over the generated assembly. I don't want to rely on automatic stack allocation, so I would like to put the stack in a known location. Suppose I have calculated that my program uses at most 414 bytes of stack space (exactly). Is it safe to allocate 414 bytes in the .bss section and point RSP to the top? I want to ensure that no bytes outside this region are touched by stack management at any point.
While I can be sure that *my* program won't write outside the region, I need to make some syscalls (using the `syscall` instruction), and I think at least some part of the kernel code operates in the calling executable's context. Will it smash my stack?
Also, interrupts can happen at any point in the program, and the story behind the "Red Zone" seems to suggest that an arbitrarily large region beyond RSP-128 can be written to at will by interrupt handlers, possibly mangling my data. What kinds of guarantees do I have about this behavior?
Mario Carneiro
(245 rep)
Jan 28, 2020, 09:40 AM
• Last activity: Mar 25, 2025, 08:47 AM
0
votes
0
answers
30
views
Host docker stacks for reverse proxy as traefik, and observability/logging monitoring stacks
First of all, I tried to keep this post as clear as possible. I looked on the Stack sites for the stack composes below, but I failed to assemble them together into a standalone monitoring tool. Since there are many talented people on the forum, this exercise of fixing the tool will be educative for many of us, including myself. It is quite a big configuration to bring swarm stacks together, and there are several moving parts in these files, like the definition of DNS domains for services and e-mail notification.
**Question**: How do I wire the services up correctly so that they gather information through Loki and Prometheus and display it in Grafana?
**Instructions**
- Consider a docker swarm initialization.
1. Create a network: `docker network create --driver overlay "my_network"`;
2. Initialize docker swarm: `docker swarm init`;
3. Register the required domains traefik.example.com, jaeger.example.com, node.example.com, grafana.example.com, kibana.example.com and cadvisor.example.com with a DNS provider of your choice and substitute them in the respective places;
4. Deploy both stacks: `docker stack deploy -c traefik.yml traefik && docker stack deploy -c monitor.yml monitor`
- For a non-swarm scenario, skip initializing the swarm and deploying the traefik stack. To deploy the monitor stack in that case, remove the `deploy` property from each service and run `docker compose -f monitor.yml up -d`.
Docker compose file `traefik.yml`:
services:
traefik:
image: traefik:v2.11.2
command:
- "--api.dashboard=true"
- "--providers.docker.swarmMode=true"
- "--providers.docker.endpoint=unix:///var/run/docker.sock"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=my_network"
- "--entrypoints.web.address=:80"
- "--entrypoints.web.http.redirections.entryPoint.to=websecure"
- "--entrypoints.web.http.redirections.entryPoint.scheme=https"
- "--entrypoints.web.http.redirections.entrypoint.permanent=true"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.web.transport.respondingTimeouts.idleTimeout=3600"
- "--certificatesresolvers.letsencryptresolver.acme.httpchallenge=true"
- "--certificatesresolvers.letsencryptresolver.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.letsencryptresolver.acme.storage=/etc/traefik/letsencrypt/acme.json"
- "--certificatesresolvers.letsencryptresolver.acme.email=example@mail.com"
- "--certificatesresolvers.letsencryptresolver.acme.dnschallenge=true"
- "--log.level=DEBUG"
- "--log.format=common"
- "--log.filePath=/var/log/traefik/traefik.log"
- "--accesslog=true"
- "--accesslog.filepath=/var/log/traefik/access-log"
- "--metrics.prometheus=true"
- "--metrics.prometheus.buckets=0.1,0.3,1.2,5.0"
- "--tracing.jaeger=true"
- "--tracing.jaeger.samplingType=const"
- "--tracing.jaeger.samplingParam=1"
- "--tracing.jaeger.localAgentHostPort=jaeger:6831"
volumes:
- "vol_certificates:/etc/traefik/letsencrypt"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- my_network
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8082 # Expose Prometheus metrics
published: 8082
mode: host
deploy:
placement:
constraints:
- node.role == manager
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=Host(traefik.example.com
)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls.certresolver=letsencryptresolver"
- "traefik.http.services.dummy-svc.loadbalancer.server.port=9999"
- "traefik.http.routers.dashboard.middlewares=myauth"
- "traefik.http.middlewares.myauth.basicauth.users=test:$$2y$$05$$Y7ORfBbtpI9vSMAQxCvkzOqmKfd4NZBGwSFv/AOKZNdvddMHUpd4." # hashed credentials test:test
volumes:
vol_shared:
external: true
name: volume_swarm_shared
vol_certificates:
external: true
name: volume_swarm_certificates
networks:
my_network:
external: true
attachable: true
Docker compose file `monitor.yml`:
services:
prometheus:
image: prom/prometheus:v2.48.0
restart: unless-stopped
volumes:
- ./prometheus-config.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
ports:
- "9090:9090"
command:
- --config.file=/etc/prometheus/prometheus.yml
deploy:
mode: replicated
replicas: 1
labels:
- traefik.enable=true
- traefik.http.routers.node-exporter.rule=Host(`prometheus.example.com`)
- traefik.http.services.node-exporter.loadbalancer.server.port=9090
- traefik.http.routers.node-exporter.service=node-exporter
- traefik.http.routers.node-exporter.tls.certresolver=letsencryptresolver
- traefik.http.routers.node-exporter.entrypoints=websecure
- traefik.http.routers.node-exporter.tls=true
node-exporter:
image: prom/node-exporter:latest
restart: unless-stopped
deploy:
mode: replicated
replicas: 1
labels:
- traefik.enable=true
- traefik.http.routers.node-exporter.rule=Host(`node.example.com`)
- traefik.http.services.node-exporter.loadbalancer.server.port=9100
- traefik.http.routers.node-exporter.service=node-exporter
- traefik.http.routers.node-exporter.tls.certresolver=letsencryptresolver
- traefik.http.routers.node-exporter.entrypoints=websecure
- traefik.http.routers.node-exporter.tls=true
networks:
- my_network
ports:
- "9100:9100"
loki:
image: grafana/loki:2.9.2
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
logging:
driver: json-file
networks:
- my_network
jaeger:
image: jaegertracing/all-in-one:1.51
ports:
- '6831:6831'
- '16686:16686'
deploy:
mode: replicated
replicas: 1
labels:
- "traefik.enable=true"
- "traefik.http.routers.jaeger.rule=Host(jaeger.example.com
)"
- "traefik.http.routers.jaeger.entrypoints=websecure"
- "traefik.http.routers.jaeger.tls.certresolver=letsencryptresolver"
- "traefik.http.services.jaeger.loadbalancer.server.port=16686"
promtail:
image: grafana/promtail:2.9.2
volumes:
- ./promtail-config.yml:/etc/promtail/config.yml
command: -config.file=/etc/promtail/config.yml
networks:
- my_network
grafana:
environment:
- GF_PATHS_PROVISIONING=/etc/grafana/provisioning
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
entrypoint:
- sh
- -euc
- |
mkdir -p /etc/grafana/provisioning/datasources
cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
orgId: 1
url: http://loki:3100
basicAuth: false
isDefault: false
version: 1
editable: true
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
url: http://prometheus:9090
basicAuth: false
isDefault: true
version: 1
editable: true
EOF
/run.sh
image: grafana/grafana:latest
deploy:
mode: replicated
replicas: 1
resources:
limits:
memory: 512M
cpus: "0.5"
labels:
- "traefik.enable=true"
- "traefik.http.routers.grafana.rule=Host(grafana.example.com
)"
- "traefik.http.routers.grafana.entrypoints=websecure"
- "traefik.http.routers.grafana.service=grafana"
- "traefik.http.routers.grafana.tls.certresolver=letsencryptresolver"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"
logging:
driver: json-file
ports:
- "3000:3000"
networks:
- my_network
depends_on:
- prometheus
- loki
elasticsearch:
image: elasticsearch:7.8.1
ports:
- 9200:9200
environment:
discovery.type: 'single-node'
xpack.security.enabled: 'true'
ELASTIC_PASSWORD: 'Teste!12345'
ES_JAVA_OPTS: '-Xmx2g -Xms2g'
kibana:
image: docker.elastic.co/kibana/kibana:7.15.1
volumes:
- ./kibana.yml:/usr/share/kibana/config/kibana.yml
ports:
- '5601:5601'
depends_on:
- elasticsearch
deploy:
mode: replicated
replicas: 1
labels:
- "traefik.enable=true"
- "traefik.http.routers.kibana.rule=Host(kibana.example.com
)"
- "traefik.http.routers.kibana.entrypoints=websecure"
- "traefik.http.routers.kibana.service=kibana"
- "traefik.http.routers.kibana.tls.certresolver=letsencryptresolver"
- "traefik.http.services.kibana.loadbalancer.server.port=5601"
logging:
driver: json-file
cadvisor:
image: gcr.io/cadvisor/cadvisor
restart: unless-stopped
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /sys/fs/cgroup:/sys/fs/cgroup
- /var/lib/docker/:/var/lib/docker:ro
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
labels:
- traefik.enable=true
- traefik.http.routers.cadvisor.rule=Host(`cadvisor.example.com`)
- traefik.http.services.cadvisor.loadbalancer.server.port=8080
- traefik.http.routers.cadvisor.service=cadvisor
- traefik.http.routers.cadvisor.tls.certresolver=letsencryptresolver
- traefik.http.routers.cadvisor.entrypoints=websecure
- traefik.http.routers.cadvisor.tls=true
networks:
- my_network
ports:
- "8181:8080"
volumes:
prometheus_data:
networks:
my_network:
Prometheus configuration file `prometheus-config.yml`:
global:
scrape_interval: 15s
scrape_timeout: 10s
evaluation_interval: 15s
alerting:
alertmanagers:
- static_configs:
- targets: []
scheme: http
timeout: 10s
api_version: v2
scrape_configs:
- job_name: prometheus
honor_timestamps: true
scrape_interval: 15s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['prometheus:9090','cadvisor:8080','node-exporter:9100']
Promtail configuration file `promtail-config.yml`:
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
__path__: /var/log/*.log
- job_name: docker
static_configs:
- targets:
- localhost
labels:
job: docker
__path__: /var/lib/docker/containers/*/*.log
Kibana configuration file `kibana-config.yaml`:
# To allow connections from remote users, set this parameter to a non-loopback address.
server.name: kibana
server.host: "0.0.0.0"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://elasticsearch:9200"]
monitoring.ui.container.elasticsearch.enabled: true
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "elastic"
elasticsearch.password: "Teste!12345"
# Custom configurations
logging.verbose: true
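Not a fix for the compose files themselves, but a minimal verification sketch once both stacks are up, using the ports published above (jq is an assumed extra dependency):
```bash
# Prometheus: list scrape targets and their health via its HTTP API
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[] | {job: .labels.job, health, lastError}'

# Loki: readiness probe, then confirm promtail is pushing labels/streams
curl -s http://localhost:3100/ready
curl -s http://localhost:3100/loki/api/v1/labels | jq .

# Grafana: check the provisioned datasources (anonymous Admin is enabled
# in monitor.yml, so no credentials should be needed)
curl -s http://localhost:3000/api/datasources | jq '.[].name'
```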
Bruno Lobo
(201 rep)
Jan 17, 2025, 03:55 PM
• Last activity: Jan 17, 2025, 05:30 PM
1
votes
3
answers
133
views
bash - isolating uppercase words
So, I have a directory containing around 50 subdirectories whose names are 3-letter uppercase words:
AXC BCC EFC
amongst other directories.
I have already done a find to seek out these 3-letter directories and store them in a list:
list=`find /data/opr/ucansit/ -type d -name "???"`
The current output I get from `echo ${list[@]}` is
/data/opr/ucansit/CUG /data/opr/ucansit/TGV /data/opr/ucansit/PAS
What I need now is to take just the 3-letter directory names and store them in a stack as:
CUG
TGV
PAS
Note: I just need the names of the directories (the 3 letters).
Best regards,
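A sketch of one way to do it with GNU find, keeping only uppercase names and storing the basenames in a bash array (path and names as in the question):
```bash
# -printf '%f\n' prints just the directory basename (GNU find);
# the [A-Z] classes assume a C-like collation so lowercase names are skipped.
mapfile -t list < <(
    find /data/opr/ucansit/ -mindepth 1 -maxdepth 1 -type d \
         -name '[A-Z][A-Z][A-Z]' -printf '%f\n'
)
printf '%s\n' "${list[@]}"    # CUG, TGV, PAS -- one per line
```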
scandalous
(197 rep)
Mar 24, 2016, 08:55 AM
• Last activity: Dec 20, 2024, 11:55 AM
1
votes
1
answers
150
views
Why is the stack segment not explicit in ELF files?
Everything mapped in memory is explicit in ELF files *except* the stack segment. The stack segment is mapped automatically.
Why is the stack segment not like other segments, with explicit settings in ELF files?
Some programs might want a specific stack size that does not necessarily match the limit set by `ulimit -s`, so they can't use the automatically allocated stack. The program should know better than the user how much stack memory it actually needs.
Some programs might not need a stack at all. For example, garbage-collected languages may want to allocate their stack frames on the heap.
Wouldn't it be simpler and better for the stack segment to be explicit?
The stack could either be a segment in the ELF file (perhaps with an "automatically growable" bit set), or it could be `mmap`ed by the process itself at startup.
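For reference, a small sketch of what the ELF file does carry today about the stack; the binary name is hypothetical:
```bash
# PT_GNU_STACK records only the stack permissions (RW vs RWX), not a size:
readelf -lW ./a.out | grep -A1 GNU_STACK

# GNU ld can record a requested size there with -z stack-size=N, but whether
# the loader honors it is platform-specific; on Linux the effective limit
# still comes from the resource limit:
ulimit -s
```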
Tomek Czajka
(121 rep)
Dec 16, 2024, 01:16 PM
• Last activity: Dec 18, 2024, 12:41 PM
0
votes
1
answers
192
views
How to change the kernel stack size for the kernel modules in Ubuntu 14.04?
I want to change the kernel stack size of Ubuntu 14.04 for kernel modules. However, with `menuconfig` or `.config` I couldn't find the `CONFIG_THREAD_STACK_SIZE` option.
I just want to increase the kernel stack size for my kernel modules, which call back into each other.
What can I do?
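Not from the question itself, but a sketch of where the size actually lives on x86 (details vary by kernel version), since there is no per-module knob:
```bash
# See what stack-related options the running kernel was built with:
grep -i 'STACK' /boot/config-"$(uname -r)"

# On x86-64 the kernel stack size is a build-time constant (THREAD_SIZE,
# derived from THREAD_SIZE_ORDER); in a kernel source tree:
grep -rn 'THREAD_SIZE_ORDER' arch/x86/include/asm/page_64_types.h

# Changing it means editing that constant and rebuilding the kernel itself,
# not just the modules that call each other.
```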
lee yk
(11 rep)
Nov 11, 2024, 09:22 AM
• Last activity: Nov 14, 2024, 02:09 PM
2
votes
2
answers
813
views
Main stacks in Linux
What are the main stacks in Linux? What I mean is: when an interrupt occurs, for example, what stack will be used for it, and what is the difference between user-process and kernel-process stacks?
Nik Novák
(279 rep)
Dec 29, 2015, 06:23 PM
• Last activity: Oct 23, 2024, 08:27 AM
1
votes
0
answers
24
views
Retrieving the process descriptor during syscall
In Linux, there is a per-process kernel stack that stores at the bottom of it (or top if the stack grows upwards) a small struct named thread_info, which in turn points to the task_struct of the related process. This way it is easy to retrieve the pointer to the process's descriptor when handling a syscall in kernel-mode.
But how does the kernel even switch to this per-process stack? At which step during the context switch does the kernel check/store data about the underlying process?
Can someone please provide a good explanation of the steps involved in such a user-space -> syscall -> kernel-space context switch?
A lot of sources online try to explain the workings of context switching, but most of them cover general concepts rather than giving a detailed explanation of the procedure.
Idan Rosenzweig
(11 rep)
Sep 4, 2024, 06:07 PM
0
votes
1
answers
147
views
Running command 'cat /proc/<pid>/stat | cut -d" " -f29' to get stack pointer always shows stack pointer as zero
I am trying to get the stack pointer of some thread using /proc/<pid>/stat. Whenever I run the command `cat /proc/<pid>/stat | cut -d" " -f29` I end up getting zero, but when I run `sudo cat /proc/<pid>/stack` I do get a value:
[<0>] worker_thread+0xb7/0x390
[<0>] kthread+0x134/0x150
[<0>] ret_from_fork+0x1f/0x40
Not sure why this is happening. Any ideas why the stack pointer keeps being zero? Thanks.
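A sketch of two things worth ruling out, assuming I remember proc(5) correctly (kstkesp is one of the fields masked to 0 unless the reader has ptrace access to the task); the PID is hypothetical:
```bash
pid=1234

# 1. Field counting: comm (field 2) may contain spaces, which shifts later
#    fields when splitting on " ".  Strip everything up to ") " first;
#    overall field 29 (kstkesp) is then field 27 of the remainder:
awk '{sub(/.*\) /, ""); print $27}' /proc/"$pid"/stat

# 2. Permissions: retry with ptrace-level access to see if a value appears:
sudo awk '{sub(/.*\) /, ""); print $27}' /proc/"$pid"/stat
```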
Hodgson Tetteh
(1 rep)
Jun 4, 2024, 02:39 PM
• Last activity: Jun 4, 2024, 06:27 PM
1
votes
2
answers
870
views
System calls involved in stack and heap allocation
In the process address space there are the stack and the heap. When a function is called, or even when a local variable is declared, it uses the stack; the kernel must assign a physical address and create the virtual-to-physical mapping, so a system call should be involved here; what is going on?
https://unix.stackexchange.com/questions/145557/how-does-stack-allocation-work-in-linux
The first answer says: "I've found that the stack grows without any system call (according to strace). So, this means that the kernel grows it automatically (this is what the "implicit" means above), i.e. without explicit mmap/mremap from the process." If it is the job of the kernel to "grow" the stack, why is a system call not involved?
https://unix.stackexchange.com/questions/411408/when-is-the-heap-used-for-dynamic-memory-allocation
For the heap, the first answer says: "A call to malloc does not necessarily result in a call to sbrk or mmap (depending on how Libc implements dynamic memory allocation) to expand the mappings, if a call to malloc can be satisfied by reusing previously freed memory areas." I guess this is the concept of free lists. Is this same concept used for stack allocation?
My basic grievance in both allocations is that physical memory must be allocated, and the mapping from VMA to physical must be created, so a system call should happen. I tried reading the book by Mel Gorman linked in the answer to the first question, but couldn't find anything meaningful to answer my question.
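A quick way to see this for yourself, as a sketch (program name hypothetical):
```bash
# Trace only the memory-management syscalls a program makes:
strace -f -e trace=brk,mmap,munmap,mremap ./your_prog

# Expected pattern, per the answers quoted above: malloc() shows up as
# occasional brk()/mmap() calls grabbing large chunks, while ordinary calls
# and stack growth produce no syscall at all -- the kernel extends the stack
# VMA from its page-fault handler, so strace has nothing to show.
```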
Dev Jain
(13 rep)
Jan 3, 2024, 11:00 AM
• Last activity: Jan 3, 2024, 07:33 PM
1
votes
1
answers
1074
views
Practical limits to `ulimit -s 1048576`?
Linux allows `ulimit -s unlimited`, which allows programs to exhaust system memory and crash the computer. So, generally no good.
But what are the drawbacks of a limit significantly higher than the current 8192 default, like `ulimit -s 1048576`? On a trusted computer, other than having many run-away programs/threads that together exhaust memory, what's bad or undesirable that can happen?
Ana
(133 rep)
Oct 18, 2022, 10:01 PM
• Last activity: Oct 19, 2022, 04:49 AM
1
votes
1
answers
1026
views
How do I change the destination IP of all outgoing packets (especially DNS)?
I have a Raspberry Pi 4 with the latest build of (Debian) Raspberry Pi OS. I am trying to configure `iptables` to redirect all traffic coming from the Pi (with IP A) to another machine (let's say with an IP address B). This is to test the other machine, which will host a DNS-based captive portal, and I want to forward all traffic to that captive portal machine (IP B). If I could keep the SSH connection unforwarded, that would be great, because I like my headless setup.
I have already tried this set of rules on the `iptables` NAT table. I realise that what I already tried only forwards UDP traffic from IP A to IP B. This didn't work.
root@pi4:/home/pi# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 7 packets, 1155 bytes)
pkts bytes target prot opt in out source destination
0 0 DNAT udp -- * * udp dpt:53 to:
Chain INPUT (policy ACCEPT 7 packets, 1155 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0
Any help or pointers are much appreciated.
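A sketch of the direction I would look in (the portal address is a hypothetical placeholder): locally generated packets never traverse the nat PREROUTING chain, they go through nat OUTPUT instead.
```bash
CAPTIVE_IP=192.0.2.10   # hypothetical "IP B"

# Send the Pi's own DNS lookups to the portal machine:
iptables -t nat -A OUTPUT -p udp --dport 53 -j DNAT --to-destination "$CAPTIVE_IP"
iptables -t nat -A OUTPUT -p tcp --dport 53 -j DNAT --to-destination "$CAPTIVE_IP"

# Keep SSH untouched by exempting it before any broader redirect:
iptables -t nat -I OUTPUT -p tcp --dport 22 -j RETURN
```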
Phippsy
(31 rep)
Jul 21, 2022, 10:01 AM
• Last activity: Jul 21, 2022, 08:10 PM
1
votes
1
answers
543
views
What purpose does ELF's stack-size metadata have?
I was reading the Rust Unstable Book, and I saw a new feature, [`emit-stack-sizes`](https://doc.rust-lang.org/beta/unstable-book/compiler-flags/emit-stack-sizes.html#emit-stack-sizes):
> The rustc flag `-Z emit-stack-sizes` makes LLVM emit stack size metadata.
It goes on to say
> **NOTE: This LLVM feature only supports the ELF object format as of LLVM 8.0.** Using this flag with targets that use other object formats (e.g. macOS and Windows) will result in it being ignored.
The LLVM feature it seems to be using is the `EmitStackSizeSection` option. What's the purpose of knowing the stack size? Does tooling use this? Is this an official feature of ELF, and if so, does the kernel make use of it? This seems to get recorded in the ELF metadata under the sections [`.stack_sizes`, `.rel.stack_sizes`, and `.rela.stack_sizes`](https://github.com/japaric/stack-sizes/blob/6de849728bd6092b86f73025decd03d53e323414/src/lib.rs#L149).
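As far as I know the kernel ignores these sections; they are meant for offline tooling (such as the linked stack-sizes crate) that sums worst-case stack usage. A quick inspection sketch, with a hypothetical binary name:
```bash
readelf -S ./mybin | grep -i stack_sizes   # is the section present at all?
readelf -x .stack_sizes ./mybin            # raw (symbol address, size) records
```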
Evan Carroll
(34663 rep)
May 28, 2022, 06:27 PM
• Last activity: Jun 1, 2022, 06:45 PM
1
votes
1
answers
314
views
How to get linux stack bounds?
How can I get the address bounds of the Linux stack using syscalls, without resorting to exception handlers? I can get the stack size using getrlimit, but it doesn't say where the stack starts or ends. RSP is pointing somewhere within the stack, so that does not let me determine how much has been used or how much is available. I can use msync to find what has been committed, but not what areas have not been touched yet. In my assembly code, I want to include a check that I am not pushing so much on the stack that I might be running close to the limit.
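For the main thread, the kernel does publish the current stack mapping; a sketch of reading it from a shell (a program can parse the same file for its own PID):
```bash
# start-end of the main thread's stack VMA:
grep -F '[stack]' /proc/self/maps

# Combined with the RLIMIT_STACK value from getrlimit (or ulimit -s), this
# gives the growth ceiling.  Thread stacks created by pthreads are ordinary
# anonymous mappings and are not labelled this way.
ulimit -s
```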
Vikors
(11 rep)
Apr 8, 2022, 07:07 PM
• Last activity: Apr 8, 2022, 10:02 PM
2
votes
1
answers
1826
views
Why is Linux stack size limit so low on 64-bit machines?
As far as I know the Linux stack size limit is 8 MB, but on 64-bit machines there's no reason why it couldn't be massively increased, e.g. to 4 GB. This would allow programmers to mostly not worry about storing large values on the stack, or using recursive algorithms instead of iterative.
It should also be faster, because stack allocation is much, much faster than heap allocation. We could see a whole new class of stack-allocated data structures; imagine a `std::stack_vector` which allocates on the stack.
Is there a downside I'm not seeing? Or is it just that nobody has cared enough to make the change?
Timmmm
(675 rep)
Oct 12, 2021, 08:42 AM
• Last activity: Oct 12, 2021, 07:10 PM
4
votes
2
answers
1437
views
Why is the stack argument required for the clone wrapper?
I've been carefully reading the linux man page for clone(), and I understand the difference between the clone() wrapper and the "raw" system call. But what I don't understand is why the parent process needs to allocate a stack for the child, even if CLONE_VM is not used in the wrapper.
Does the wrapper simply ignore the stack argument if CLONE_VM is not used? Why require it at all then? The raw system call allows it to be null which makes sense, but I don't understand why the wrapper requires this. Will the wrapper make the child and parent share memory even if you don't tell it to?
exliontamer
(137 rep)
Apr 26, 2021, 04:14 AM
• Last activity: Apr 26, 2021, 06:43 AM
0
votes
2
answers
459
views
How to get a directory that has just been pushd into a variable
How do I get, into a bash variable, the directory that has just been pushd, without running popd?
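A sketch of what I believe is being asked: read the top of the directory stack without popping it.
```bash
pushd /tmp > /dev/null
top=$(dirs -l +0)    # the directory you just pushed into (DIRSTACK[0])
prev=$(dirs -l +1)   # the directory you came from (also $OLDPWD)
echo "$top $prev"
```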
user467462
Apr 19, 2021, 05:44 PM
• Last activity: Apr 19, 2021, 08:19 PM
1
votes
2
answers
134
views
Where are the files related to the stack on a Unix OS?
I am running Arch Linux, 64-bit, latest update, on one of my computers. I am currently a Computer Science student, and we had a test yesterday where we were to implement a dynamic stack using linked lists. I am now interested in learning how the stack in my computer is built; however, I am unable to find any "stack.c" with comments on my Arch Linux computer. Where is the stack programming located? I understand how the stack creates memory, but I want to actually see the code and maybe play around with it myself.
linker
(153 rep)
Feb 4, 2021, 02:02 PM
• Last activity: Feb 17, 2021, 02:42 AM