When BPF (eBPF) traces the call stack, all user-mode functions are `[unknown]`. Why is this?
0 votes, 0 answers, 39 views
Experimental environment
┌──[root@vms99.liruilongs.github.io]-[/usr/share/bcc/tools]
└─$hostnamectl
Static hostname: vms99.liruilongs.github.io
Icon name: computer-vm
Chassis: vm
Machine ID: ea70bf6266cb413c84266d4153276342
Boot ID: 0d01838b0095494c82d1befb174a317d
Virtualization: vmware
Operating System: Rocky Linux 8.9 (Green Obsidian)
CPE OS Name: cpe:/o:rocky:rocky:8:GA
Kernel: Linux 4.18.0-513.9.1.el8_9.x86_64
Architecture: x86-64
┌──[root@vms99.liruilongs.github.io]-[/usr/share/bcc/tools]
└─$
When using BPF/eBPF to trace the call stack, I found that all user-mode functions are shown as `[unknown]`:
┌──[root@vms99.liruilongs.github.io]-[/usr/share/bcc/tools]
└─$profile
Sampling at 49 Hertz of all threads by user + kernel stack... Hit Ctrl-C to end.
^C
_raw_spin_unlock_irqrestore
_raw_spin_unlock_irqrestore
prepare_to_swait_event
rcu_gp_kthread
kthread
ret_from_fork
- rcu_sched (14)
1
kmem_cache_alloc_node
kmem_cache_alloc_node
__alloc_skb
__ip_append_data.isra.50
ip_append_data.part.51
ip_send_unicast_reply
tcp_v4_send_reset
tcp_v4_rcv
ip_protocol_deliver_rcu
ip_local_deliver_finish
ip_local_deliver
ip_rcv
__netif_receive_skb_core
process_backlog
__napi_poll
net_rx_action
__softirqentry_text_start
do_softirq_own_stack
do_softirq.part.16
__local_bh_enable_ip
ip_finish_output2
ip_output
__ip_queue_xmit
__tcp_transmit_skb
tcp_connect
tcp_v4_connect
__inet_stream_connect
inet_stream_connect
__sys_connect
__x64_sys_connect
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
- haproxy (1203)
1
show_vma_header_prefix
show_vma_header_prefix
show_map_vma
show_map
seq_read
vfs_read
ksys_read
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
[unknown]
- awk (39726)
1
.............
┌──[root@vms99.liruilongs.github.io]-[/usr/share/bcc/tools]
└─$
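For context, my understanding of how these bcc tools obtain user stacks is roughly the following (a minimal sketch of my own, not the actual `profile` source; the `__sys_connect` kprobe target and the map size are just illustrative): the kernel walks the user stack via frame pointers into a `BPF_STACK_TRACE` map, and the Python side then resolves each address with `BPF.sym()`:

```python
#!/usr/bin/env python3
# Minimal sketch (my own, for illustration) of how bcc tools capture and
# resolve user-mode stacks; the kprobe target is just an example.
import time
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

BPF_STACK_TRACE(stack_traces, 1024);
BPF_HASH(stack_pids, int, u32);

int on_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    // Walk the *user* stack; this only works if the kernel can follow
    // valid frame pointers in the traced process.
    int stack_id = stack_traces.get_stackid(ctx, BPF_F_USER_STACK);
    if (stack_id >= 0)
        stack_pids.update(&stack_id, &pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="__sys_connect", fn_name="on_connect")
time.sleep(10)  # collect for a while

for stack_id, pid in b["stack_pids"].items():
    print(f"--- pid {pid.value}")
    for addr in b["stack_traces"].walk(stack_id.value):
        # sym() searches the symbol tables of the ELF objects mapped by
        # that process; it returns b'[unknown]' when nothing matches.
        print(b.sym(addr, pid.value))
```

If I read this right, `[unknown]` can therefore come from two places: the kernel failing to walk the user stack in the first place (e.g. no frame pointers), or user space failing to resolve otherwise valid addresses (e.g. no symbol table). I am not sure which one applies here.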
What is the reason for this? Is it because the programs lack debugging information, or is it something else?
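To check the missing-debug-info theory concretely, I think one can list which ELF objects mapped into the target process still carry a symbol table, along these lines (my own sketch, assuming the third-party pyelftools package; the PID is passed as an argument):

```python
#!/usr/bin/env python3
# Sketch (assumes `pip install pyelftools`): list the ELF objects mapped
# into a process and whether each still has a .symtab section.
import sys
from elftools.common.exceptions import ELFError
from elftools.elf.elffile import ELFFile

def has_symtab(path):
    with open(path, "rb") as f:
        return ELFFile(f).get_section_by_name(".symtab") is not None

pid = sys.argv[1]
paths = set()
with open(f"/proc/{pid}/maps") as maps:
    for line in maps:
        fields = line.split()
        if len(fields) >= 6 and fields[5].startswith("/"):
            paths.add(fields[5])

for path in sorted(paths):
    try:
        print(f"{path}: .symtab={'yes' if has_symtab(path) else 'no'}")
    except (OSError, ELFError):
        print(f"{path}: unreadable")
```

My assumption is that stripped binaries keep only `.dynsym`, so addresses inside non-exported functions could not be resolved even if the stack walk itself succeeded, but I have not confirmed this.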
I wrote a lock demo in Python, and the same thing happened:
┌──[root@vms99.liruilongs.github.io]-[~]
└─$cat lock_demo.py
import threading
import time

lock = threading.Lock()

def worker(id):
    print(f"Worker {id} started")
    with lock:
        print(f"Worker {id} acquired lock")
        time.sleep(2)  # simulate a long computation or I/O
    print(f"Worker {id} released lock")

threads = []
for i in range(5):
    t = threading.Thread(target=worker, args=(i,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()

print("All workers finished")
threadsnoop
┌──[root@vms99.liruilongs.github.io]-[~]
└─$threadsnoop
TIME(ms) PID COMM FUNC
0 51671 b'python3' b'[unknown]'
0 51671 b'python3' b'[unknown]'
0 51671 b'python3' b'[unknown]'
0 51671 b'python3' b'[unknown]'
0 51671 b'python3' b'[unknown]'
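From my reading of the bcc threadsnoop source, the FUNC column is produced roughly like this (a paraphrased sketch, not the verbatim tool): a uprobe on `pthread_create` records the start-routine pointer, and `BPF.sym()` tries to turn it into a name:

```python
#!/usr/bin/env python3
# Paraphrased sketch of how threadsnoop's FUNC column comes about
# (my reading of the tool, not the verbatim source).
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

struct data_t { u32 pid; u64 start; };
BPF_PERF_OUTPUT(events);

int do_entry(struct pt_regs *ctx) {
    struct data_t data = {};
    data.pid = bpf_get_current_pid_tgid() >> 32;
    data.start = PT_REGS_PARM3(ctx);  // pthread_create()'s start_routine
    events.perf_submit(ctx, &data, sizeof(data));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_uprobe(name="pthread", sym="pthread_create", fn_name="do_entry")

def on_event(cpu, data, size):
    event = b["events"].event(data)
    # sym() needs a symbol table entry covering the start routine's
    # address in that process; otherwise it returns b'[unknown]'.
    print(event.pid, b.sym(event.start, event.pid))

b["events"].open_perf_buffer(on_event)
while True:
    try:
        b.perf_buffer_poll()
    except KeyboardInterrupt:
        break
```

Since a Python `threading.Thread` is started via an internal libpython function, my guess is that its address is not covered by any exported symbol, hence `b'[unknown]'`, but I would like confirmation.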
offcputime
┌──[root@vms99.liruilongs.github.io]-[~/FlameGraph]
└─$offcputime -p $(pgrep -f lock_demo.py)
Tracing off-CPU time (us) of PID 51397 by user + kernel stack... Hit Ctrl-C to end.
^C
.......................
finish_task_switch
__sched_text_start
schedule
futex_wait_queue_me
futex_wait
do_futex
__x64_sys_futex
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
- python3 (51402)
157
finish_task_switch
__sched_text_start
schedule
futex_wait_queue_me
futex_wait
do_futex
__x64_sys_futex
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
- python3 (51400)
213
finish_task_switch
__sched_text_start
schedule
futex_wait_queue_me
futex_wait
do_futex
__x64_sys_futex
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
- python3 (51397)
267
finish_task_switch
__sched_text_start
schedule
do_nanosleep
hrtimer_nanosleep
common_nsleep_timens
__x64_sys_clock_nanosleep
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
[unknown]
- python3 (51400)
2002609
finish_task_switch
__sched_text_start
schedule
do_nanosleep
hrtimer_nanosleep
common_nsleep_timens
__x64_sys_clock_nanosleep
do_syscall_64
entry_SYSCALL_64_after_hwframe
[unknown]
[unknown]
- python3 (51402)
2003178
..................
┌──[root@vms99.liruilongs.github.io]-[~/FlameGraph]
└─$
The futex stacks look like my worker threads blocking on the lock, and the clock_nanosleep ones like the time.sleep(2) calls, so the kernel frames resolve fine; only the user-mode frames are [unknown]. Any help will be greatly appreciated, best wishes!
Asked by 山河以无恙
(185 rep)
Oct 21, 2024, 01:58 AM
Last activity: Oct 22, 2024, 01:40 AM