
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

7 votes
1 answer
1651 views
How much data transferred per user via SSH over time period
I have an Ubuntu server with approximately 20 users who primarily use it for SSH tunneling. I would like to know if there is any way to determine the amount of data transferred by each user over a specific time period, such as the past week or month.
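For reference, a minimal sketch of one way this is sometimes approached, assuming firewall-level accounting is acceptable (the user name "alice" and the weekly-cron idea are my assumptions, not something from the question): locally generated tunnel traffic is sent by the per-user sshd process, so the iptables owner match can count bytes per UID.
# Per-user byte counters via the owner match (repeat per account; "alice" is a placeholder).
iptables -N ACCT_alice 2>/dev/null
iptables -A ACCT_alice -j RETURN
iptables -A OUTPUT -m owner --uid-owner alice -j ACCT_alice

# Dump byte counts, e.g. from a weekly cron job, then zero them with -Z:
iptables -L OUTPUT -v -n -x | awk '$3 ~ /^ACCT_/ {print $3, $2}'
iptables -Z OUTPUT
Since these counters reset on reboot or on a rule flush, the periodic dump would need to be accumulated somewhere (a log file, say) rather than read once at the end of the week or month.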
Javad Zamani (81 rep)
Jun 19, 2023, 03:39 PM • Last activity: Mar 24, 2025, 06:25 AM
0 votes
1 answer
296 views
Changing packet payload with tc
How can tc be used to match a particular payload of an ingress packet, e.g., if the first 32 bits of the payload of an IP/UDP packet are equal to some constant $c, the value $c should be changed to $d? This should work in particular for variable-length IP headers. It appears that the u32 filter should be able to perform the matching. Is the following attempt correct? I am not sure about the nexthdr part in particular.
tc filter add dev protocol ip parent ffff: u32 match $c 0xffffffff at nexthdr+8
Now pedit can be used to change the packet, but I don't see a way to write $d into the UDP payload of a packet with a variable-length IP header. Any help is appreciated.
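For illustration only, a sketch that sidesteps the variable-length part by assuming IHL=5 (a fixed 20-byte IP header); the device name eth0, the constants, and the checksum fix-up are my own example values, not from the question:
# Ingress qdisc on a placeholder device.
tc qdisc add dev eth0 ingress

# Match UDP packets whose first 32 payload bits are 0xaabbccdd (offset 28
# from the IP header when IHL=5: 20 bytes IP + 8 bytes UDP) and rewrite
# them to 0x11223344, then recompute the UDP checksum.
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip protocol 17 0xff \
    match u32 0xaabbccdd 0xffffffff at 28 \
    action pedit munge offset 28 u32 set 0x11223344 pipe \
    action csum udp
This does not address rewriting at a variable offset when the IP header length varies, which remains the crux of the question.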
qemvirt (13 rep)
Oct 19, 2023, 10:19 PM • Last activity: Apr 28, 2024, 02:30 PM
1 vote
1 answer
358 views
Traffic shaping ineffective on tun device
I am developing a tunnel application that will provide a low-latency, variable-bandwidth link. This will be operating in a system that requires traffic prioritization. However, while traffic towards the tun device is clearly being queued by the kernel, it appears that whatever qdisc I apply to the device has no additional effect, including the default pfifo_fast, i.e. what should be high-priority traffic is not being handled separately from normal traffic. I have made a small test application to demonstrate the problem. It creates two tun devices and has two threads, each with a loop passing packets from one interface to the other and back, respectively. Between receiving and sending, the loop delays 1us for every byte, roughly emulating an 8Mbps bidirectional link:
#include <stdio.h>      /* perror */
#include <stdlib.h>     /* exit */
#include <unistd.h>     /* read, write, usleep */

/* BUFSIZE is defined elsewhere in the test program; the value here is only
   an assumption so the fragment stands alone (large enough for one tun frame). */
#define BUFSIZE 2048

void forward_traffic(int src_fd, int dest_fd) {
    char buf[BUFSIZE];
    ssize_t nbytes = 0;

    while (nbytes >= 0) {
        /* Read one packet from the source tun device... */
        nbytes = read(src_fd, buf, sizeof(buf));

        if (nbytes >= 0) {
            /* ...delay 1 us per byte (~8 Mbit/s), then forward it. */
            usleep(nbytes);
            nbytes = write(dest_fd, buf, nbytes);
        }
    }
    perror("Read/write TUN device");
    exit(EXIT_FAILURE);
}
With each tun interface placed in its own namespace, I can run iperf3 and get about 8Mbps of throughput. The default txqueuelen reported by ip link is 500 packets, and when I run an iperf3 (-P 20) and a ping at the same time I see RTTs of about 670-770ms, roughly corresponding to 500 x 1500 bytes of queue. Indeed, changing txqueuelen changes the latency proportionally. So far so good.
With the default pfifo_fast qdisc I would expect a ping with the right ToS mark to skip that normal queue and give me low latency, e.g. ping -Q 0x10 should, I think, have a much lower RTT, but it doesn't (I have tried other ToS/DSCP values as well - they all have the same ~700ms RTT). Additionally, I have tried various other qdiscs with the same results, e.g. fq_codel doesn't have a significant effect on latency. Regardless of the qdisc, tc -s qdisc always shows a backlog of 0, whether or not the link is congested (but I do see dropped packets in ip -s link show under congestion).
Am I fundamentally misunderstanding something here, or is there something else I need to do to make the qdisc effective? Complete source here
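One sanity check that may be worth sketching here (tun0 is a placeholder name, and the explicit prio/filter pair is my own variation, not part of the original test): take the implicit priomap out of the picture and watch per-band counters to see whether classification happens at all.
tc qdisc replace dev tun0 root handle 1: prio bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
# Steer ToS 0x10 (lowdelay) into band 0 explicitly, independent of the priomap.
tc filter add dev tun0 parent 1: protocol ip u32 match ip tos 0x10 0xff flowid 1:1
# Per-band packet/byte counters show whether the marked pings land in band 0.
tc -s class show dev tun0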
sheddenizen (111 rep)
Dec 2, 2023, 06:05 PM • Last activity: Dec 27, 2023, 03:42 PM
0 votes
0 answers
63 views
Can no longer ping containers after setting TBF qdisc on Docker0
I am trying to use the tc command to manipulate traffic on the docker0 interface. I run the commands
tc qdisc del dev docker0 root
tc qdisc add dev docker0 root handle 1: tbf rate 100mbps burst 1600 limit 1
I believe this is what it does:
- tbf: Specifies the TBF qdisc to be used.
- rate 100mbps: Sets the maximum bandwidth rate to 100 Mbps for the docker0 interface.
- burst 1600: Sets the maximum amount of data that can be transmitted in a single burst to 1600 bytes.
- limit 1: Limits the token bucket size to 1 token, which limits the amount of data that can be sent at any given time to the burst size.
However, after setting this rule, I can no longer ping containers that are already running and attached to the default docker0 interface. I can also no longer build images that contain commands such as RUN apt-get update -y. Why is this the case? Can this qdisc configuration not be used alone?
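As a point of comparison only (the numbers below are mine, not a recommendation): tbf drops whatever cannot fit in its queue, and a limit of 1 byte leaves no room for even a single packet, which would explain the dead interface. It is also worth noting that in tc units mbps means megabytes per second while mbit means megabits. A less restrictive variant might look like:
tc qdisc del dev docker0 root 2>/dev/null
# 100 Mbit/s cap, a ~64 KB burst, and enough queue (latency 50ms) to hold whole packets.
tc qdisc add dev docker0 root handle 1: tbf rate 100mbit burst 64kb latency 50ms
tc -s qdisc show dev docker0   # sent/dropped counters confirm whether packets now pass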
akastack (73 rep)
May 16, 2023, 01:32 AM
0 votes
0 answers
1410 views
Is it possible to match multiple IP addresses when using tc filter... match?
I would like to match 4 IP addresses as src and another 4 IP addresses as dst when using tc filter. I do know I could use subnets in match, but unfortunately my addresses do not form a subnet; I have distinct IP addresses. I have a working script with 1 IP address as src and 1 IP address as dst:
export IF=enp0s8
export IP1=10.1.2.11
export IP2=10.1.2.15
tc qdisc del dev $IF root
tc qdisc add dev $IF root handle 1:0 htb
tc class add dev $IF parent 1:0 classid 1:1 htb rate 20mbit
tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $IP1/32 match ip src $IP2/32 flowid 1:1
tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $IP2/32 match ip src $IP1/32 flowid 1:1
Because I have 4 src and 4 dst IP addresses, I can accomplish the task by adding a total of 32 lines of tc filter..., but I am not sure there isn't a more efficient way. I've tried to google for the match syntax with no success. As guesswork, here is what I've tried, with no success:
export IPGROUP1=10.1.2.11, 10.1.2.12, 10.1.2.13, 10.1.2.14
export IPGROUP2=10.1.2.15, 10.1.2.16, 10.1.2.17, 10.1.2.18
tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $IPGROUP1 match ip src $IPGROUP2 flowid 1:1
tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $IPGROUP2 match ip src $IPGROUP1 flowid 1:1
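If nothing smarter than u32 is available, one way to at least keep the script short (a sketch that simply generates the same 32 filters described above; nothing new on the tc side):
IF=enp0s8
GROUP1="10.1.2.11 10.1.2.12 10.1.2.13 10.1.2.14"
GROUP2="10.1.2.15 10.1.2.16 10.1.2.17 10.1.2.18"
for a in $GROUP1; do
    for b in $GROUP2; do
        # one filter per direction for each src/dst pair
        tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $a/32 match ip src $b/32 flowid 1:1
        tc filter add dev $IF protocol ip parent 1:0 prio 1 u32 match ip dst $b/32 match ip src $a/32 flowid 1:1
    done
done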
stuckinthekernel (1 rep)
Feb 4, 2023, 03:35 PM • Last activity: Feb 5, 2023, 05:34 AM
4 votes
1 answer
1907 views
Has 10 Gbps through Linux tc qdiscs ever been solved?
I'm trying to use tc to shape traffic on a system with 10 Gbps NICs and I find that I can't get anywhere near 10 Gbps through any qdisc. When I do:
tc qdisc add dev $ifc root handle 1: htb default ffff
tc class add dev $ifc parent 1:0 classid 1:1 htb rate 32Gbit
tc class add dev $ifc parent 1:1 classid 1:ffff htb rate 1Gbit ceil 16Gbit burst 1G cburst 1G
My throughput gets capped at around 3 Gbps. I've tried variations with CBQ and HFSC; no matter what I do I can't seem to get around that. Adding just the qdisc does *not* cause the problem (as I previously said). I've spent days reading everything I can find that mentions tc and qdisc and "10G". There seems to be a lot of mailing list activity from 6-10 years ago (perhaps on the cusp of 10G becoming common, taking over from 1G) but no resolution. Am I missing something? Is it impossible to shape multiple gigabits per second on Linux?
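One diagnostic sketch that may help narrow this down (it assumes sysstat and perf are installed; the symbol names are the usual suspects rather than anything confirmed here): check whether a single CPU saturates while iperf runs, since a classful root qdisc such as HTB serializes all packets through one root lock.
# One CPU pegged in %sys while the rest idle points at lock contention.
mpstat -P ALL 1

# Kernel profile during the test; heavy time in _raw_spin_lock /
# htb_dequeue would implicate the single root-qdisc lock.
perf top -g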
Chris Nelson (231 rep)
Jun 3, 2022, 07:10 PM • Last activity: Sep 14, 2022, 02:17 PM
0 votes
0 answers
76 views
Transparently Rate Limiting ANY Connection that is Already In-Progress
Let's say I begin a large download, and (about an hour into it) I find that it is consuming too much of my overall bandwidth. Is there any way to rate limit that particular connection "after the fact"? I've seen tools that will allow you to control a download's speed if you begin the download using those tools, but I'm interested in a way to rate limit a download over ANY connection (even if it is already in progress). How can this be done?
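A sketch of the usual after-the-fact approach (eth0 and the server address 203.0.113.10 are placeholders, and this throttles all traffic from that host rather than one TCP connection): attach an ingress policer keyed on the remote address; the sender's TCP stack then slows down in response to the drops.
# Find the busy peer first, e.g.:  ss -tni
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip src 203.0.113.10/32 \
    police rate 1mbit burst 100k drop flowid :1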
Lonnie Best (5415 rep)
Jun 18, 2022, 09:55 PM • Last activity: Jun 19, 2022, 08:41 PM
2 votes
0 answers
348 views
Add extra latency on top of existing tc qdiscs
On a system with an existing multi-stage qdisc setup, we need to introduce extra latency (at least fixed, but fixed with a small variation would be a nice option to have). The canonical way to do this on Linux is to use the netem qdisc. However, this cannot work here because netem does not work with other qdiscs (this is a [well-documented](https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/#the-netem-qdisc-does-not-work-in-conjunction-with-other-qdiscs) limitation, and one which a coworker has verified himself). Putting a VM onto the machine that does nothing but netem not only seems like overkill but will also massively complicate routing and WLAN configuration so that’s a step I would prefer not to take. In case it’s relevant, the current setup is a combination of htb (used to limit bandwidth only… we probably should have used tbf instead but this is what we had when I joined) and fq_codel for ECN marking, both with custom patches. I’m not averse to patching this into either… As requested, here is a sample setup, using stock htb/fq_codel for easier testing:
#!/bin/mksh
set -ex
dev=eth0
rate=1000
sudo tc qdisc add dev $dev root handle 1: htb default 1
sudo tc class add dev $dev parent 1: classid 1:1 htb rate ${rate}kbit ceil ${rate}kbit prio 1
sudo tc qdisc add dev $dev parent 1:1 handle 2: fq_codel
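For comparison during testing (not a fix for keeping fq_codel, and the delay values are mine): an htb class with a plain netem leaf is a combination that is known to work, so it can at least confirm how much added latency the rest of the stack tolerates before deciding whether patching is worth it.
#!/bin/mksh
set -ex
dev=eth0
rate=1000
sudo tc qdisc add dev $dev root handle 1: htb default 1
sudo tc class add dev $dev parent 1: classid 1:1 htb rate ${rate}kbit ceil ${rate}kbit prio 1
# netem as the leaf instead of fq_codel: fixed 50ms +/- 5ms, at the cost of the AQM/ECN behaviour.
sudo tc qdisc add dev $dev parent 1:1 handle 2: netem delay 50ms 5ms limit 1000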
mirabilos (1796 rep)
Mar 2, 2022, 05:57 PM • Last activity: Mar 2, 2022, 10:56 PM
1 vote
1 answer
3544 views
How to delay traffic and limit bandwidth at the same time with tc (Traffic Control)?
I want to throttle bandwidth and add delay to a network interface to simulate satellite communication. For example, 800ms delay and 1mb/s. The following limits the bandwidth correctly but does not increase the latency:
17:16:51 root@Panasonic_FZ-55 ~ # tc qdisc add dev eth0 root tbf rate 1024kbit latency 800ms burst 1540
17:18:48 root@Panasonic_FZ-55 ~ # ping 10.10.91.58
PING 10.10.91.58 (10.10.91.58): 56 data bytes
64 bytes from 10.10.91.58: seq=0 ttl=64 time=0.938 ms
64 bytes from 10.10.91.58: seq=1 ttl=64 time=3.258 ms
64 bytes from 10.10.91.58: seq=2 ttl=64 time=1.259 ms
64 bytes from 10.10.91.58: seq=3 ttl=64 time=1.407 ms
^C
--- 10.10.91.58 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.938/1.715/3.258 ms
17:18:56 root@Panasonic_FZ-55 ~ # iperf -c 10.10.91.58
------------------------------------------------------------
Client connecting to 10.10.91.58, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.10.91.57 port 34790 connected with 10.10.91.58 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.5 sec  1.38 MBytes  1.09 Mbits/sec
17:19:19 root@Panasonic_FZ-55 ~ #
I got my information from this site.
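For what it's worth, tbf's latency parameter only bounds how long a packet may wait in the token bucket's own queue; it adds nothing to an uncongested path, which matches the ping output above. A sketch of stacking a netem delay with tbf as its child for the rate cap (device name and numbers from the question; the stacking itself follows the commonly documented netem-plus-tbf pattern):
tc qdisc add dev eth0 root handle 1:0 netem delay 800ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 1024kbit burst 32kbit latency 400ms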
Matebo (29 rep)
Oct 7, 2021, 03:22 PM • Last activity: Oct 8, 2021, 11:31 AM
0 votes
0 answers
557 views
tc traffic shaping with HTB and CBQ causes packet transmission gap inconsistencies
I am sorry if this is a duplicate of https://serverfault.com/q/1076769/822163 . I created that first and then realized that Unix & Linux Stack Exchange is the right place.
Problem: When tc HTB or CBQ is used to do traffic shaping, the first two packets that are sent after some time gap are sent back to back, as recorded in the pcap log.
I have an intermediate computer with Ubuntu 18.04 with network forwarding enabled. I run tc with HTB to shape the traffic to have constant-bitrate output on the egress port. The client requests chunks of variable sizes from the server. The server sends the chunked-transfer-encoded data to the client with a gap of 200ms between each chunk. With my setup having the intermediate computer, these packets are passed through the traffic shaper on the intermediate computer to obtain a fixed bitrate of 500kbps. As I disable offload (TSO and GRO), each chunk of n bytes is split into frames as seen by pcap. Most 1448B packets have a time gap close to 24.224ms, which is expected at 500kbps.
Issue: Although the frames arrive in sequence, their arrival time gaps are not consistent. The second large packet (1448B) after the 200ms gap always comes almost back to back with the first packet. The last packet in a chunk (654B) arrives with delay (24.224ms instead of 10.464ms in the example in the picture attached). Screenshot of the timings (timing gaps between the packets).
TC command with HTB:
tc qdisc del dev eno1 root 2> /dev/null > /dev/null
tc qdisc add dev eno1 root handle 1:0 htb default 2
tc class add dev eno1 parent 1:1 classid 1:2 htb rate 500kbit ceil 500kbit burst 10 cburst 10 prio 2
tc filter add dev eno1 protocol ip parent 1:0 u32 match ip dst 192.168.2.103 flowid 1:2
If I am not making a mistake in the calculation, I think the issue could be due to the token handling in tc that I am using for traffic shaping. I think tokens are accumulated during the idle time, and when the next packet is received, two packets are sent back to back; from the third packet onwards the token consumption rate settles down. If this is what is happening, I would like to know if there is a way to avoid using the accumulated tokens for the second packet in the chunk. I tried various options in the tc command. I also tried using CBQ (command below). Reference: https://lartc.org/lartc.html#AEN2233 . Observation: reducing burst = 10 slightly increases the gap between the first and second packet.
tc with CBQ:
tc qdisc del dev eno1 root 2> /dev/null > /dev/null
tc qdisc add dev eno1 root handle 1: cbq avpkt 5000 bandwidth 10mbit
tc class add dev eno1 parent 1: classid 1:1 cbq rate 500kbit allot 5000 prio 5 bounded isolated
tc class add dev eno1 parent 1:1 classid 1:10 cbq rate 500kbit allot 5000 prio 1 avpkt 5000 bounded
tc class add dev eno1 parent 1:1 classid 1:20 cbq rate 500kbit allot 5000 avpkt 5000 prio 2
tc filter add dev eno1 protocol ip parent 1:0 u32 match ip dst 192.168.2.103 flowid 1:10
tc filter add dev eno1 parent 1: protocol ip prio 13 u32 match ip dst 0.0.0.0/0 flowid 1:20
Further: As per a suggestion from http://linux-ip.net/articles/hfsc.en/ I tried HFSC. I need help with HFSC here. Here is the script that I used:
tc qdisc del dev eno1 root 2> /dev/null > /dev/null
tc qdisc add dev eno1 root handle 1: hfsc
tc class add dev eno1 parent 1: classid 1:1 hfsc sc rate 1000kbit ul rate 1000kbit
tc class add dev eno1 parent 1:1 classid 1:10 hfsc sc rate 1000kbit ul rate 1000kbit
tc class add dev eno1 parent 1:1 classid 1:20 hfsc sc rate 10000kbit ul rate 10000kbit
tc class add dev eno1 parent 1:10 classid 1:11 hfsc sc umax 1480b dmax 53ms rate 400kbit ul rate 1000kbit
tc class add dev eno1 parent 1:10 classid 1:12 hfsc sc umax 1480b dmax 30ms rate 100kbit ul rate 1000kbit
tc filter add dev eno1 protocol ip parent 1:0 u32 match ip dst 192.168.2.103 flowid 1:11
Output of tc class show dev eno1:
class hfsc 1:11 parent 1:10 sc m1 0bit d 23.4ms m2 400Kbit ul m1 0bit d 0us m2 1Mbit
class hfsc 1: root
class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 1Mbit ul m1 0bit d 0us m2 1Mbit
class hfsc 1:10 parent 1:1 sc m1 0bit d 0us m2 1Mbit ul m1 0bit d 0us m2 1Mbit
class hfsc 1:20 parent 1:1 sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit
class hfsc 1:12 parent 1:10 sc m1 394672bit d 30.0ms m2 100Kbit ul m1 0bit d 0us m2 1Mbit
I am not sure what is meant by
> ul m1 0bit d 0us
whereas in my tc command I have
> sc umax 1480b dmax 53ms
After running this script I try to ping 192.168.1.102. I get a few ping responses and then the ARP
> who has 192.168.2.100
kicks in, where 192.168.2.100 is the IP address of the IP-forwarding port where I am running tc. The commands are mostly copied from http://linux-ip.net/articles/hfsc.en/ ; I have just added this filter:
> tc filter add dev eno1 protocol ip parent 1:0 u32 match ip dst 192.168.2.103 flowid 1:11
It would be great if someone could help to fix the umax and dmax issue.
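If the requirement really is a hard bound on every inter-packet gap, tbf's peakrate/mtu options exist for precisely the accumulated-token situation described above. A sketch with illustrative values (not a drop-in replacement for the HTB tree, and the second token bucket costs some throughput accuracy):
tc qdisc del dev eno1 root 2>/dev/null
# rate is the long-term 500kbit average; peakrate caps how fast the bucket
# may drain after an idle period, so back-to-back frames are bounded.
# mtu must cover one full frame.
tc qdisc add dev eno1 root tbf rate 500kbit burst 3000 limit 30000 \
    peakrate 550kbit mtu 1520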
Chinmaey Shende (1 rep)
Sep 6, 2021, 05:38 PM • Last activity: Sep 8, 2021, 03:56 PM
0 votes
0 answers
2690 views
Linux tc filter add - errors RTNETLINK answers: Operation not supported
I'm working on a network simulation to create some traffic on specific ports, and trying to inject network delay. I'm using the Linux tc utility to do these operations. I'm new to the tc command, so help will be much appreciated. I tried to add a filter using the tc filter command and it gave "RTNETLINK answers: Operation not supported / We have an error talking to the kernel". Below are the commands I ran, in order:
1. sudo /sbin/tc qdisc show
qdisc mq 0: dev eth0 root
qdisc pfifo_fast 0: dev eth0 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
2. Then I added the delay using the command below:
sudo /sbin/tc qdisc add dev eth0 parent :1 netem delay 100ms
3. Now I can see and list the delay:
sudo /sbin/tc qdisc show
qdisc mq 0: dev eth0 root
qdisc netem 8006: dev eth0 parent :1 limit 1000 delay 100.0ms
4. Next I ran the command below to add a filter for port 7000:
sudo /sbin/tc filter add dev eth0 parent :1 protocol ip u32 match ip sport 7000 0xffff flowid 1:2
RTNETLINK answers: Operation not supported
We have an error talking to the kernel
1. Can you please help me with how to add the filter to match the port?
2. Is there a way not to match — i.e. is there a way to exclude specific ports in the filter command?
Thanks
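For reference, a sketch of the arrangement the netem examples usually rely on (the handles and band numbers are illustrative): netem is attached under a classful prio qdisc, and the u32 filter is attached to the prio handle rather than to netem, which is classless and so cannot take filters.
sudo tc qdisc replace dev eth0 root handle 1: prio
# netem only under band 3 (class 1:3):
sudo tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 100ms
# send source port 7000 into the delayed band; everything else is untouched
sudo tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip sport 7000 0xffff flowid 1:3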
karthik (1 rep)
Dec 11, 2020, 03:59 AM
1 vote
1 answer
1172 views
Using tc traffic shaping to filter by ethtype
I am trying to make a qdisc that filters out traffic based on its eth type and drops the specified traffic. However my current filter is not working and is not catching any traffic.
# tc filter add dev eth2 prio 100 protocol all parent 1: u32 match u16 (eth type) (mask) at 12 flowid 1:2
So my question is, how do I change my filter so that it picks up traffic based on eth type?
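Two directions that might be worth sketching (the ethertype 0x0806/ARP and the drop action are example choices of mine): the filter's protocol field already selects on ethertype, which avoids hand-written offsets; and if u32 is required, link-layer fields sit before the network header, so they are reached with negative offsets rather than at 12.
# Option 1: let the protocol field do the ethertype match (ARP here).
tc filter add dev eth2 parent 1: protocol arp prio 100 u32 match u32 0 0 action drop

# Option 2: raw u32 match on the ethertype bytes, 2 bytes before the network header.
tc filter add dev eth2 parent 1: protocol all prio 101 u32 match u16 0x0806 0xffff at -2 action drop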
Bruhfus (13 rep)
Oct 7, 2020, 09:57 PM • Last activity: Oct 7, 2020, 11:12 PM
0 votes
0 answers
183 views
Shape traffic to certain speeds for a specific process
I need to shape traffic for a specific PID to certain speeds. For example, I would like to limit upload for PID 7502 to 300Kb/s, and download for PID 7502 to 400Kb/s. How can this be done? I have researched:
- iptables' limit module - only limits by number of packets
- tc - found unreliable and doesn't run on Alpine
- GNU trickle - requires target application to be integrated with RPC, which my process does not support
Are there any reasonable alternatives to these?
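One route that can at least be sketched (it assumes a cgroup v1 net_cls controller, which may not be mounted on Alpine by default; the classid, device, and rates are illustrative): tag the process's packets via net_cls and cap the resulting class with htb. This covers upload only; download shaping would additionally need an ifb/ingress arrangement.
# Put PID 7502 into a net_cls cgroup whose classid maps to tc class 1:10.
mkdir -p /sys/fs/cgroup/net_cls/limited
echo 0x00010010 > /sys/fs/cgroup/net_cls/limited/net_cls.classid
echo 7502 > /sys/fs/cgroup/net_cls/limited/cgroup.procs

# Upload cap of 300kbit for that class, selected by the cgroup classifier.
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 300kbit
tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup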
vpseg (13 rep)
Aug 31, 2020, 10:50 PM
1 vote
1 answer
430 views
How does an htb qdisc tree handle bandwidth overallocation?
Let's say I have a simple htb hierarchy (see [man 8 tc-htb](https://linux.die.net/man/8/tc-htb)) set up where the total bandwidth specified for the child htb classes *exceeds* the total bandwidth specified for the root htb class:
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 70kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 70kbps
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 70kbps
Here the maximum for the root htb class is 100kbps, but the collective maximum for the child htb classes is 210kbps. How would the kernel handle all three children generating traffic at their maximum rates? Can I use an intermediary sfq to ensure fair treatment of aggregate traffic in this case? More importantly, how does the kernel decide which traffic to let through if the total traffic being generated exceeds the bandwidth of the hardware interface?
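My understanding, offered as a sketch rather than a definitive answer: when the children's rates overcommit the parent, each child is only guaranteed bandwidth in proportion to its quantum (by default rate/r2q), so the three classes end up splitting the parent's 100kbps roughly equally here. Something like the following makes that observable (the root qdisc line, the sfq leaves, and the device name are my additions):
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 70kbps
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 70kbps
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 70kbps
# an sfq leaf per class keeps flows inside one class from starving each other
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
tc qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10
# rate, backlog, and lended/borrowed counters per class under load:
tc -s class show dev eth0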
Tenders McChiken (1319 rep)
Aug 10, 2020, 08:00 AM • Last activity: Aug 12, 2020, 09:20 AM
3 votes
1 answer
444 views
Traffic shaping using tc-netem on macvlan
I am setting up a virtual network using macvlans and I have connected traffic control (tc) to each of them. I set the delay for each as 90ms, but on ping I get a time of 0.02 seconds. Why is tc not working on macvlan? I am using the following commands:
tc qdisc add dev m1 root netem delay 90ms
tc qdisc add dev m2 root netem delay 90ms
and then ping from the IP of m1 to the IP of m2. m1 and m2 are macvlans.
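One thing worth ruling out (a sketch that assumes both macvlans currently sit in the same network namespace, which the question does not say either way): when both addresses are local to one host, the kernel answers the ping over the loopback path and neither macvlan's qdisc is ever traversed. Moving one endpoint into its own namespace forces the traffic through the devices:
ip netns add ns1
ip link set m2 netns ns1
ip netns exec ns1 ip addr add 192.0.2.2/24 dev m2    # example address
ip netns exec ns1 ip link set m2 up
ip netns exec ns1 tc qdisc add dev m2 root netem delay 90ms
ping 192.0.2.2    # from the host via m1; with netem on both sides, expect ~180ms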
shaifali Gupta (141 rep)
Apr 9, 2020, 11:46 AM • Last activity: Apr 10, 2020, 03:10 PM
2 votes
0 answers
190 views
ipfw dummynet per ip fair traffic shaping
I want to shape traffic in such a way that no specific user can exhaust the WAN connection so much that other users would be affected. I have an ISP link with 100Mbit/s bandwidth, and sometimes some users can exhaust it when they download something from the internet or via VPN from SMB in a remote office. So, I came up with the following rules:
#traffic shaping for office users
#$ipfw pipe 1 config bw 100Mbits/s
#$ipfw queue 1 config pipe 1 weight 2 mask dst-ip 0xffffffff
#traffic from internet to local subnets (wi-fi and wire) going to the queue
#$ipfw add queue 1 ip from any to { 172.30.0.0/24 or 172.30.1.0/24 } in recv em0
# the same thing but with lower weight for guest wi-fi
#$ipfw add queue 1 ip from any to 192.168.0.0/23 in recv em0
#traffic from remote office via vpn going to the queue
#$ipfw add queue 1 ip from any to any in recv tun101
#$ipfw add queue 1 ip from any to any in recv tun100
So, the thing is that I don't exactly understand the following:
1. What is the difference between 0xffffffff and 0x00000000, and what should be used for per-IP fair shaping?
2. What queue size should be set for the pipe, because as I understand this is quite important?
3. Maybe there is a simpler way to achieve this task?
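For what it's worth, my reading of dummynet's mask, offered as a sketch rather than a verified answer: mask dst-ip 0xffffffff creates one dynamic queue per distinct destination address, and the pipe's bandwidth is then shared among those queues by weight, whereas an all-zero mask collapses everything into a single queue. Per-IP fairness with a lower weight for the guest network might then look like:
ipfw pipe 1 config bw 100Mbit/s
ipfw queue 1 config pipe 1 weight 10 mask dst-ip 0xffffffff
ipfw queue 2 config pipe 1 weight 2 mask dst-ip 0xffffffff    # guest wi-fi, lower weight
ipfw add queue 1 ip from any to { 172.30.0.0/24 or 172.30.1.0/24 } in recv em0
ipfw add queue 2 ip from any to 192.168.0.0/23 in recv em0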
Никита (21 rep)
Dec 17, 2019, 10:18 AM
3 votes
3 answers
2992 views
queueing in linux-htb
I am trying to understand the queuing mechanism of the linux-htb QDisc, and of Linux tc QDiscs in general. What I could gather: during TX, the packet is queued into the queue inside Linux tc. This queue by default follows a pfifo_fast QDisc with a txqueuelen of 1000. The packet scheduler dequeues the packet from this queue and puts it onto the TX driver queue (ring buffer). When we use linux-htb, the txqueuelen is inherited only for the default queue. [Link]
My question: consider the tree (rates are specified in kbits/sec in parentheses):
        1:              root qdisc (class htb) (100)
      / | \
     /  |  \
    /   |   \
 1:1   1:2   1:3        parent qdiscs (class htb) (30) (10) (60)
1. Are there internal queues maintained for each of the parent htb classes (1:1, 1:2 and 1:3)? If yes, what is their queue length? If not, how many queues are actually maintained, and for what purpose? What is their queue length?
2. What exactly is meant by Queueing Discipline (QDisc)? Is it a property of the data structure used (the queue), a property of the packet scheduler, or both combined?
3. While reading the source code of the htb QDisc [Link], I came across something called a direct queue. What is a direct_queue? Provide a link to relevant sources if possible.
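On question 3, one thing that can be poked at directly (a sketch; eth0 is a placeholder and a reasonably recent iproute2 is assumed): htb's direct queue holds packets that match no class when default is 0, it is transmitted at interface speed without shaping, and its length is settable with direct_qlen.
tc qdisc add dev eth0 root handle 1: htb default 0 direct_qlen 1000
# "direct_packets_stat" in the output counts packets that took the direct queue:
tc -s qdisc show dev eth0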
sbhTWR (103 rep)
Feb 28, 2019, 02:36 PM • Last activity: Nov 4, 2019, 06:50 PM
1 vote
1 answer
434 views
Does the Linux traffic control utility modify datagrams, IP packets, or frames?
I am using Linux's traffic control (tc) utility, which to my understanding is used to configure the Linux kernel packet scheduler. I am also using the netem command in tc to add delay, drop, or corrupt traffic. My main question is: does netem modify transport-layer datagrams, IP packets, or link-layer frames (like Ethernet)? I found [this page](https://wiki.linuxfoundation.org/networking/kernel_flow) which explains the network communication flow in the Linux kernel. It mentions that the shaping and queuing disciplines are applied in "Layer 2: Link layer (e.g. Ethernet)". Does this mean that netem adds its corruption, loss, or delay to frames (layer 2)? But since tc filter allows you to apply traffic rules to a certain IP:port pair, does that mean it operates on transport-layer datagrams (layer 4)?
Khalid (113 rep)
Sep 6, 2019, 11:46 AM • Last activity: Sep 11, 2019, 12:17 PM
4 votes
0 answers
2000 views
Is there a way to tag any specific application's traffic with DSCP/ToS
I recently made the switch from Windows to Linux (Manjaro). To manage traffic I had been using a Windows feature that allowed me to specify the name of an application so that its network traffic would be tagged with a specific code (DSCP); my router (pfSense) would then check it and prioritize the traffic accordingly. It's set up with these levels of priority:
1. online games
2. all unclassified traffic (mostly web traffic)
3. steam/origin/windows updates
4. torrents
This made it so that my brother and I could be playing online games, with torrents going, and my parents could open a YouTube video at any time and the torrents/updates would be throttled down automatically by the router; all throughout we would at worst get 5ms jitter and an extra 10 to 20 ping.
When I was thinking about switching to Linux it didn't occur to me that replicating this configuration would be a problem. I was expecting this to be native functionality in iptables or some other Linux firewall, but as it turns out, while the functionality did exist around 2002/2003, it was dropped for being broken and deemed too much trouble to fix. iptables does allow you to mark traffic based on the process PID, but this isn't great for me since I need to tag the first packet that the application sends out, due to how pfSense classifies traffic for prioritization.
So, over the course of a few weeks, trying different search terms before committing to a solution, I have progressed through these options:
- SELinux/AppArmor - these do far more than I want/need them to
- systemtap - (kernel debugging tool) I was getting ready to fumble my way into a script that makes iptables rules while reading the PID/process name from some kind of live kernel probe (not ideal)
- anfd/lpfw - these are firewalls that block everything by default and allow you to set up rules to allow traffic based on the application command name
I'm posting this in the hope that someone has this figured out, because if not I will have to start modifying lpfw or anfd to suit my needs.
TL;DR I want to make it so specific applications get their traffic tagged (ToS/DSCP) based on their command name, so that my router can prioritize them appropriately. Any info on how to replicate this functionality in Linux is appreciated.
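One direction that can be sketched (it assumes cgroup v2 with systemd and an iptables build with the cgroup match; the unit name, cgroup path, and DSCP value are illustrative): launch the application in its own scope and mark that cgroup's traffic in the mangle table, which covers every packet the process sends, first one included.
# Run the game inside its own transient scope (and thus its own cgroup):
systemd-run --user --scope --unit=game ./some-game

# Tag everything that cgroup sends with DSCP EF (46):
iptables -t mangle -A OUTPUT \
    -m cgroup --path user.slice/user-1000.slice/user@1000.service/game.scope \
    -j DSCP --set-dscp 46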
Depak (41 rep)
Aug 10, 2019, 02:41 PM • Last activity: Aug 12, 2019, 05:06 AM
3 votes
0 answers
47 views
How to shape bandwidth fairly between processes?
While running QBittorrent, other processes get virtually no bandwidth, as QBittorrent gobbles up all of it: Chrome loads forever, pings time out, etc. I could just limit QBittorrent to not use all of the bandwidth, but for that I'd have to know the maximum on the current connection, which depends on the network I'm currently connected to, and that changes frequently; this is not a very elegant solution. I use Debian 9. How can I configure it to automatically shape the available bandwidth so that each process gets its "fair" share? I understand "fair" is very vague, but what I have in mind is that if 4 processes try to download/upload at the same time, the bandwidth should be divided roughly equally between them.
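Not per-process fairness, but a sketch of what usually relieves these exact symptoms (the interface name and the rate, which has to be set slightly below the real uplink speed, are assumptions): a flow-fair AQM such as cake keeps pings and page loads responsive even while the torrent client saturates the link, although a client with many connections still gets the larger aggregate share.
tc qdisc replace dev wlan0 root cake bandwidth 18mbit
tc -s qdisc show dev wlan0    # per-tin/flow statistics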
PaperTsar (191 rep)
Jun 21, 2019, 10:07 AM