
Re: QoS Percent Bandwidth At Capacity not working right

Posted: Tue Aug 28, 2012 5:59 am
by powerlogy
Qosmon restart command trace:

Code:

+ tc qdisc show
+ grep hfsc
+ awk {print $5}
+ tc qdisc del dev eth0.2 root
+ tc qdisc del dev imq0 root
+ delete_chain_from_table mangle qos_egress
+ delete_chain_from_table mangle qos_ingress
+ set +x
+ tc qdisc add dev eth0.2 root handle 1:0 hfsc default 10
+ tc class add dev eth0.2 parent 1:0 classid 1:1 hfsc ls rate 1000Mbit ul rate 478kbit
+ set +x
+ tc class add dev eth0.2 parent 1:1 classid 1:2 hfsc ls m2 680Mbit
+ tc qdisc add dev eth0.2 parent 1:2 esfq limit 17 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev eth0.2 parent 1:0 prio 2 protocol ip handle 0x2 fw flowid 1:2
+ set +x
+ tc class add dev eth0.2 parent 1:1 classid 1:3 hfsc ls m2 10Mbit
+ tc qdisc add dev eth0.2 parent 1:3 esfq limit 17 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev eth0.2 parent 1:0 prio 3 protocol ip handle 0x3 fw flowid 1:3
+ set +x
+ tc class add dev eth0.2 parent 1:1 classid 1:4 hfsc ls m2 200Mbit
+ tc qdisc add dev eth0.2 parent 1:4 esfq limit 17 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev eth0.2 parent 1:0 prio 4 protocol ip handle 0x4 fw flowid 1:4
+ set +x
+ tc class add dev eth0.2 parent 1:1 classid 1:5 hfsc ls m2 100Mbit rt m1 400kbit d 20ms m2 200kbit
+ tc qdisc add dev eth0.2 parent 1:5 esfq limit 17 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev eth0.2 parent 1:0 prio 5 protocol ip handle 0x5 fw flowid 1:5
+ set +x
+ tc class add dev eth0.2 parent 1:1 classid 1:6 hfsc ls m2 10Mbit
+ tc qdisc add dev eth0.2 parent 1:6 esfq limit 17 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev eth0.2 parent 1:0 prio 6 protocol ip handle 0x6 fw flowid 1:6
+ set +x
+ tc qdisc change dev eth0.2 root handle 1:0 hfsc default 4
+ iptables -t mangle -N qos_egress
+ iptables -t mangle -A POSTROUTING -o eth0.2 -j qos_egress
+ set +x
+ iptables -t mangle -I qos_egress -j MARK --set-mark 0x4
+ iptables -t mangle -I qos_egress -m mark ! --mark 0x0 -j RETURN
+ iptables -t mangle -I qos_egress -m mark ! --mark 0x0 -j CONNMARK --save-mark --mask 0x007F
+ iptables -t mangle -A qos_egress -j CONNMARK --save-mark --mask 0x007F
+ set +x
+ tc class add dev imq0 parent 1:1 classid 1:2 hfsc ls m1 330Mbit d 20ms m2 330Mbit
+ tc qdisc add dev imq0 parent 1:2 esfq limit 57 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev imq0 parent 1:0 prio 2 protocol ip handle 0x200 fw flowid 1:2
+ set +x
+ tc class add dev imq0 parent 1:1 classid 1:3 hfsc ls m2 10Mbit
+ tc qdisc add dev imq0 parent 1:3 esfq limit 57 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev imq0 parent 1:0 prio 3 protocol ip handle 0x300 fw flowid 1:3
+ set +x
+ tc class add dev imq0 parent 1:1 classid 1:4 hfsc ls m1 540Mbit d 20ms m2 540Mbit
+ tc qdisc add dev imq0 parent 1:4 esfq limit 57 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev imq0 parent 1:0 prio 4 protocol ip handle 0x400 fw flowid 1:4
+ set +x
+ tc class add dev imq0 parent 1:1 classid 1:5 hfsc rt m1 400kbit d 20ms m2 200kbit ls m1 110Mbit d 20ms m2 110Mbit
+ tc qdisc add dev imq0 parent 1:5 esfq limit 57 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev imq0 parent 1:0 prio 5 protocol ip handle 0x500 fw flowid 1:5
+ set +x
+ tc class add dev imq0 parent 1:1 classid 1:6 hfsc ls m2 10Mbit
+ tc qdisc add dev imq0 parent 1:6 esfq limit 57 depth 128 divisor 8 hash ctnatchg
+ tc filter add dev imq0 parent 1:0 prio 6 protocol ip handle 0x600 fw flowid 1:6
+ set +x
+ tc qdisc change dev imq0 root handle 1:0 hfsc default 4
+ iptables -t mangle -N qos_ingress
+ iptables -t mangle -A FORWARD -i eth0.2 -j qos_ingress
+ iptables -t mangle -A INPUT -i eth0.2 -j qos_ingress
+ set +x
+ iptables -t mangle -I qos_ingress -j MARK --set-mark 0x400
+ iptables -t mangle -I qos_ingress -m mark ! --mark 0x0 -j RETURN
+ iptables -t mangle -I qos_ingress -m mark ! --mark 0x0 -j CONNMARK --save-mark --mask 0x7F00
+ iptables -t mangle -I qos_ingress -j IMQ --todev 0
+ iptables -t mangle -A qos_ingress -j CONNMARK --save-mark --mask 0x7F00
+ set +x
+ [ -z  ]
+ gargoyle_header_footer -i gargoyle
+ sed -n s/.*currentWanGateway.*"\(.*\)".*/\1/p
+ ptarget_ip=94.54.144.1
+ iptables -t mangle -I qos_ingress -p icmp --icmp-type 0 -d 94.54.154.146 -s 94.54.144.1 -j RETURN
+ tc class add dev eth0.2 parent 1:1 classid 1:127 hfsc rt umax 106 dmax 10ms rate 4kbit
+ tc qdisc add dev eth0.2 parent 1:127 pfifo
+ tc filter add dev eth0.2 parent 1:0 prio 1 protocol ip handle 127 fw flowid 1:127
+ iptables -t mangle -I qos_egress -p icmp --icmp-type 8 -s 94.54.154.146 -d 94.54.144.1 -j MARK --set-mark 127
+ [ -n  ]
+ pinglimit=36
+ qosmon -a -b 800 94.54.144.1 3078 36
+ set +x
Ping times during the restart:

Code:

PING 94.54.144.1 (94.54.144.1): 56 data bytes
64 bytes from 94.54.144.1: seq=0 ttl=255 time=8.462 ms
64 bytes from 94.54.144.1: seq=1 ttl=255 time=8.204 ms
64 bytes from 94.54.144.1: seq=2 ttl=255 time=7.000 ms
64 bytes from 94.54.144.1: seq=3 ttl=255 time=7.098 ms
64 bytes from 94.54.144.1: seq=4 ttl=255 time=8.486 ms
64 bytes from 94.54.144.1: seq=5 ttl=255 time=8.195 ms
64 bytes from 94.54.144.1: seq=6 ttl=255 time=9.607 ms
64 bytes from 94.54.144.1: seq=7 ttl=255 time=7.803 ms
64 bytes from 94.54.144.1: seq=8 ttl=255 time=9.649 ms
64 bytes from 94.54.144.1: seq=9 ttl=255 time=7.474 ms
64 bytes from 94.54.144.1: seq=10 ttl=255 time=7.570 ms
64 bytes from 94.54.144.1: seq=11 ttl=255 time=7.301 ms
64 bytes from 94.54.144.1: seq=12 ttl=255 time=7.299 ms
64 bytes from 94.54.144.1: seq=13 ttl=255 time=6.875 ms
64 bytes from 94.54.144.1: seq=14 ttl=255 time=6.784 ms
64 bytes from 94.54.144.1: seq=15 ttl=255 time=6.758 ms
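
For reference, the classes and rates the script built can also be verified from the shell with standard tc commands (nothing Gargoyle-specific here):

Code:

# dump the configured qdiscs and classes with their byte/packet counters
tc -s qdisc show dev eth0.2
tc -s class show dev eth0.2
tc -s class show dev imq0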

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Tue Aug 28, 2012 5:02 pm
by pbix
Everything looks normal in that output. Did you look at the time shown in the ACC status section after this run? What status and ping limit are shown there?

Confusing.

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Wed Mar 27, 2013 10:29 pm
by powerlogy
Hello again pbix, I'm bumping this thread. Almost a year has passed since I first saw this strange QoS anomaly. In that time I have learned a lot about Gargoyle and OpenWrt, but this problem still bugs me; I still have it. This time I recorded a video of the process.

This is a video of the Gargoyle QoS page and several test pages.
http://youtu.be/retqoKHknIE

Here is a quick description of it.

While the Normal class (percent_bandwidth = 54%) was under load, I started a connection belonging to the Slow class (percent_bandwidth = 1%). But once the link became saturated, those percent_bandwidth values were not honored under any condition.

However, under the same conditions, if a class with minRTT=yes becomes active, then the percent_bandwidth values work correctly for every class under load.
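
For clarity, here is the download tree reduced to just those two classes (a sketch based on my trace above; 540Mbit and 10Mbit are the 54% and 1% link-share weights of the 1000Mbit virtual link, and the 3078kbit cap is my assumption taken from the qosmon arguments):

Code:

# sketch: minimal two-class HFSC link-share setup (values from the trace above)
tc qdisc add dev imq0 root handle 1:0 hfsc default 4
tc class add dev imq0 parent 1:0 classid 1:1 hfsc ls rate 1000Mbit ul rate 3078kbit  # assumed downlink cap
tc class add dev imq0 parent 1:1 classid 1:4 hfsc ls m2 540Mbit  # Normal class, 54%
tc class add dev imq0 parent 1:1 classid 1:3 hfsc ls m2 10Mbit   # Slow class, 1%
# with both classes backlogged, link-share should split roughly 54:1, not 50:50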

This problem occurs from Gargoyle 1.5.3 through the latest trunk builds. Thank you pbix.

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Thu Mar 28, 2013 8:02 am
by pbix
Well, I watched your video, but unfortunately you did not show the ACC section of the QoS download page or the QoS upload page. That is what I requested of you in my last post.

Please add BOTH of those to your video and I will comment.

Also please use v1.5.9 for this test.

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Thu Mar 28, 2013 1:20 pm
by powerlogy
Hello again pbix, thanks for the quick reply. I recorded the QoS upload, QoS download, router load, and connection list pages.

This one has minRTT=yes set on the Slow class.
http://youtu.be/d4jIvP29Ces

Thank you.

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Thu Mar 28, 2013 2:16 pm
by pbix
I watched your video and everything looks good unless I am missing something.

You need to post a video showing the problem you are asking about, not a video of your router working perfectly.

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Thu Mar 28, 2013 3:44 pm
by powerlogy
pbix wrote:I watched your video and everything looks good unless I am missing something.

You need to post a video showing the problem you are asking about, not a video of your router working perfectly.
Mate, the problem is in the first video; look at the bandwidth sharing between the two classes. The Slow class has 1% bandwidth, but the QoS script shares bandwidth equally with the Normal class, which has 54%, while the link is under load.

Nothing is wrong in the second one because minRTT is active. This problem doesn't occur when minRTT is on. It's as simple as that.
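
If you want numbers instead of the GUI graphs, the per-class byte counters show the actual split while both transfers run (standard tc, a sketch assuming the imq0 tree from my trace):

Code:

# sample the per-class counters twice, a few seconds apart
tc -s class show dev imq0
sleep 10
tc -s class show dev imq0
# compare the byte deltas of 1:4 (Normal) and 1:3 (Slow);
# roughly equal deltas mean the 54:1 link-share is being ignored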

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Thu Apr 11, 2013 7:09 pm
by powerlogy
Hello, hello!

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Sat Apr 20, 2013 10:56 pm
by Domarius
This is one of many unresolved torrenting-related threads.

Isn't it obvious by now that connection limits are the problem?

Re: QoS Percent Bandwidth At Capacity not working right

Posted: Mon Apr 22, 2013 9:08 am
by powerlogy
Domarius wrote:This is one of many unresolved torrenting-related threads.

Isn't it obvious by now that connection limits are the problem?
Finally someone understands the whole thing. Torrent or not, this happens with qos-gargoyle.