Optimizing QoS for speed

Report issues relating to bandwidth monitoring, bandwidth quotas or QoS in this forum.


Waterspuwer
Posts: 32
Joined: Mon Nov 12, 2018 6:04 am

Optimizing QoS for speed

Postby Waterspuwer » Sun Apr 14, 2019 8:52 am

After years of no upgrades, my ISP finally decided to upgrade my connection from 40 to 50 Mbit, yay! However, I'm now struggling to reach that with QoS enabled. So far I've disabled ACC and set the maximum speed to 60000 kbit. I now get ~46 Mbit instead of the ~52 Mbit I see without QoS. With ACC enabled it maxes out at 42 Mbit (which is what I had before, but that was also the ISP limit, so I never noticed). OK, it's only a 6 Mbit difference, but that's 60% of my speed increase :lol:

I have an Archer C7v4 running 1.11.X (Built 20190405-0155 git@4685bd7f). CPU load is still acceptable at 0.28 / 0.15 / 0.04 (1/5/15 minutes). Are there any other settings I can change to improve throughput?

Ideally I'd like only *some* clients to be limited, the rest can go through unfiltered, but I don't think that's possible?

Waterspuwer
Posts: 32
Joined: Mon Nov 12, 2018 6:04 am

Re: Optimizing QoS for speed

Postby Waterspuwer » Sun May 12, 2019 9:20 am

I've been doing some experimenting today with the QoS parameters. At the moment the tc divisor used is 256. When I increase this to 512 (for both up and down), lo and behold, the speed is almost identical to what I get without QoS (just a 1-2 Mbit difference, which is OK). No noticeable impact on CPU load. I'll see if it keeps performing like this, but I hope this helps someone else :)
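For anyone curious what the divisor actually does (this is just my rough understanding, not Gargoyle code): the tc "flow" filter hashes each connection key into `divisor` buckets. A sketch of the mapping, using an example address:

```shell
# Rough illustration of how the flow classifier picks a bucket.
# With "map key nfct-src and 0xff" the key is the low byte of the
# conntrack source address, taken modulo the divisor.
key=$((0xC0A80164 & 0xFF))   # example source 192.168.1.100 -> low byte 100
bucket256=$((key % 256))     # bucket index with divisor 256
bucket512=$((key % 512))     # bucket index with divisor 512
echo "$key $bucket256 $bucket512"
```

Note that with the `and 0xff` mask the key never exceeds 255, so if I understand it right, raising the divisor mostly affects the hash-table sizing rather than how keys spread out; take this with a grain of salt.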

SisterFister
Posts: 5
Joined: Fri Apr 08, 2016 5:48 pm

Re: Optimizing QoS for speed

Postby SisterFister » Sun May 12, 2019 10:36 pm

One tip is disabling some processes like bwmon. I've done that, and even went as far as turning off the web server to save some CPU. Unfortunately, I still hit the CPU ceiling with Gargoyle well before saturating the link, which causes the ACC to cut everything to roughly half speed. I'm on a much more generous connection, though.

In OpenWRT git, the simplest form of QoS (tbf+fq_codel apparently), plus removing the MTU fix in the firewall, plus flow offloading lets me nearly saturate my 150 Mbit down, 20 Mbit up connection on an F9K1115v2 with barely any CPU time left. It fluctuates between 96% and 100% CPU usage during saturation.

I'll build on latest, apply some of the tweaks I've used, and dig into the init script for something my nooby behind can tweak to maybe shave off some more CPU.

In short, if you don't do any PPP/tunneling, I would suggest removing "option mtu_fix" in /etc/config/firewall. Furthermore, disabling bwmon and webmon will get you even more speed out of your connection.
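For reference, something like this is what I mean. The config layout below is a stock OpenWrt-style /etc/config/firewall (double-check yours), and the monitor init-script names are my guesses for this build, so verify them before disabling anything. The example runs against a throwaway copy in /tmp:

```shell
# Hypothetical example: drop "option mtu_fix" from a firewall config.
# Works on a copy so the live config is untouched; only do this on the
# real file if you don't use PPP/tunneling, as noted above.
cfg=/tmp/firewall.test
cat > "$cfg" <<'EOF'
config forwarding
	option src 'lan'
	option dest 'wan'
	option mtu_fix '1'
EOF

sed -i '/option mtu_fix/d' "$cfg"
grep mtu_fix "$cfg" || echo "mtu_fix removed"

# To also stop the monitors (script names may differ on your build):
# /etc/init.d/bwmon stop && /etc/init.d/bwmon disable
# /etc/init.d/webmon stop && /etc/init.d/webmon disable
```

On the live router you'd edit /etc/config/firewall itself and then run /etc/init.d/firewall restart.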

Waterspuwer
Posts: 32
Joined: Mon Nov 12, 2018 6:04 am

Re: Optimizing QoS for speed

Postby Waterspuwer » Tue May 14, 2019 6:36 am

SisterFister wrote:One tip is disabling some processes like bwmon. I've done that, and even went as far as turning off the web server to save some CPU. Unfortunately, I still hit the CPU ceiling with Gargoyle well before saturating the link, which causes the ACC to cut everything to roughly half speed. I'm on a much more generous connection, though.

In OpenWRT git, the simplest form of QoS (tbf+fq_codel apparently), plus removing the MTU fix in the firewall, plus flow offloading lets me nearly saturate my 150 Mbit down, 20 Mbit up connection on an F9K1115v2 with barely any CPU time left. It fluctuates between 96% and 100% CPU usage during saturation.

I'll build on latest, apply some of the tweaks I've used, and dig into the init script for something my nooby behind can tweak to maybe shave off some more CPU.

In short, if you don't do any PPP/tunneling, I would suggest removing "option mtu_fix" in /etc/config/firewall. Furthermore, disabling bwmon and webmon will get you even more speed out of your connection.

Thanks, I've already disabled the web usage monitor (don't need it). It's been running for 2 days now with the tc divisor at 512, and the speed is still good. Perhaps it would make sense to increase this by default in Gargoyle? Why was it set so low? I think Linux normally uses 1024 as the default. CPU load is not an issue on my router (Archer C7 v4); even when downloading at full speed it's no more than 0.2-0.3.

pbix
Developer
Posts: 1365
Joined: Fri Aug 21, 2009 5:09 pm

Re: Optimizing QoS for speed

Postby pbix » Wed May 29, 2019 7:23 am

Waterspuwer wrote: At the moment the tc divisor used is 256. When I increase this to 512 (for both up and down), lo and behold, the speed is almost identical as without QoS (just 1-2 mbit difference, which is OK).


I made a test of this and cannot find any measurable difference in speed caused by changing the tc divisor. Please post the exact lines of the file you changed so I can be sure I tested the same settings as yours.
Netgear WNDR3700v2
TP Link 1043ND v3
TP-Link TL-WDR3600 v1
Buffalo WZR-HP-G300NH2
WRT54G-TM

Waterspuwer
Posts: 32
Joined: Mon Nov 12, 2018 6:04 am

Re: Optimizing QoS for speed

Postby Waterspuwer » Sat Jun 01, 2019 2:16 pm

pbix wrote:
Waterspuwer wrote: At the moment the tc divisor used is 256. When I increase this to 512 (for both up and down), lo and behold, the speed is almost identical as without QoS (just 1-2 mbit difference, which is OK).


I made a test of this and cannot find any measurable difference in speed caused by changing the tc divisor. Please post the exact lines of the file you changed so I can be sure I tested the same settings as yours.

I've only modified qos_gargoyle file with this:

tc filter add dev $qos_interface parent $next_class_index: handle 1 flow divisor 512 map key nfct-src and 0xff

and rebooted the router. I modified several copies of the qos_gargoyle file, but I believe the one that actually matters is /etc/init.d/qos_gargoyle.

It's still stable here with good speeds; a speedtest almost immediately reaches 51.xx. With the divisor at 256 it struggled to reach 47.xx (I tried rebooting, the lot; it just wouldn't go any higher). Hope this helps, my knowledge of this is very limited.

Waterspuwer
Posts: 32
Joined: Mon Nov 12, 2018 6:04 am

Re: Optimizing QoS for speed

Postby Waterspuwer » Sat Jun 15, 2019 9:32 am

Still rock stable with this change (Uptime: 34 days, 0 hours, 28 minutes :D ). I did a comparison with the file online, and these are the lines I actually changed:

tc qdisc add dev $qos_interface parent 1:$next_class_index handle $next_class_index:1 sfq headdrop limit $(($tbw/250)) $sfq_depth divisor 512

tc filter add dev $qos_interface parent $next_class_index: handle 1 flow divisor 512 map key nfct-src and 0xff

tc qdisc add dev imq0 parent 1:$next_class_index handle $next_class_index:1 sfq headdrop limit $(($tbw/250)) $sfq_depth divisor 512

tc filter add dev imq0 parent $next_class_index: handle 1 flow divisor 512 map key dst and 0xff

Basically, everything changed from 256 to 512, then reboot.
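If anyone wants to reproduce the edit quickly, a single sed substitution covers all four lines. The sketch below runs against a throwaway copy containing just the filter lines (on the router you'd point it at /etc/init.d/qos_gargoyle instead, then reboot):

```shell
# Demo: bump every "divisor 256" to 512 in a copy of the init script.
src=/tmp/qos_gargoyle.test
cat > "$src" <<'EOF'
tc filter add dev $qos_interface parent $next_class_index: handle 1 flow divisor 256 map key nfct-src and 0xff
tc filter add dev imq0 parent $next_class_index: handle 1 flow divisor 256 map key dst and 0xff
EOF

sed -i 's/divisor 256/divisor 512/g' "$src"
grep -o 'divisor 512' "$src" | wc -l   # 2 lines changed
```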

I wanted to help by making this change for you, but I don't understand how a "pull request" works. I downloaded the zip file and changed this file manually, but how do I submit that somewhere? :?:

