Quality of Service (QoS)

Introduction

This is not a step-by-step guide on how to configure your QoS. I do not know how to write such a guide, so instead I will write about how QoS operates, in the hope that, armed with a little knowledge, you will be successful in what you want to accomplish. There is a lot written here because unfortunately the subject is complex and full of nuances, so in my view that is what is required.

The simplest thing you can do is to use the default setup. QoS will then make sure that all the devices connected on the LAN side of your router share the WAN equally. It will also ensure that web browsing gets priority over any other WAN activity. QoS can do much more than this, but to get more you need to read more. To use the default, enter your upload line speed on the upload QoS page and check the enable box. Then move to the download page, enable download QoS, enter your download line speed and check the enable box for the active congestion controller. To learn more about how QoS works and what else you can do with it, read on.

Quality of Service (QoS) is the term used to describe how we prioritize access to a limited resource. The limited resource in your router's world is access to the wide area network (WAN) link which connects the router to the Internet. This is almost always the most expensive and limited resource you have. When I use the term QoS, do not be fooled into thinking it means everyone gets high quality of service. Quite the opposite is true: for some to experience high quality, others must experience low quality. Perhaps a better term would be “Priority of Service” but, hey, I did not invent the term QoS and there are already enough synonymous terms out there, so we will use the term QoS.

Let’s start by trying to identify when QoS might provide a benefit for you. If, for example, you are already happy with your Internet experience, then you do not need to use QoS or to read any further. However, if you play online games or use voice over internet (VoIP) technology, then you know that when someone else is watching YouTube videos at the same time you suddenly get high pings and timeouts, or very poor voice quality. Another case might be that your roommate runs his BitTorrent application constantly and your web browsing suffers greatly because of it. Or you may administer a campsite and get complaints that some campers have good access but others are not getting their fair share. Having multiple people, devices or programs involved is when you can benefit from QoS. Fairness is the goal of the QoS system. We say QoS is being fair when it is able to enforce the rules you created for internet access. QoS is perhaps the only time in your life you get to decide what is fair.

An important fact about fairness is that it has a cost. In the case of QoS the cost comes in terms of reduced utilization of your WAN link. Lots of work has been invested in making this cost as low as possible, but you are going to take a 5-10% hit on your WAN throughput to get fairness. This is the cost of QoS, and if you do not want to pay it you can stop reading now. When you are having the problems that QoS can solve, you will be more than willing to give up 5% of your bandwidth to solve them.

How about an analogy? I fly a lot, and if you do too then you understand that when it’s time to board the airplane we do not all just rush at the door. The gate agent enforces the airline’s quality of service plan for each passenger. She starts by boarding handicapped folks, then we move on to the airline’s diamond members, then gold, silver, and finally we arrive at the bulk class. In this analogy the gate agent is the router and the passengers are the packets of data trying to get through to the WAN. The point is that for those diamond members to experience high quality the average Joes must wait. When people are waiting to board we call this the 'saturated' condition because the door cannot accommodate any more people per second. One interesting lesson from this example is that if there is no one else waiting to board the plane when you show up, it does not matter what your status is: you get to board next. The lesson here is that if the WAN is not saturated your QoS setup will not matter much; all packets get transmitted immediately.

Before we get into the particulars of how to configure your router we need to discuss the concepts of packets, classes and rules. Data on the Ethernet travels in packets. Each packet is preceded by a header which contains information about its source, destination, type and length. In its journey through the QoS system a packet is never broken up; all its bytes travel together. Packets range in size from 64 bytes to roughly 1500 bytes. Rules match packets to classes. When a packet arrives, the router uses the rules to figure out which class to route the packet into. For the most part, rules look at data in the header of the packet to decide what to do. Classes are where packets wait before being allowed on to their final destination. How long they wait is determined by how busy the WAN link is and the service level specified for the class they are assigned to. When the link indicates it is ready to accept another packet, your router consults the classes to determine which class is entitled to transmit next. This is the fundamental process your router goes through, and it is good to keep in mind when configuring your QoS setup.

If you are new at this then I recommend that you enter your link speeds on the QoS upload/download pages but otherwise just enable the default QoS configuration so that you can study how it works. In particular, look at the “Status→Connection List” page and the QoS column shown there. Looking at this page you can see whether your rules are working correctly or not. There is a rule in the default configuration for destination/source port 80. Port 80 is the well-known port used by web browsers. If you open your web browser and navigate to www.google.com you will notice several new connections appear, all marked for the ‘Normal’ class. This proves that our rule is working. Any time you write a rule for QoS you should use this technique to ensure that it is working the way you intend. The most common error I see with new users is improper rule writing, followed by failure to check that the rule is operating properly.

Now a short word about connection tracking. When an application on your computer starts to communicate with another computer on the internet, it is the normal course of events that the two computers exchange many packets back and forth. If you were to look at the header area of these packets you would see that they are nearly identical, since the source, destination and type of the packets are all the same. Your router calls this stream of packets a ‘connection’. The connection starts when the first packet is sent and ends when no more packets are being sent. To learn more about connection tracking I refer you to www.google.com; there is much written about this subject.

Rules can be written to match on the contents of the packet headers, the number of bytes which have passed through the connection, and the data that appears in the first few packets of the connection (called L7 pattern matching). Rule writing is the most frustrating part of QoS. There are only limited ways that we can classify data reliably and often we must compromise on what we want to do because of it. You have to think about how to classify your traffic based on what the rules can do, which probably is not exactly what you want but can be very close in many circumstances.

QoS Example

Let’s start with an example in which we want a specific computer on your LAN to have higher priority internet access than the computers in the Normal class. We really have only one way to identify a particular computer, and that is by its IP address. To guarantee that this specific computer always gets the same IP address, we visit the Connection→DHCP page and assign it a static IP address based on its MAC address. Now every time this computer asks the router for an IP address the router will give it the same one. Finally we can write our rule using that IP address, knowing that it will apply to this computer. When this computer sends a packet to the internet, the source IP address will be this IP address. Whenever the response comes back from the internet, the destination address will be this IP address. So on the QoS download screen we use this IP as the destination address, and on the QoS upload screen we use this IP address as the source address. This is an important point and often the cause of erroneous rules.

This idea of source/destination also applies to port numbers which are another way we have to classify packets.

Matching

Matching by IP address:

This was covered in some detail in the example above. On the download page we normally use the destination address and on the upload page we normally use the source address. Using the source address on the download page (or the destination address on the upload page) is problematic because we usually do not know the address of the server we are talking to. IP addresses can be specified as a single address or in what is known as CIDR notation. CIDR notation allows you to specify a range of addresses (e.g. 192.168.1.8/30). I refer you again to Google to learn more about how to specify ranges with this type of notation.
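
If you are unsure which addresses a CIDR block like 192.168.1.8/30 actually covers, you can check it on any PC with Python's standard ipaddress module. This is purely an illustration of the notation, not something you run on the router:

    import ipaddress

    # 192.168.1.8/30 covers the four addresses 192.168.1.8 - 192.168.1.11
    net = ipaddress.ip_network("192.168.1.8/30")
    print(net.num_addresses)     # 4
    for addr in net:
        print(addr)              # 192.168.1.8, .9, .10, .11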

Matching by port number

Matching on port number can be an effective way to classify packets when it can be applied. The most useful ports are 80 & 443, the ports used by your web browser. A few other ports are also well known enough in function to be used effectively. Unfortunately many programs do not use well-known ports but instead use ranges of port numbers or even random ports. When this occurs it is difficult to use port numbers to classify data. Popular file sharing applications fall into this category. Sometimes such applications have a settings page that allows you to constrain which ports they use, making it possible to write port-based rules. But if the application does not have such a setting, or you do not have access to the computer to change the setting, you are out of luck and cannot use this method. Ranges of ports can be specified by separating the minimum and maximum port numbers with a ‘-’ (e.g. 20000-21000).

Matching on Packet length

You can match based on the length of the packet. The maximum length field causes the rule to match any packet smaller than the specified maximum length, and the minimum length field causes the rule to match any packet larger than the specified minimum length. One use of this would be to match ‘ACK’ packets. These packets are typically 64 bytes long, and while other packets may also be 64 bytes long you can get pretty good selection in this manner.

Protocol Matching

Packets from your LAN are either ICMP, TCP or UDP packets, and since this information is contained in the packet header we can match on it. To learn more about protocols I again refer you to ‘The big G’. Online game systems often use UDP packets for their real-time play, so using this fact alone you can give online play some priority over other programs running on the same computer.

Matching by connection bytes

Referring back to the concept of a connection discussed earlier, it is possible to change the class of a packet once the connection it belongs to has accumulated a certain number of total transmitted bytes. Here we are not matching on the length of the individual packets but rather on the total number of bytes that have passed through the connection. For example, if a user is watching a video, the number of bytes in the connection quickly accumulates as the video downloads even though each packet is at most 1500 bytes. By setting a threshold you can change the class of data passing through the connection. So, for example, on a busy LAN such a user would be able to load the first part of the video quickly before the router changes his connection's class and he gets slowed down. This is the method behind “Speedboost”, which is marketed by at least one ISP in the USA. They hope that by giving their customers snappy page loads in their web browser and only limiting the data-intensive downloads they will be rewarded with more customers. “Speedboost” in this case is a type of QoS implemented by the ISP.
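
Gargoyle does this inside the router using the connection tracking described earlier, but the idea itself is simple enough to sketch. Here is a toy Python illustration of the mechanism; the threshold, class names and addresses are invented for the example:

    # Toy model of a connection-bytes rule: once a connection has moved more
    # than THRESHOLD total bytes, its packets change class.
    THRESHOLD = 5_000_000                    # hypothetical 5 MB cutoff

    connection_bytes = {}                    # connection id -> total bytes so far

    def classify(conn_id, packet_len):
        connection_bytes[conn_id] = connection_bytes.get(conn_id, 0) + packet_len
        return "Normal" if connection_bytes[conn_id] <= THRESHOLD else "Bulk"

    # A video download: the first ~3300 full-size packets stay in Normal,
    # everything after that is demoted to Bulk.
    for _ in range(4000):
        cls = classify(("10.1.1.5", "203.0.113.7", 443), 1500)
    print(cls)                               # Bulk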

You might think connection bytes would be a good way to deal with file sharing programs, but be careful. There is an ongoing arms race between LAN administrators trying to balance the needs of all their users and the authors of these programs. The authors spend their lives dreaming of ways to circumvent bandwidth controls and port blocks so that their programs work effectively. So if the performance of a particular connection deteriorates too much, such applications will just close it and open another fresh one. I think you can still get some benefit from using connection bytes, however, if you do not cause the connection to slow down too much.

L7 Pattern Matching

This method of matching looks at the data (not the header) in the first few packets transmitted on a connection. The idea is that by looking at the data we can determine something about the application that is sending it and classify appropriately. The trouble is that applications are changing constantly, sending new and different types of data, which makes any pattern matching problematic. Couple this with the rise of SSL (encrypted data) and it becomes impossible to glean anything meaningful from the data passing through the router. As a result I do not recommend this type of matching and may remove it entirely from Gargoyle in the future. The authors of the L7 matching code seem to agree, having pretty much given up on maintaining or improving the code.

So now we have covered all the individual ways we can match on packets. Gargoyle allows you to combine these together to form more complex rules. First, we can select multiple methods in one rule. In this case all the individual elements must match before the rule is considered a match. Next, we can write multiple rules. These are evaluated by the router in the order that they appear in the list, starting at the top. Once a match is found the evaluation stops and the class determined by the matching rule is applied. Finally, if none of the rules match we apply the default class. In all cases every packet must be placed in a class. Again I must emphasize how critical it is to test your rules by observing the connection list to make sure they are working as you intend. I am not aware of any limit on the number of rules you can write other than the amount of RAM and CPU performance your router has.
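
If it helps to see that evaluation order written down, here is a minimal Python sketch of the process just described: rules are tried from the top, every element of a rule must match, the first match wins, and anything left over lands in the default class. The rule format and field names are invented for illustration and are not Gargoyle's internal representation:

    # Each rule lists the header fields it tests (all must match) and its class.
    RULES = [
        {"match": {"dst_port": 80},           "class": "Normal"},
        {"match": {"dst_port": 443},          "class": "Normal"},
        {"match": {"src_ip": "192.168.1.50"}, "class": "Fast"},
    ]
    DEFAULT_CLASS = "Normal"

    def classify(packet):
        for rule in RULES:                    # evaluated top to bottom
            if all(packet.get(f) == v for f, v in rule["match"].items()):
                return rule["class"]          # first match wins, stop here
        return DEFAULT_CLASS                  # nothing matched

    # Order matters: port 80 traffic from 192.168.1.50 hits the first rule,
    # so it lands in Normal, not Fast.
    print(classify({"src_ip": "192.168.1.50", "dst_port": 80}))    # Normal
    print(classify({"src_ip": "192.168.1.50", "dst_port": 6881}))  # Fast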

Classes

As soon as a packet arrives at the router, the QoS rules are consulted to figure out what to do with it. Rules do not store packets; they only analyze them and send them to the appropriate class. It is the class that contains the queue where the packet waits until its time comes. One attribute of Gargoyle classes requires no configuration, and that is per-IP sharing. Per-IP sharing ensures that when multiple devices on your LAN are using the same class they all get an equal share of whatever bandwidth that class has. In addition to this there are four user-definable attributes for a class. We will look at those next.

Percentage Bandwidth (BW)

This is the percentage of the bandwidth the class will receive when the WAN link is saturated and all the classes have packets waiting to transmit. When I use the word ‘saturated’ here it means the link is completely busy and packets are queued up waiting to transmit. This concept is the same whether we are talking about the uplink or the downlink. If the link is not saturated, percent bandwidth does not delay the packet and it gets sent immediately. So this parameter only has an effect when the link is saturated. To find out what happens when the link is saturated but not all the classes have packets waiting, refer to the first FAQ below. If you enter the classes such that their percentages do not add up to 100%, Gargoyle will adjust them for you so that they do. The percentage we are talking about is a percentage of the total available bandwidth entered in the field directly below the class table. Percent bandwidth is what most people think about when they think about fairness. Here are some examples of how this can be used.

Example 1 - College Life

Roommate A pays for the internet but agrees to allow roommates B & C to use the internet when it is not busy. In this case we define two classes, one with 1% BW and one with 99% BW. Now we use the IP addresses of B & C's computers to direct their traffic into the 1% class and direct A's traffic into his premium 99% class. Whenever the link is saturated, A will get 99% and B & C will each get half of 1%. If A is not using the internet but the link is still saturated, B & C will each get 50%, since Gargoyle will allocate all available bandwidth to the waiting class and divide it equally by IP address. If the link is not saturated everyone gets what they ask for, since the total demand is not enough to saturate the link. I think you can see that if roommate A were more benevolent he could increase the share of the freeloaders to whatever he felt was appropriate.

Example 2 – Equal Rights

Here Dad pays but everyone in the house gets equal access to the internet when it is busy. This is a pretty simple case. Only one class is needed since Gargoyle already shares bandwidth by IP address. So the one class gets 100% BW and the link is shared equally between all active users. You will get this effect if you just enable the default QoS setup.

Minimum Bandwidth

This is the bandwidth Gargoyle guarantees to provide the class. This is not quite the same as percent bandwidth in several cases. One important case is when Gargoyle's active congestion controller is used. As we will see later, the controller monitors the downlink performance and detects when the service your ISP is delivering varies. More on that later; for now suffice it to say that this is the attribute you should use when a fixed amount of bandwidth is needed by a particular application. The three most common cases I know of are online gaming (e.g. Xbox Live), VoIP (SIP service or Skype) and streaming video (e.g. Netflix). If you use any such applications you should determine what they require by first creating a rule and class for them using any parameters you want, and then observing what they consume while they are in use. Then set the minimum bandwidth to this value with a little margin (say 10%). Gargoyle will satisfy the minimum bandwidth requirements of all classes before working on the percent bandwidth requirements, so use this parameter judiciously. When Gargoyle calculates the percentage bandwidth it counts whatever bandwidth a class already received under its minimum toward that calculation. Like percentage bandwidth, minimum bandwidth does not have any meaning unless the link is saturated. If the minimum bandwidths of the active classes add up to more than the available bandwidth then obviously they cannot all get the minimum they require; instead each gets proportionally less than its minimum. You should try to avoid this situation by using minimum bandwidth sparingly and only for the types of applications that need it.
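
The exact scheduler arithmetic inside Gargoyle is more involved, but a rough Python sketch of the behaviour just described (minimums satisfied first, then the remainder handed out by percentage, counting what a class already received under its minimum) might look like this. The class names and numbers are made up for illustration:

    # Toy model of allocation on a saturated link.
    def allocate(total_kbps, classes):
        # classes: {name: {"min": kbps, "pct": percent}}
        min_total = sum(c["min"] for c in classes.values())
        scale = min(1.0, total_kbps / min_total) if min_total else 0.0
        alloc = {name: c["min"] * scale for name, c in classes.items()}

        remaining = total_kbps - sum(alloc.values())
        pct_total = sum(c["pct"] for c in classes.values())
        for name, c in classes.items():
            target = total_kbps * c["pct"] / pct_total   # percentage target
            extra = min(max(0.0, target - alloc[name]), remaining)
            alloc[name] += extra
            remaining -= extra
        return alloc

    # 10,000 kbps link: a Gaming class guaranteed 1,000 kbps plus a 20% share.
    print(allocate(10000, {"Gaming": {"min": 1000, "pct": 20},
                           "Normal": {"min": 0,    "pct": 80}}))
    # -> {'Gaming': 2000.0, 'Normal': 8000.0}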

Maximum Bandwidth

This parameter defines the maximum bandwidth the class can use even if more link capacity is available. This is the only class parameter which is not affected by link saturation. The class will not get more than this amount of bandwidth even if the WAN is not fully utilized.

MinimizeRTT (v1.5.4 & higher)

This attribute is only found on the QoS download page and only has an effect if you also enable ACC. Without ACC enabled there is no control of your RTT and no way to predict what it will be.

Some applications are greatly affected by round trip times (RTT). RTTs can approach 2-3 seconds on a congested link, which is a serious problem if you are interacting with something in real time. Let’s say you are talking to someone on Skype and it takes 2 seconds for them to hear you and another 2 seconds for you to hear their response. It’s going to be a frustrating conversation. The idea in Gargoyle is to provide low RTT when a class that needs it becomes active, and high WAN utilization otherwise. When using the ACC, RTTs will be controlled to around 150ms in active mode. If your application can accept this value you should not set the MinRTT flag. By setting this flag you instruct the ACC to switch to minimum RTT mode when this class is active, resulting in RTTs around 50% lower at the cost of your WAN link utilization dropping by around 30%.

Example 3 - Netflix, online gaming and everyone else

In this case Mom likes to watch her Netflix while Dad and Junior play games and Sis is on her Facebook page. For the game play we create a class with the required minimum bandwidth, found by first observing how much bandwidth the games actually use. We know that gaming requires low ping times, so we create a ‘Gaming’ class, enter the minimum bandwidth and turn on the MinRTT flag. Next we observe how much bandwidth Mom is using while she is watching her Netflix video. Mom will get aggravated if her movie keeps pausing to get more data, but we also know that watching a movie is not interactive and ping times are not important. So we create another class, ‘Netflix’, with the required minimum bandwidth but with the MinRTT flag turned off. We write rules to direct traffic from these applications to the appropriate classes and leave all other traffic in the Normal class.

Total Bandwidth Field

Below the class table are the total (upload/download) bandwidth fields. Proper setting of these fields is important to making QoS function properly. First determine the speeds your upload/download links can deliver. One way to do this is by turning QoS off and using http://www.speakeasy.net/speedtest/ with no other computers or applications trying to use your WAN link. Do it a couple of times and record separately the minimum upload speed and the minimum and maximum download speeds you obtain. If you run it twice, for example, and once you get 1Mbps and the second time you get 2Mbps, then your minimum is 1Mbps and your maximum is 2Mbps.

Now start with the upload page, entering 95% of the minimum upload speed you saw. If the minimum changes over time then you may need to lower it further. This can be tedious, but in my experience upload speeds do not vary much, so your first test will likely be sufficient.

On the download page the value you enter depends on whether you are using ACC or not. When using ACC you enter the maximum download speed you saw. If you enter a value even 10% higher, that's OK too. ACC is going to automatically adjust the actual speed QoS uses to between 12% and 100% of this number. If you enter a number that is too high (say 2x your link speed) then you just lose some range, because ACC will only adjust between 12% and 50% (all the numbers between 50% and 100% are too high), but it will still basically work.

If you are not using ACC then things get more complicated. You need to enter the minimum value of your downlink speed. Since downlink speeds can vary significantly over time (a 20%-80% variation is not unusual), this can be an impossible task. If the downlink speed is set too high your QoS will simply not work. If set too low you will underutilize your link and feel cheated. For this reason I recommend that you use the ACC if you want your QoS to work.
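
If it helps, here is a tiny Python helper that turns speed-test results into the values to enter, following the recommendations above. The 95% figure and the ACC adjustment range are the ones given in the text; the sample speeds are made up:

    def qos_link_settings(min_up, min_down, max_down, use_acc=True):
        """Values (in kbps) to enter in the QoS total-bandwidth fields."""
        upload = 0.95 * min_up            # 95% of the slowest upload you measured
        # With ACC, enter the peak; it then adjusts between ~12% and 100% of it.
        # Without ACC you must stay at the minimum you ever measured.
        download = max_down if use_acc else min_down
        return {"upload_kbps": round(upload), "download_kbps": round(download)}

    # Speed tests showed 900-1000 kbps up and 8000-15000 kbps down:
    print(qos_link_settings(900, 8000, 15000, use_acc=True))
    # -> {'upload_kbps': 855, 'download_kbps': 15000}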

Active Congestion Controller (ACC)

For QoS to function, your router needs to know the maximum rate at which data can pass through the WAN link. You can experiment yourself with ACC off and see. If you put too high a number in your downlink speed field, QoS breaks down and stops working. If you put too low a number, your QoS works but your data rate is limited and you feel cheated. So there is a perfect number. If you can find that perfect number and it does not change, ACC will not provide a benefit to you. However, for most users there is no perfect number because the speed your ISP provides varies over time. This is where ACC can help you.

The active congestion controller continuously monitors your WAN download performance and adjusts the total downlink bandwidth in response to changes. By adjust I mean it will change the downlink speed QoS is using. The values it will use are between the amount you entered as the peak downlink speed and 1/8 of this value. This is the dynamic range, so to make the most of the available adjustment range it is important to enter the correct value of the peak download speed you can get, and not more.

The amount of download bandwidth actually delivered by your ISP will vary as conditions on their network change. Like you, your ISP has a WAN link to the internet. When their WAN link saturates they must limit the delivered performance to all their customers. The ACC detects the performance being delivered by monitoring the round trip times (RTT) of ping packets sent to your ISP's gateway. The RTT corresponds roughly to the amount of data that is queued by the ISP, waiting to travel over the WAN link to your router. Controlling the amount of data that is queued is how the ACC makes QoS work. The whole concept of ACC requires that ping times increase when the link is saturated. This is commonly the case, but if somehow it is not the case for your ISP connection then ACC cannot work and you should leave it off.

Let’s think about the queue that your ISP has for you. The amount of data waiting in the queue will grow or shrink based on several factors. A critical point is reached when it grows so large that no more data can fit in the queue. In that case packets must be discarded by your ISP. This turns out to be bad for QoS. For QoS to function accurately, it alone must decide which packets should be dropped. When the ACC is in active mode it is controlling the data flow such that this queue does not overflow and only your router's QoS is dropping packets. Notice I said that the ACC controls the length of the downlink queue. It cannot control the speed of your downlink. As long as the queue in front of your WAN downlink has enough data in it, your downlink will be fully utilized. This is the goal of the ACC in active mode: to keep your downlink fully utilized by allowing the queue to grow to the necessary length, but not so long that packets get dropped.

It turns out that the proper amount of data in your queue for full utilization would take about 100ms to drain away if no new packets arrived. This also means that each packet has to wait around 100ms in the queue before it proceeds. This affects the round trip time (RTT) of a packet exchange. If, for example, I send a ‘ping’ message to a computer on the internet, the response will have to wait in this queue. Add in other overhead and you end up with a total RTT of around 150ms in this case. Some applications will be affected by an RTT of this length. So the ACC has another mode in which it reduces the average length of this queue to about half this value. This leads to RTTs of around 75ms but at the cost of WAN utilization. When MinRTT mode is active, WAN utilization drops by about 20% but the RTT is cut in half. This is a useful compromise, which the ACC makes when a class becomes active that indicates it needs minimum RTTs; this is shown in the status display as MinRTT mode.
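
To put a number on “about 100ms of queue”: the amount of queued data is simply the link speed multiplied by the drain time. A quick back-of-the-envelope calculation in Python (the 10Mbps link speed is only an example):

    # Bytes sitting in a queue that takes `drain_ms` to empty at `link_mbps`.
    def queue_bytes(link_mbps, drain_ms):
        return link_mbps * 1_000_000 / 8 * drain_ms / 1000.0

    print(queue_bytes(10, 100))   # 125000.0 bytes (~83 full-size packets), active mode
    print(queue_bytes(10, 50))    # 62500.0 bytes, roughly the MinRTT target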

The ACC considers a class to be active if the bandwidth used by that class exceeds 4kbps.

The following settings are available for the active congestion controller:

Enable active congestion control: Self explanatory.

Use non-standard ping target: The ACC needs to bounce pings off of a computer on the other side of your WAN link in order to determine the amount of congestion which is present. By default the ACC uses the gateway assigned to the WAN port as this target; however, this is often not the appropriate target and must be changed. Unfortunately I am not sure how to robustly determine a good ping target for all cases, so you will have to pay attention to this setting. If your ACC is not working this is the first thing to play with. Remember that the ACC controls the congestion between your router and this target, so you need to pick something on the ISP side of your WAN connection. One target that I often use for experimentation is the OpenDNS server 208.67.222.222, so if the default is not working then try that one next. The optimum target will be one between your router and this server. You can use traceroute (Google it) to examine all the routers your traffic went through to get to OpenDNS. Then, looking at the times listed in its output, pick the one with the first significant time increase, or play with several until you find the closest one to your router that works with ACC. (A small sketch for comparing candidate targets follows these settings.)

Use custom ping limit: This is the ping limit that ACC will use in MinRTT mode. Unless you check this box the ACC will automatically determine an appropriate ping limit based on your link speed. The algorithm for computing the ping limit should be robust, but in case it comes up with a bogus value on its own this setting can be used to remedy the problem. This value will normally be between 40-90ms and can be observed as the ping time limit when the ACC is in MinRTT mode.
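
Here is the sketch mentioned under the ping target setting. It compares candidate targets from a PC on your LAN using the ordinary Linux ping command; the candidate addresses are placeholders you would replace with hops from your own traceroute output:

    import re, subprocess

    # Hops you found with traceroute, plus the OpenDNS address mentioned above.
    candidates = ["192.168.100.1", "10.0.0.1", "208.67.222.222"]   # examples only

    for host in candidates:
        out = subprocess.run(["ping", "-c", "4", "-q", host],
                             capture_output=True, text=True).stdout
        avg = re.search(r"= [\d.]+/([\d.]+)/", out)    # average RTT from the summary line
        print(host, avg.group(1) + " ms" if avg else "no reply")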

Router Performance

Your router has a CPU and that CPU has a limit on how much data it can process per second. Almost nothing written on this page will be true if you are trying to exceed the limitations of your router. To paraphrase Clint Eastwood, “A man's got to know his limitations”, so know your router's limitations. When you are exceeding your throughput limitation you will see the “CPU Load Average” on the Status screen approach 1.0 and strange, unexplained things happening.

This will happen somewhere between 10Mbps and 500Mbps depending on your router and which Gargoyle features you are using. To use Gargoyle you must reduce the download/upload link speeds on your QoS pages so that your CPU never gets near the 1.0 limit, even under fully saturated conditions.

Bandwidth monitoring and QoS are the two features that take the most processing power on your router. If you turn them off you will get more throughput, but of course you will lose many of the reasons you are trying to use Gargoyle.

Don't complain on the forum that your router's native firmware gives you better throughput than Gargoyle firmware does. With Gargoyle you are getting features and stability which you do not have with your native firmware. If you cannot achieve the speeds you want, get a faster router.


All routers have a maximum processing speed for the WAN link. If you lower your total WAN bandwidth (upload plus download) to below this maximum on the Gargoyle QoS screens then Gargoyle will throttle your throughput and all your Gargoyle functions will work properly. This may result in you not being able to utilize the full bandwidth your ISP provides you but you will have stable and predictable performance.

Selecting a router that has enough horsepower to handle your full bandwidth is important if you really want to use all your available bandwidth. Stock firmware which comes with your router will usually provide higher throughput than Gargoyle. The reason for this is simple: the stock firmware does not have the advanced features of Gargoyle, especially QoS and bandwidth monitoring. These are the features that require CPU horsepower. If you turn them off in Gargoyle you will also see a higher throughput capability.

Like a car, top-end speed is not the only desirable feature. The many other features that you use every day are usually what you should concentrate on, and these are what Gargoyle provides.

FAQS

Q. I have three classes defined with 15, 35 & 60% link share. How is the link shared if only the first two classes are active and the WAN link is saturated by them?

A. If the link is saturated then it is divided according to the percentages of the active classes. So in this case the split becomes 15/(15+35) = 30% and 35/(15+35) = 70%.
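
In other words, the percentages of the idle classes are ignored and the remaining ones are renormalized. Expressed as a quick Python check (same numbers as the question):

    # Only the active classes share the link; their percentages are renormalized.
    active = {"A": 15, "B": 35}                 # the 60% class is idle
    total = sum(active.values())
    print({name: round(100 * pct / total) for name, pct in active.items()})
    # -> {'A': 30, 'B': 70}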

What is per IP sharing?

Q. If I have data from several computers directed into the same class, how is the class bandwidth shared between them?

A. Prior to Gargoyle v1.5.4 there was no control over this and the sharing could not be predicted. Starting with v1.5.4, Gargoyle shares bandwidth equally between the different IP addresses directed into the same class. This makes QoS setups for large LANs much easier. For example, if you merely want all computers on your network to have the same bandwidth, you only need to create a default class, delete all your rules and enable QoS. Per IP sharing within a class requires no configuration and cannot be turned off. If you want a particular IP to be treated specially you need to make exclusive classes for it and rules to match.

enable minimize RTT

Q. I think I will just enable minimize RTT on all my classes; minimum RTT is good, right?

A. Well, minimum RTT is good but you have to pay for it with your download WAN utilization. The reason for this is the same reason we have lines everywhere in our lives. If you go into the bank, the only way you can have a minimum RTT (i.e. get in and out quickly) would be for either there to be no one waiting in line in front of you, or to have the bank president come out when he saw you and move you to the head of the line. Not fair to the other customers, but that's QoS.

In the case of the WAN downlink there is no way to re-order packets that are already queued to travel over the link, so we are left with keeping the line short as the only means available.

Use MAC addresses in my rules

Q. I want to use MAC addresses in my rules because it’s easier for me. Why can I not use MAC addresses?

A. The MAC addresses of the devices on your LAN are not available to QoS. In the Linux routing architecture this information is stripped from the packets before they get to the code executing QoS. You can approximate the behavior you want by assigning a fixed IP address to the device with the MAC in question. Then you can write rules against this IP address.

QoS for traffic between computers

Q. I want to use QoS for traffic between computers on my LAN. How can I do that?

A. The short answer is that you cannot do this; QoS only operates on traffic between your router and your ISP (the WAN link). In most routers, LAN↔LAN traffic is handled by a switch in hardware and is therefore never seen by your router's software. This is also why LAN↔LAN traffic can have much higher bandwidth than LAN↔WAN traffic. Since QoS never sees this traffic there is nothing it can do. This applies to all WiFi traffic as well: QoS can do nothing to help with congestion caused by an overloaded radio link. I advise serious gamers to use a hardwired connection between their console/PC and the router so they will not be affected by unpredictable traffic jams on the radio link.

Number of rules and classes

Q. What is the limit on the number of rules and classes I can have?

A. You are limited by your router's available memory and CPU power. I do not know these limits, which will vary by router, but they will likely be hit before you reach the theoretical limit on rules and classes. The theoretical limit on the number of rules I am not sure of, but it will be at least in the thousands and possibly in the millions. The theoretical limit on the number of classes is 125 for upload and another 125 for download.

Class 'Priority'

Q. How can I get my special class to get ‘Priority’ over other classes?

A. In Gargoyle we use the concept of bandwidth to allocate the WAN resource, not the concept of ‘priority’. One reason is that the word 'priority' is ambiguous in meaning. It could mean that priority packets always get transmitted before the lower priority classes. You can achieve this behavior in Gargoyle by setting the minimum bandwidth of the priority class equal to your link speed and setting the minimum bandwidth of all other classes to zero. In the real world you will find that Gargoyle's %BW, minBW and maxBW concepts are more flexible, so I encourage you to think of your problem in these terms and abandon the concept of 'priority'.

QoS not working

Q. QoS is not working. What could be wrong?

A. As you can tell by the length of this article, QoS is a complicated subject and there are many things that could be wrong. The basics you should check are, first, whether the connections are being classified correctly by your rules, and then whether your router's CPU is being overloaded. Check these with ACC off and your link speeds set to 50% of what you think your actual speeds are. If things are working there, at least your rules and classes are correct. Then move on to higher speeds and using the ACC.

Q. Why does ACC keep lowering my link limit? I have to reset it to get it back.

A. ACC lowers your link limit until the filtered ping time falls under the target ping time. So if your link limit is being lowered, either your ISP's performance has dropped (the usual case) or your target ping time is too aggressive.

Common Myths

Statement: QoS cannot control the download speed because the router cannot stop data from coming in the WAN link.

Rebuttal: False, my friend. It can and does. By delaying and, when necessary, dropping incoming packets, QoS causes the senders on the internet to slow down (this is how TCP congestion control works), which controls the rate at which data arrives over the WAN link.

We never want to drop packets

Statement: We never want to drop packets once they have already arrived from the WAN. That would be a waste of perfectly good bandwidth.

Rebuttal: Dropping packets is fundamental to how the internet works and is the only way we have to signal the sender of the packets to slow down. Packets are going to be dropped on any connection; the only question is which router along the way will be the one to drop them. The amount of bandwidth lost by dropping packets is a few percent at most.

Statement: I have a really fast xxxMbps WAN link. I do not need QoS.

Rebuttal: The speed of your WAN link has nothing to do with your need for QoS. Your WAN link can still saturate, because even a single PC can download at 100Mbps. Your need for QoS is dictated only by the things I mentioned at the top of this article, that is, whether you want to enforce fairness on your LAN.

My ISP uses technology X so I do not need QoS

Statement: My ISP uses technology (fill in the blank here). My WAN speed does not vary so I do not need the active congestion controller.

Rebuttal: Your need for the ACC has nothing to do with what type of WAN connection you have. Trust me when I say that all WAN connections are subject to throttling by an ISP. Just put yourself in their shoes for a moment. They have 10,000 customers, each with a 20Mbps cable modem, on one side of their system, and on the other side they have a 1Gbps link to the internet. If you do the math (10000 * 20Mbps / 1Gbps = 200), in this example they are oversubscribed by a factor of 200! Clearly they would have to throttle your connection if everyone on their system started downloading Ubuntu.

Whenever I turn on the ACC it really slows my download speeds

Statement: Whenever I turn on the ACC it really slows my download speeds.

Rebuttal: The ACC manages the download link limit. Do not confuse the link limit with your actual link load. This link limit has no effect on you unless your load actually reaches the limit. When these two are the same it means your WAN link is getting congested and simply cannot go any faster. The ACC also does not become active unless there is enough load to warrant it. You should also be aware that the ACC reacts slowly to changes in your ISP's performance, so sudden bursts of download data (like some DSL speed test programs use) will not move the link limit much.

'ACK' packets are special I want them treated with higher priority

Statement: It’s necessary to handle ‘ACK’ packets with special ‘priority’ so my uplink does not affect my downlink when I am using HTTP.

Rebuttal: It is not necessary to handle ACK packets differently than other packets. As with all packets, you need to think about how to allocate WAN resources for them. ACK packets are typically around 54 bytes long and the maximum-size packets around 1500 bytes. This ratio is 54/1500 = 3.6%. Any class which includes TCP traffic in the download needs a matching upload class with an allocation greater than 3.6% of the download bandwidth in bps. Many WAN links are asymmetrical, in that the download is much faster than the upload, so this affects the calculation.

Example: On a 10Mbps down / 1Mbps up link we want HTTP traffic to consume 50% of the WAN downlink when saturated. Let’s say our HTTP traffic is routed into the Normal class on both upload and download. The %BW in the download Normal class is simply 50%. On the uplink we must account for the fact that the link is asymmetrical, so we allocate 3.6% * 10Mbps/1Mbps = 36% to the Normal class in the uplink. With this setup, if either the uplink or the downlink is saturated, the HTTP traffic will be allocated not less than 5Mbps in the downlink.
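
The same arithmetic written out as a quick Python check (the numbers are those of the example above):

    ack_bytes, data_bytes = 54, 1500
    ack_ratio = ack_bytes / data_bytes            # ~0.036, i.e. 3.6%

    down_mbps, up_mbps = 10.0, 1.0                # the asymmetric 10/1 link
    # Uplink share needed to ACK even a fully loaded downlink:
    print(round(ack_ratio * down_mbps / up_mbps * 100), "% of the uplink")   # 36
    # The HTTP class itself gets 50% of the downlink when saturated:
    print(0.5 * down_mbps, "Mbps")                                           # 5.0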

Whenever I use QoS my ping times increase

Statement: I don't use QoS because it makes my ping times increase and I lose games online. I need really good ping times to be competitive.

Rebuttal: Ah, QoS is not responsible for the beat-down you just received. If your WAN link is not saturated then packets will be delayed by not more than 1 ms on their way through your router. If your link gets saturated and your rules are not written correctly, you can get delays approaching 150ms. Check your rules to make sure your gaming traffic is going to the class you intend and that the class is not getting overloaded.
