Understanding hfsc?

Report issues relating to bandwidth monitoring, bandwidth quotas or QoS in this forum.


shm0
Posts: 67
Joined: Sat Sep 19, 2015 10:06 am

Understanding hfsc?

Post by shm0 »

Hi :D

I hope someone can help me understand hfsc a bit better?
Those service curves...

So we have m1 d m2?
m1 describes the burst rate?
d describes the maximum time the burst rate can be used?
m2 the guaranteed rate?

Is there really a difference between the linkshare and realtime service curves?
For normal linkshare, the burst is only used at the start of a connection?
For rt, the burst can be used even during a connection, whenever the class is allowed to use it again (as calculated by hfsc)?
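For reference, this is how such a two-part curve is written in a tc class on Linux; as far as I can tell the three values plug straight in (device, class ids and rates here are just placeholders):

```shell
# Placeholder device/ids/rates -- only the rt m1/d/m2 part matters here:
tc class add dev eth0 parent 1:1 classid 1:10 hfsc \
    rt m1 3000kbit d 8ms m2 512kbit
```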


If this is the case, I thought about changing my gaming rt class as follows:
Available upload speed: 19 Mbit/s.
Assumed packet size: 1500 byte (I know, quite large for a gaming class).
Target delay per packet: 1 ms.
The game needs ~256 kbit/s.

So I hope I get the math right :roll:
(1500 byte * 8) * (1000 ms / 1 ms) = 12,000,000 bit/s

So to transfer one 1500-byte packet in 1 ms I would need 12 Mbit/s, right?
Which gives me:
rt m1 12000kbit d 1ms m2 256kbit

Does this make any sense?
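The arithmetic can be sanity-checked with a tiny helper (the function name is made up):

```shell
# Bit rate needed to push one packet of $1 bytes out within $2 milliseconds.
burst_bps() {
    awk "BEGIN {print $1*8*1000/$2}"
}

burst_bps 1500 1    # one 1500-byte packet in 1 ms -> 12000000 bit/s = 12 Mbit/s
burst_bps 500 1     # a 500-byte packet in 1 ms -> 4000000 bit/s
```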

//Edit
I thought about it a little and came up with this:
This goes into the qos_gargoyle init script.
Upload bandwidth configured in the GUI: 19000 kbit/s.

Code:

#How many real time classes
rt_classes=2

#Average number of clients using rt classes
rt_clients=2

#Maximum rt bandwidth (in percent)
rt_percent=80

#Set average packet size (in bytes)
avg_pkt_size=500

#Get max rt bandwidth in kbit/s
rt_bandwidth=$(awk "BEGIN {print $total_upload_bandwidth*$rt_percent/100}")

#Get bandwidth per class in kbit/s
rt_bandwidth_class=$(awk "BEGIN {print $rt_bandwidth/$rt_classes}")

#Get delay in ms: time to send one average packet per client at the class rate
#(bits divided by kbit/s is numerically ms)
delay=$(awk "BEGIN {print $avg_pkt_size*8*$rt_clients/$rt_bandwidth_class}")
delay=$(printf '%.*f\n' 1 $delay)

if [ "$min_bandwidth" -gt 0 ] ; then
	ll_str=" rt m1 ${rt_bandwidth_class}kbit d ${delay}ms m2 ${min_bandwidth}kbit"
fi

This is very basic code.
For 2 rt classes with on average 2 clients it gives the following results:
Configured avg packet size: 500 byte
Max burst bandwidth per client: 3800 kbit/s
Burst time (delay): 1.1 ms
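The quoted numbers can be reproduced directly (assuming 19000 kbit/s upload, 80% rt share, 2 rt classes, 2 clients and 500-byte packets, as above):

```shell
# Re-derive burst bandwidth per client and burst time from the settings above.
awk 'BEGIN {
    rate_class = 19000 * 80/100 / 2     # 7600 kbit/s per rt class
    per_client = rate_class / 2         # split between 2 clients
    delay_ms   = 500*8 / per_client     # bits / (kbit/s) is numerically ms
    printf "%d kbit/s per client, %.1f ms\n", per_client, delay_ms
}'
# prints: 3800 kbit/s per client, 1.1 ms
```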

//Edit
After some sleep I thought a bit further.
Why not do it the other way around: target a specific delay and calculate the burst speed from there?

Also, the hfsc paper states:
Consider the two-level class hierarchy shown in Figure 10. The value under each class represents the bandwidth guaranteed to that class. In our experiment, the audio session sends 160 byte packets every 20 ms, while the video session sends 8 KB packets every 33 ms. All the other sessions send 4 KB packets and the FTP session is continuously backlogged.

To demonstrate H-FSC’s ability to ensure low delay for real-time connections, we target for a 5 ms delay for the audio session, and a 10 ms delay for the video session. To achieve these objectives, we assign to the audio session the service curve Sa = (u-max=160 bytes, d-max=5 ms, r=64 Kbps), and to the video session the service curve Sv=(u-max=8 KB, d-max=10 ms, r=2 Mbps). Also, in order to pass the admission control test, we assign to the FTP session the service curve SFTP=(u-max=4 KB, d-max=16.25 ms, r=5 Mbps). The service curves of all the other sessions and classes are linear.
So to achieve the same goal as above:
rt umax 500b dmax 1ms rate 256kbit
or?
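If my reading of the (umax, dmax, rate) form is right, the first segment's slope is fixed by having to move umax bytes within dmax, so it maps back to the m1/d/m2 form roughly like this (my own conversion, not taken from any docs):

```shell
# umax bytes within dmax ms pins the first slope; bits per ms equals kbit/s.
awk 'BEGIN {
    umax_bytes = 500; dmax_ms = 1; rate_kbit = 256
    m1_kbit = umax_bytes*8 / dmax_ms
    printf "rt m1 %dkbit d %dms m2 %dkbit\n", m1_kbit, dmax_ms, rate_kbit
}'
# prints: rt m1 4000kbit d 1ms m2 256kbit
```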

ispyisail
Moderator
Posts: 5194
Joined: Mon Apr 06, 2009 3:15 am
Location: New Zealand

Re: Understanding hfsc?

Post by ispyisail »

so you found the solution?

shm0
Posts: 67
Joined: Sat Sep 19, 2015 10:06 am

Re: Understanding hfsc?

Post by shm0 »

The problem is there is no general solution, I guess,
because each application uses a different packet size and sending interval.

Creating a two-part service curve for VoIP is quite easy because it uses a fixed packet size and sending interval.

But for games this is not applicable, because they vary their packet size and rate according to how much is going on ingame.

So my simple idea was to just assign 80% of the bandwidth and take the worst-case packet size.
It lowered my ping by 2-3 ms.
Not much, but a small improvement.

But what makes me wonder is...
The MinRTT detection that is used by the acc comes in the form of a two-part service curve:

ls m1 ${m2}Mbit d 50ms m2 ${m2}Mbit

Doesn't this create a 50 ms delay if the class is using its full bandwidth?
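Plugging numbers into the two-part curve formula at least shows the geometry: S(t) = m1*t up to d, then m1*d + m2*(t-d) after. When m1 equals m2 the two slopes are identical, so the d value drops out of the curve entirely (my reading of the math, not verified against the scheduler):

```shell
# Evaluate a two-part service curve at time t: rates in bit/ms, t in ms.
sc() {
    awk "BEGIN {m1=$1; d=$2; m2=$3; t=$4;
        if (t <= d) print m1*t; else print m1*d + m2*(t-d)}"
}

sc 1000 50 1000 100   # m1 == m2 -> 100000, same as a plain 1000*t line
sc 2000 50 1000 100   # m1 > m2  -> 150000, here the knee at d=50 matters
```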
