I was taking a look at ACC, and it completely falls flat on its face on my network. For one, I'm not entirely sure what the automatic IP targeting is doing. Is it supposed to target my cable modem (public IP) or my local CMTS? Logically, I would think the CMTS, but ACC says "WAN gateway", which would be the modem. Currently, the auto targeting pings my CMTS, and when turned on it'll happily throttle my 50 Mbps connection down to ~16,000 kbps when only partially active, and all the way down to the 15% minimum with the min-RTT class active.
How would this technology be effective on a cable network, especially with the overbuffering that most cable networks do? That's basically how Comcast can deliver 110%+ throughput 24/7. It doesn't matter how much I throttle my connection; I'm still at the mercy of my ISP's bufferbloat. I've had better experiences with CoDel on DD-WRT, but unfortunately the rest of the setup was buggy (my Wi-Fi wouldn't work, or it wouldn't play nice with some of my clients, or the bridge wouldn't work, etc.). It seems like every time the team over there fixes one thing, another breaks.
Gargoyle works great until I get into a bufferbloat situation, where the current algorithm tries to compensate by ratcheting everything down aggressively instead of slowly dropping my low-priority TCP packets until maximum utilization (and, by extension, low buffer delay) is achieved. With ACC disabled, this shows up as incoming bandwidth spiking up to 85,000 kbps and then rapidly falling to 35,000 kbps as the router attempts to compensate. A vicious cycle ensues. FTP and BitTorrent have no problem with that, obviously, but everything else breaks and eventually gives up. Luckily, my upstream connection is stable as hell; no issues there. But it's my local CMTS that's buffering downloads, not the modem. In other words, it doesn't matter what I set the rate limiter to: the issue is present at any speed and is completely random. It can happen at 4 AM or at 10 PM. I should mention that this issue only began once they "doubled everyone's speeds for free" and started using "PowerBoost".
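To be concrete about the behavior I'd expect, here's a rough Python sketch (all numbers and the `adjust_low_priority` helper are made up for illustration, not Gargoyle's actual algorithm): while measured delay is above a target, shed a little low-priority bandwidth each interval and keep everything else intact, then slowly give it back once delay recovers, instead of slashing the global limit.

```python
# Hypothetical sketch of a gentler congestion response: shed low-priority
# bandwidth gradually while measured buffer delay is high, rather than
# ratcheting the global rate limit down. All constants are made up.

LINK_KBPS = 56_000        # configured downstream limit
DELAY_TARGET_MS = 30      # acceptable buffer delay
STEP_KBPS = 2_000         # low-priority bandwidth to shed/restore per tick

def adjust_low_priority(low_prio_kbps, measured_delay_ms):
    """Return the low-priority allocation for the next interval."""
    if measured_delay_ms > DELAY_TARGET_MS:
        # Delay is building: drop some low-priority packets, spare the rest.
        return max(0, low_prio_kbps - STEP_KBPS)
    # Delay is fine: slowly hand bandwidth back to bulk traffic.
    return min(LINK_KBPS, low_prio_kbps + STEP_KBPS)

# Simulated run: delay spikes, then recovers.
alloc = 20_000
for delay_ms in [80, 60, 45, 25, 20]:
    alloc = adjust_low_priority(alloc, delay_ms)
print(alloc)  # 18000
```

The point is that interactive traffic never sees the big sawtooth; only the bulk class pays during a delay spike.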
BTW, I have a WNDR3700v1 with both downstream and upstream QoS active. CPU utilization under load rarely crosses 2%. My 50/10 Mbps connection is limited to 56,000 kbps DL and 11,000 kbps UL. Before you tell me I set it up wrong: multiple peak-hour ShaperProbe runs (plugged directly into my modem via GigE) over several days consistently showed ~11,400 kbps UL and ~58,000 kbps DL. My UL was nearly the same every time, while the DL bounced between ~56,000 and ~61,000 kbps. I just set the limits a little below the lowest ShaperProbe reading I saw over that week-long period. That is not a boosted rate; my UL doesn't boost, but my DL boosts to ~130,000 kbps. If I ran ShaperProbe while the bufferbloat issue was happening, it would give up due to downstream packet loss. Finally, my Motorola 6120 is working great and all signal levels are peachy.
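For what it's worth, my limit-picking method is nothing fancier than "lowest steady-state probe minus a small margin". A quick Python sketch (the `pick_limit` helper, the 3% margin, and the exact sample values are my own illustrative choices, just in the ballpark of my measurements):

```python
# Picking a shaper limit from repeated ShaperProbe readings: take the
# lowest steady-state sample and back off a small margin. Sample values
# are illustrative, roughly matching my week of peak-hour probes (kbps).

def pick_limit(samples_kbps, margin=0.03):
    """Set the rate limit a few percent below the worst observed rate."""
    return int(min(samples_kbps) * (1 - margin))

dl_samples = [58_000, 56_500, 61_000, 56_000, 59_200]
ul_samples = [11_400, 11_450, 11_380, 11_420]

print(pick_limit(dl_samples))  # a bit under the 56,000 kbps floor
print(pick_limit(ul_samples))  # a bit under the ~11,400 kbps floor
```

Staying under the worst-case sustained rate is what's supposed to keep the queue on my side of the link; the problem is that with the CMTS buffering downloads, that guarantee evaporates.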
Also, I need some help understanding IPv6. Comcast is now fully IPv6-only in my area, and since Gargoyle doesn't support IPv6, doesn't that mean all of my traffic has to be encapsulated? Won't that affect speeds? Would it have any effect when pinging the CMTS?
With all that said, I've tried DD-WRT, the stock firmware, and Gargoyle. Gargoyle is by far the best. Not perfect, but pretty damn good. It just seems like the implementation was geared towards an ADSL connection. It is either Comcast or dial-up for me, so I'm stuck with finding something that works for my current situation. So where should I turn?
I am fairly technically inclined, but I'm also a CS student, so I like to avoid extra work unless I'm getting a kick out of it. Networking is interesting, but configuring a router via a CLI isn't my idea of exciting.
