
Re: Tracking website usage similar to data quotas

Posted: Wed Feb 19, 2020 12:59 am
by Lantis
Did you update your package repository?

Code: Select all

gpkg update

Re: Tracking website usage similar to data quotas

Posted: Wed Feb 19, 2020 3:08 am
by RomanHK
Lantis wrote: Did you update your package repository?

Code: Select all

gpkg update
This may not help. I had the same problem and fixed it by adding a line to /etc/opkg.conf: viewtopic.php?f=14&t=12138&p=54478#p54478

Re: Tracking website usage similar to data quotas

Posted: Wed Feb 19, 2020 3:35 am
by Lantis
Right, so in that case the user needs to compile a custom firmware, because the repository you suggest is not guaranteed (and often is not) compatible with Gargoyle: it is kernel-dependent.

Gargoyle ships its own kernel-dependent repository, but in this case it does not include the required library.

You can try adding it, but it is not included by default because it does not work reliably.

Re: Tracking website usage similar to data quotas

Posted: Wed Feb 19, 2020 9:15 am
by agrohe21
RomanHK wrote: I had the same problem and fixed it by adding a line to /etc/opkg.conf: viewtopic.php?f=14&t=12138&p=54478#p54478
I do not see what line you added in that post. Given this may be custom stuff, I may just go with a separate VM running squid instead of on the Gargoyle box.

Re: Tracking website usage similar to data quotas

Posted: Wed Feb 19, 2020 5:35 pm
by RomanHK
This should work for all architectures:

Code: Select all

cat /etc/opkg/distfeeds.conf >> /etc/opkg.conf
and now we can install the squid package:

Code: Select all

gpkg update
gpkg install squid
Note: as @Lantis says, this procedure may stop working after a kernel change. My last clean install was three months ago, and it worked at that point.
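
For reference, that cat command just appends the standard OpenWrt feed list to opkg's config. The exact contents of /etc/opkg/distfeeds.conf depend on your release and architecture; on a 19.07 MIPS build it looks something like this (illustrative only, your release number and URLs will differ):

Code: Select all

src/gz openwrt_core http://downloads.openwrt.org/releases/19.07.1/targets/ath79/generic/packages
src/gz openwrt_base http://downloads.openwrt.org/releases/19.07.1/packages/mips_24kc/base
src/gz openwrt_packages http://downloads.openwrt.org/releases/19.07.1/packages/mips_24kc/packages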

Re: Tracking website usage similar to data quotas

Posted: Thu Feb 20, 2020 11:22 am
by agrohe21
Lantis wrote: You could possibly parse the output of

Code: Select all

cat /proc/webmon_recent_domains
The output contains timestamps like this: 1582043139. What format is that, and how might I convert it to a regular date/time?

Re: Tracking website usage similar to data quotas

Posted: Thu Feb 20, 2020 4:38 pm
by Lantis
I believe that will be seconds since the Unix epoch.
https://www.epochconverter.com/

Just a count of seconds since 1 Jan 1970 (I think).
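
If you want to check one on the router itself, date can usually do the conversion directly (GNU date supports -d @<seconds>, and recent BusyBox builds accept the same syntax; if yours doesn't, the converter above works):

Code: Select all

# convert a webmon timestamp to a human-readable date
date -d @1582043139
# Tue Feb 18 16:25:39 UTC 2020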

Re: Tracking website usage similar to data quotas

Posted: Thu Feb 20, 2020 9:57 pm
by agrohe21
Thanks, I found that earlier. My issue turned out to be in the script I was processing it with.

In JavaScript, I first tried new Date(epoch), but the Date constructor expects milliseconds, not seconds. You have to create a new Date(0) and then call .setUTCSeconds(epoch). That is working now, thanks.
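
For anyone else hitting this, a minimal sketch of the conversion (epoch here is a raw seconds value as webmon reports it):

Code: Select all

// epoch is seconds since 1970-01-01 UTC, e.g. from /proc/webmon_recent_domains
var epoch = 1582043139;

// new Date(epoch) would be wrong: the constructor takes milliseconds
var d = new Date(0);           // start at the epoch
d.setUTCSeconds(epoch);        // then add the seconds
console.log(d.toISOString());  // 2020-02-18T16:25:39.000Z

// equivalent one-liner: new Date(epoch * 1000)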

Re: Tracking website usage similar to data quotas

Posted: Fri Feb 28, 2020 2:05 pm
by agrohe21
RomanHK wrote:
Wed Feb 19, 2020 5:35 pm
This should work for all architectures:

Code: Select all

cat /etc/opkg/distfeeds.conf >> /etc/opkg.conf
and now we can install the squid package:

Code: Select all

gpkg update
gpkg install squid
Note: as @Lantis says, this procedure may stop working after a kernel change. My last clean install was three months ago, and it worked at that point.
I have installed Squid, but it did not log anything by default and it was blocking my Netflix streams. I would like Squid to block nothing at all and just log everything that happens, so I can inspect the data in the logs.

@RomanHK, do you have your Gargoyle squid.conf file to share?

Re: Tracking website usage similar to data quotas

Posted: Fri Feb 28, 2020 2:38 pm
by RomanHK
Unfortunately, I can't help you much with the configuration. I only use Squid as a gateway for Adblock, with a slightly modified configuration and no logging.

You would need to ask @Lantis how best to enable logging. The manual here may help you get started: http://www.squid-cache.org/Doc/config/

P.S. Congratulations on getting the Squid proxy server running. Here is the customized configuration that I use:

Code: Select all

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 0.0.0.1-0.255.255.255	# RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8		# RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10		# RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 	# RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12		# RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16		# RFC 1918 local private network (LAN)
acl localnet src fc00::/7       	# RFC 4193 local private network range
acl localnet src fe80::/10      	# RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern .		0	20%	4320

# Squid user
cache_effective_user squid

#
# Logs, best to use only for debugging as they can become very large
#

access_log none  # daemon:/tmp/squid_access.log
cache_log /dev/null  # /tmp/squid_cache.log

dns_nameservers 193.17.47.1 185.43.135.1
cache_mem 16 MB
netdb_filename none
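
@agrohe21: if you want the logging you asked about, I believe swapping the two log lines above for their commented-out values is all it takes; something like this (untested on my side, and remember /tmp is RAM-backed on the router, so the access log can eat memory quickly):

Code: Select all

# enable request logging (the values commented out above)
access_log daemon:/tmp/squid_access.log
cache_log /tmp/squid_cache.log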