|
http://www.visolve.com/squid/squid24s1/delaypool.php
DELAYPOOL PARAMETERS
Usage
| delay_pools number
| Description
This specifies the number of delay pools to be used. For example, if you have one class 2 delay pool and one class 3 delay pool, you have a total of 2 delay pools. Delay pools allow you to limit traffic for clients or client groups, with various features. Objects retrieved from the cache will not be delayed; only objects fetched from origin servers will be delayed.
Example
delay_pools 2 # 2 Delay pools
Caution
To enable this option, you must use --enable-delay-pools with the configure script.
|
Usage
| delay_class number (delay pool number) number (delay class)
| Description
This defines the class of each delay pool. There must be exactly one delay_class line for each delay pool. For example, to define two delay pools, one of class 2 and one of class 3, the settings would be as given in the example below. For details on the delay pool classes, see the Glossary.
Example
delay_pools 2 # 2 delay pools
delay_class 1 2 # pool 1 is a class 2 pool
delay_class 2 3 # pool 2 is a class 3 pool
Caution
To enable this option, you must use --enable-delay-pools with the configure script.
|
Usage
| delay_access pool-number allow|deny acl-name
| Description
This is used to determine which delay pool a request falls into. The first matching delay pool is always used; i.e., if a request falls into delay pool number one, no further delay pools are checked. Otherwise the remaining pools are checked in order of their pool number until all have been checked. For example, if you want pool_1_acl in delay pool 1 and pool_2_acl in delay pool 2, see the example below.
Example
To specify which pool a client falls into, create ACLs that specify the IP ranges for each pool, and use the following:
delay_access 1 allow pool_1_acl
delay_access 1 deny all
delay_access 2 allow pool_2_acl
delay_access 2 deny all
Caution
To enable this option, you must use --enable-delay-pools with the configure script.
|
Tag Name
| delay_parameters
|
Usage
| delay_parameters pool aggregate (for delay_class 1 networks)
delay_parameters pool aggregate individual (for delay_class 2 networks)
delay_parameters pool aggregate network individual (for delay_class 3 networks)
| Description
This defines the parameters for a delay pool. Each delay pool has a number of "buckets" associated with it, as explained in the description of delay_class. The syntax for class 1, 2 and 3 delay pools is given above under Usage. For a glossary of terms related to delay pools, see the Glossary.
Example 1:
acl tech src 192.168.0.1-192.168.0.20/32
acl no_hotmail url_regex -i hotmail
acl all src 0.0.0.0/0.0.0.0
delay_pools 1 # 1 delay pool
delay_class 1 1 # pool 1 is a class 1 pool
delay_parameters 1 100/100
delay_access 1 allow no_hotmail !tech
In the above example, requests for URLs matching "hotmail" are limited to the rate given by delay_parameters (100 bytes per second), except when they come from hosts in the tech ACL, which are allowed normal bandwidth. You can monitor bandwidth usage through cachemgr.cgi.
Example 2:
acl all src 0.0.0.0/0.0.0.0 # might already be defined
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 64000/64000 # 512 kbits == 64 kbytes per second
The above example limits Squid's total bandwidth to 512 kbit/s (64 KB/s). For details on defining ACLs, see the ACL documentation.
Caution
To enable this option, you must use --enable-delay-pools with the configure script.
|
Tag Name
| delay_initial_bucket_level (percent, 0-100)
|
Usage
| delay_initial_bucket_level percent (0-100)
| Description
The initial bucket percentage is used to determine how much is put in each bucket when Squid starts, is reconfigured, or first notices a host accessing it (in class 2 and class 3 pools, individual hosts and networks only have buckets associated with them once they have been "seen" by Squid).
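Example
A minimal sketch; the value 75 is illustrative:
# Start each bucket at 75% of its maximum when Squid starts or is reconfigured.
delay_initial_bucket_level 75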
Default
| delay_initial_bucket_level 50 (percent)
| Caution
This option is only available if Squid is rebuilt with the --enable-delay-pools option.
|
Tag Name
| incoming_icp_average
incoming_http_average
incoming_dns_average
min_icp_poll_cnt
min_dns_poll_cnt
min_http_poll_cnt
|
Description
This describes the algorithms used for the above tags.
INCOMING sockets are the ICP and HTTP ports. We need to check these fairly regularly, but how often? When the load increases, we want to check the incoming sockets more often. If we have a lot of incoming ICP, then we need to check these sockets more than if we just have HTTP. The variables 'incoming_icp_interval' and 'incoming_http_interval' determine how many normal I/O events to process before checking incoming sockets again. Note that we store the incoming_interval multiplied by a factor of (2^INCOMING_FACTOR) to have some pseudo-floating point precision.
The variables 'icp_io_events' and 'http_io_events' count how many normal I/O events have been processed since the last check on the incoming sockets. When io_events > incoming_interval, it is time to check the incoming sockets.
Every time we check incoming sockets, we count how many new messages or connections were processed. This is used to adjust the incoming_interval for the next iteration. The new incoming_interval is calculated as the current incoming_interval plus what we would like to see as an average number of events minus the number of events just processed.
incoming_interval = incoming_interval + target_average - number_of_events_processed.
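For instance, with a target average of 4 (the incoming_http_average default) and a current incoming_interval of 32, a check that processes 10 events gives a new incoming_interval of 32 + 4 - 10 = 26, so the incoming sockets are checked more frequently as load rises.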
There are separate incoming_interval counters for both HTTP and ICP events. You can see the current values of the incoming_interval, as well as a histogram of 'incoming_events' by asking the cache manager for 'comm_incoming', e.g.:
% ./client mgr:comm_incoming
Default
| incoming_icp_average 6
incoming_http_average 4
incoming_dns_average 4
min_icp_poll_cnt 8
min_dns_poll_cnt 8
min_http_poll_cnt 8
| Caution
-We have MAX_INCOMING_INTEGER as a magic upper limit on incoming_interval for both types of sockets. At the largest value the cache will effectively be idling.
-The higher the INCOMING_FACTOR, the slower the algorithm will respond to load spikes/increases/decreases in demand. A value between 3 and 8 is recommended.
|
Tag Name
| max_open_disk_fds
|
Usage
| max_open_disk_fds number
| Description
This specifies the maximum number of file descriptors Squid may use for open disk files. To avoid having the disk become the I/O bottleneck, Squid can optionally bypass the on-disk cache when more than this number of disk file descriptors are open.
A value of 0 indicates no limit.
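Example
An illustrative threshold; the right value depends on your system's file descriptor limits:
# Bypass the on-disk cache once more than 64 disk file descriptors are open.
max_open_disk_fds 64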
Default
| max_open_disk_fds 0
|
|
Usage
| offline_mode on|off
| Description
Enable this option and Squid will never try to validate cached objects. offline_mode gives access to more cached information than the proposed offline-browsing feature would allow (stale cached versions are served where the origin server should have been contacted).
|
Usage
| uri_whitespace options
| Description
This tag decides the action to be taken on requests that have whitespace characters in the URI. Available options:
strip:
The whitespace characters are stripped out of the URL. This is the behavior recommended by RFC2616.
deny:
The request is denied. The user receives an "Invalid Request" message.
allow:
The request is allowed and the URI is not changed. The whitespace characters remain in the URI. Note the whitespace is passed to redirector processes if they are in use.
encode:
The request is allowed and the whitespace characters are encoded according to RFC 1738. This could be considered a violation of the HTTP/1.1 RFC because proxies are not allowed to rewrite URIs.
chop:
The request is allowed and the URI is chopped at the first whitespace. This might also be considered a violation.
Default
| uri_whitespace strip
| Example
uri_whitespace chop
|
Usage
| broken_posts allow|deny acl name
| Description
A list of ACL elements which, if matched, causes Squid to send an extra CRLF pair after the body of a PUT/POST request. Some HTTP servers have broken implementations of PUT/POST, and rely on an extra CRLF pair sent by some WWW clients.
Example
acl buggy_server url_regex ^http://....
broken_posts allow buggy_server
|
Usage
| mcast_miss_addr multicast-address
| Description
If you enable this option, every "cache miss" URL will be sent out on the specified multicast address. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_STREAM option.
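Example
An illustrative multicast group address; pick an address appropriate for your network:
mcast_miss_addr 239.255.0.1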
Default
| mcast_miss_addr 255.255.255.255
| Caution
This option should be enabled only with a careful understanding of multicast routing. See the multicast documentation.
|
Usage
| mcast_miss_ttl number
| Description
This is the time-to-live value for packets multicasted when multicasting off cache miss URLs is enabled. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_TTL option.
Default
| mcast_miss_ttl 16
|
|
Usage
| mcast_miss_port port no
| Description
This is the port number to be used in conjunction with 'mcast_miss_addr'. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_TTL option.
Default
| mcast_miss_port 3135
| Caution
This tag is used only when you enable mcast_miss_addr
|
Tag Name
| mcast_miss_encode_key
|
Usage
| mcast_miss_encode_key key
| Description
The URLs that are sent in the multicast miss stream are encrypted. This is the encryption key. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_STREAM option.
Default
| mcast_miss_encode_key XXXXXXXXXXXXXXX
|
|
Tag Name
| nonhierarchical_direct
|
Usage
| nonhierarchical_direct on|off
| Description
By default, Squid will send any non-hierarchical requests (matching hierarchy_stoplist or not cacheable request type) direct to origin servers. If you set this to off, then Squid will prefer to send these requests to parents. Note that in most configurations, by turning this off you will only add latency to this request without any improvement in global hit ratio. If you are inside a firewall then see never_direct instead of this directive.
Default
| nonhierarchical_direct on
|
|
Usage
| prefer_direct on|off
| Description
Normally Squid tries to use parents for most requests. If for some reason you would like Squid to first try going direct and only use a parent if going direct fails, then set this to on.
By combining nonhierarchical_direct off and prefer_direct on you can set up Squid to use a parent as a backup path if going direct fails.
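Example
A sketch of the backup-path setup described above: try going direct first and use a parent only when going direct fails:
prefer_direct on
nonhierarchical_direct off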
Default
| prefer_direct off
|
|
Tag Name
| strip_query_terms
|
Usage
| strip_query_terms on|off
| Description
By default, Squid strips query terms from requested URLs before logging; this protects your users' privacy. The query terms are still forwarded to origin servers verbatim. If you want full URLs, including query parameters, to appear in the logs, set strip_query_terms to off.
Default
| strip_query_terms on
|
|
Usage
| coredump_dir directory
| Description
By default Squid leaves core files in the first cache_dir directory. If you set 'coredump_dir' to a directory that exists, Squid will chdir() to that directory at startup and core dump files will be left there.
Example
coredump_dir /usr/local
|
Tag Name
| redirector_bypass
|
Usage
| redirector_bypass on|off
| Description
When this is 'on', a request will not go through the redirector if all redirectors are busy. If this is 'off' and the redirector queue grows too large, Squid will exit with a FATAL error and ask you to increase the number of redirectors. You should only enable this if the redirectors are not critical to your caching system. If you use redirectors for access control and you enable this option, then users may have access to pages that they should not be allowed to request.
Default
| redirector_bypass off
|
|
Tag Name
| digest_generation
|
Usage
| digest_generation on|off
| Description
This controls whether the server will generate a Cache Digest of its contents. By default, Cache Digest generation is enabled if Squid is compiled with USE_CACHE_DIGESTS defined. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_generation on
|
|
Tag Name
| ignore_unknown_nameservers
|
Usage
| ignore_unknown_nameservers on|off
| Description
By default Squid checks that DNS responses are received from the same IP addresses that they are sent to. If they don't match, Squid ignores the response and writes a warning message to cache.log. You can allow responses from unknown nameservers by setting this option to 'off'.
Default
| ignore_unknown_nameservers on
|
|
Tag Name
| digest_bits_per_entry
|
Usage
| digest_bits_per_entry number
| Description
This is the number of bits of the server's Cache Digest that will be associated with the Digest entry for a given HTTP method and URL (public key) combination. The default is 5. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_bits_per_entry 5
|
|
Tag Name
| digest_rebuild_period
|
Usage
| digest_rebuild_period time-units
| Description
This is the number of seconds between Cache Digest rebuilds. By default the server's Digest is rebuilt every hour. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_rebuild_period 1 hour
|
|
Tag Name
| digest_rewrite_period
|
Usage
| digest_rewrite_period time-units
| Description
This is the number of seconds between Cache Digest writes to disk. By default the server's Digest is written to disk every hour. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_rewrite_period 1 hour
|
|
Tag Name
| digest_swapout_chunk_size
|
Usage
| digest_swapout_chunk_size bytes
| Description
This is the number of bytes of the Cache Digest to write to disk at a time. It defaults to 4096 bytes (4KB), the Squid default swap page. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_swapout_chunk_size 4096 bytes
|
|
Tag Name
| digest_rebuild_chunk_percentage
|
Usage
| digest_rebuild_chunk_percentage %(0 to 100)
| Description
This is the percentage of the Cache Digest to be scanned at a time. By default it is set to 10% of the Cache Digest. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
| digest_rebuild_chunk_percentage 10
|
|
Usage
| chroot directory
| Description
Squid by default does not fully drop root privileges, because root privileges may be required during a reconfigure. Use this directive to have Squid do a chroot() while initializing; this also causes Squid to fully drop root privileges after initializing. Squid only drops all root privileges when chroot_dir is used; without it, Squid runs as root with effective user nobody. This means, for example, that if you use an HTTP port below 1024 and try to reconfigure, you will get an error.
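Example
A minimal sketch, assuming the directive takes the chroot directory as its argument; the directory must exist and contain everything Squid needs at runtime:
chroot /var/squid/chroot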
|
Tag Name
| client_persistent_connections
|
Usage
| client_persistent_connections on|off
| Description
Persistent connection support for clients and servers. By default, Squid uses persistent connections (when allowed) with its clients and servers. You can use these options to disable persistent connections with clients and/or servers.
Related information :
When a browser talks to a web server directly, persistent connections are controlled with the KeepAlive directive in the Apache configuration file. The equivalent in Squid is controlled with the client_persistent_connections and server_persistent_connections directives.
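Example
A sketch showing both directives together; whether to disable either side depends on your clients and servers:
client_persistent_connections off
server_persistent_connections on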
Default
| client_persistent_connections on
|
|
Tag Name
| pipeline_prefetch
|
Usage
| pipeline_prefetch on|off
| Description
To boost the performance of pipelined requests to more closely match that of a non-proxied environment, Squid tries to fetch up to two requests in parallel from a pipeline.
Default
| pipeline_prefetch on
|
|
Tag Name
| extension_methods
|
Usage
| extension_methods request method
| Description
Squid only knows about standard HTTP request methods. Unknown methods are denied, unless you add them to this list. You can add up to 20 additional "extension" methods here.
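Example
A sketch allowing some WebDAV-style extension methods; the method names are illustrative:
extension_methods REPORT MERGE MKACTIVITY CHECKOUT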
|
Tag Name
| high_response_time_warning
|
Usage
| high_response_time_warning msec
| Description
If the one-minute median response time exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in milliseconds.
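Example
An illustrative threshold: warn when the one-minute median response time exceeds 2 seconds:
high_response_time_warning 2000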
Default
| high_response_time_warning 0
|
|
Tag Name
| high_page_fault_warning
|
Usage
| high_page_fault_warning number (page faults per second)
| Description
If the one-minute average page fault rate exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in page faults per second.
Default
| high_page_fault_warning 0
|
|
Tag Name
| high_memory_warning
|
Usage
| high_memory_warning number
| Description
If the memory usage (as determined by mallinfo) exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention.
Default
| high_memory_warning 0
|
|
Tag Name
| store_dir_select_algorithm
|
Usage
| store_dir_select_algorithm algorithm type
| Description
Squid currently supports two algorithms for selecting cache directories for new objects: least-load and round-robin. The default is least-load; set this to 'round-robin' as an alternative.
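Example
To select cache directories in strict rotation instead of by load:
store_dir_select_algorithm round-robin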
Default
| store_dir_select_algorithm least_load
|
|
Tag Name
| ie_refresh
|
Usage
| ie_refresh on|off
| Description
Microsoft Internet Explorer up until version 5.5 Service Pack 1 has an issue with transparent proxies in which it is impossible to force a refresh. Turning this on provides a partial fix to the problem, by causing all IMS-REFRESH requests from older IE versions to check the origin server for fresh content. This reduces the hit ratio by some amount (~10%), but allows users to actually get fresh content when they want it. Note that because Squid cannot tell if the user is using 5.5 or 5.5 SP1, the behavior of 5.5 is unchanged from old versions of Squid (i.e., a forced refresh is impossible). Newer versions of IE will, hopefully, continue to have the new behavior and will be handled based on that assumption. This option defaults to the old Squid behavior, which is better for hit ratios but worse for clients using IE, if they need to be able to force fresh content.
|
http://wiki.squid-cache.org/Features/DelayPools
Feature: Delay Pools
Goal: To provide a way to limit the bandwidth of certain requests based on any list of criteria.
Status: Completed
Version: 2.2+
Developer: David Luyer
Delay Pools
by David Luyer.
To enable the delay pools feature in Squid, configure with --enable-delay-pools before compilation.
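A minimal sketch of the build step (other configure options and installation paths omitted):
% ./configure --enable-delay-pools
% make && make install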
Terminology for this FAQ entry:
pool: a collection of bucket groups as appropriate to a given class
bucket group: a group of buckets within a pool, such as the per-host bucket group, the per-network bucket group or the aggregate bucket group (the aggregate bucket group is actually a single bucket)
bucket: an individual delay bucket represents a traffic allocation which is replenished at a given rate (up to a given limit) and causes traffic to be delayed when empty
class: the class of a delay pool determines how the delay is applied, i.e., whether the different client IPs are treated separately or as a group (or both)
class 1: a class 1 delay pool contains a single unified bucket which is used for all requests from hosts subject to the pool
class 2: a class 2 delay pool contains one unified bucket and 255 buckets, one for each host on an 8-bit network (IPv4 class C)
class 3: contains 255 buckets for the subnets in a 16-bit network, and individual buckets for every host on these networks (IPv4 class B)
class 4: as class 3, but in addition has per-authenticated-user buckets, one per user
class 5: custom class based on tag values returned by external_acl_type helpers in http_access; one bucket per used tag value
Delay pools allow you to limit traffic for clients or client groups, with various features:
- you can specify peer hosts which aren't affected by delay pools, i.e., local peering or other 'free' traffic (with the no-delay peer option).
- delay behavior is selected by ACLs (low and high priority traffic, staff vs students or student vs authenticated student or so on).
- each group of users has a number of buckets; a bucket has an amount coming into it each second and a maximum amount it can grow to; when it reaches zero, object reads are deferred until one of the object's clients has some traffic allowance.
- any number of pools can be configured with a given class and any set of limits within the pools can be disabled, for example you might only want to use the aggregate and per-host bucket groups of class 3, not the per-network one.
This allows options such as creating a number of class 1 delay pools and allowing a certain amount of bandwidth to given object types (by using URL regular expressions or similar), and many other uses I'm sure I haven't even thought of, beyond the original fair balancing of a relatively small traffic allocation across a large number of users.
There are some limitations of delay pools:
- delay pools are incompatible with slow aborts; quick abort should be set fairly low to prevent objects being retrieved at full speed once there are no clients requesting them (as the traffic allocation is based on the current clients, and when there are no clients attached to the object there is no way to determine the traffic allocation).
- delay pools only limit the actual data transferred and do not account for overheads such as TCP overheads, ICP, DNS, ICMP pings, etc.
- it is possible for one connection or a small number of connections to take all the bandwidth from a given bucket and the other connections to be starved completely, which can be a major problem if there are a number of large objects being transferred and the parameters are set in a way that a few large objects will cause all clients to be starved (potentially fixed by a currently experimental patch).
- in Squid 3.1 the class-based pools do not work yet with IPv6 addressed clients.
- In Squid releases older than 3.1, the delay pool bucket is limited to 32 bits and thus has a rather low cap (in MB) on both bucket content and refill rate. The bucket size has since been raised to 64-bit 'unlimited' values, but the refill rate remains low.
How can I limit Squid's total bandwidth to, say, 512 Kbps?
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 64000/64000 # 512 kbits == 64 kbytes per second
The 1 second buffer (max = restore = 64 kbytes/sec) is used because only a limit is requested, with no responsiveness to bursts. If you want it to be able to respond to a burst, increase the aggregate_max to a larger value, and traffic bursts will be handled. It is recommended that the maximum be at least twice the restore value; if there is only a single object being downloaded, sometimes the download rate will fall below the requested throughput because the bucket is not empty when it comes to be replenished.
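For example, keeping the single class 1 pool from above, the following sketch (values illustrative) sustains 64 KB/s but lets the bucket grow to 256 KB so that short bursts are served at full speed:
delay_parameters 1 64000/262144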
How to limit a single connection to 128 Kbps?
You cannot limit a single HTTP request's connection speed. You can limit individual hosts to some bandwidth rate. To limit a specific host, define an acl for that host and use the example above. To limit a group of hosts, you must use a delay pool of class 2 or 3. For example:
acl only128kusers src 192.168.1.0/24
delay_pools 1
delay_class 1 3
delay_access 1 allow only128kusers
delay_access 1 deny all
delay_parameters 1 64000/64000 -1/-1 16000/64000
For an explanation of these tags please see the configuration file.
The above gives a solution where a cache is given a total of 512kbits to operate in, and each IP address gets only 128kbits out of that pool.
How do you personally use delay pools?
We have six local cache peers, all with the options 'proxy-only no-delay' since they are fast machines connected via a fast ethernet and microwave (ATM) network.
For our local access we use a dstdomain ACL, and for delay pool exceptions we use a dst ACL as well since the delay pool ACL processing is done using "fast lookups", which means (among other things) it won't wait for a DNS lookup if it would need one.
Our proxy has two virtual interfaces, one which requires student authentication to connect from machines where a department is not paying for traffic, and one which uses delay pools. Also, users of the main Unix system are allowed to choose slow or fast traffic, but must pay for any traffic they do using the fast cache. Ident lookups are disabled for accesses through the slow cache since they aren't needed. Slow accesses are delayed using a class 3 delay pool to give fairness between departments as well as between users. We recognize users of Lynx on the main host are grouped together in one delay bucket but they are mostly viewing text pages anyway, so this isn't considered a serious problem. If it was we could take those hosts into a class 1 delay pool and give it a larger allocation.
I prefer using a slow restore rate and a large maximum rate to give preference to people who are looking at web pages, as their individual bucket fills while they are reading, while those downloading large objects are disadvantaged. This depends on which clients you believe are more important. Also, one individual 8-bit network (a residential college) has paid extra to get more bandwidth.
The relevant parts of my configuration file are (IP addresses, etc, all changed):
# ACL definitions
# Local network definitions, domains a.net, b.net
acl LOCAL-NET dstdomain a.net b.net
# Local network; nets 64 - 127. Also nearby network class A, 10.
acl LOCAL-IP dst 192.168.64.0/18 10.0.0.0/8
# Virtual i/f used for slow access
acl virtual_slowcache myip 192.168.100.13
# All permitted slow access, nets 96 - 127
acl slownets src 192.168.96.0/19
# Special 'fast' slow access, net 123
acl fast_slow src 192.168.123.0/24
# User hosts
acl my_user_hosts src 192.168.100.2/31
# Don't need ident lookups for billing on (free) slow cache
ident_lookup_access allow my_user_hosts !virtual_slowcache
ident_lookup_access deny all
# Security access checks
http_access [...]
# These people get in for slow cache access
http_access allow virtual_slowcache slownets
http_access deny virtual_slowcache
# Access checks for main cache
http_access [...]
# Delay definitions (read config file for clarification)
delay_pools 2
delay_initial_bucket_level 50
delay_class 1 3
delay_access 1 allow virtual_slowcache !LOCAL-NET !LOCAL-IP !fast_slow
delay_access 1 deny all
delay_parameters 1 8192/131072 1024/65536 256/32768
delay_class 2 2
delay_access 2 allow virtual_slowcache !LOCAL-NET !LOCAL-IP fast_slow
delay_access 2 deny all
delay_parameters 2 2048/65536 512/32768
The same setup is also used by some departments, with class 2 delay pools to give them more flexibility in giving different performance to different labs or students.
Where else can I find out about delay pools?
This is also pretty well documented in the configuration file, with examples. Squid installs with a squid.conf.documented or squid.conf.default file. If you no longer have a documented config file, the latest version is provided on the squid-cache.org website.
delay_parameters
delay_pools
delay_class
delay_access
external_acl_type
http://linux.bigresource.com/Networking-How-to-Apply-Delay-Pools-on-Squid-MbzV59ZTr.html
http://www.squid-cache.org/mail-archive/squid-users/201403/0239.html
http://stackoverflow.com/questions/22504422/how-to-manage-squid-based-on-per-user-user-bandwidth
|
|