
Squid is a popular choice among programmers, system administrators, and computer-networking enthusiasts for building and managing an effective proxy server. The program is especially attractive because it is cross-platform: you can install and run it on Linux and other Unix-like operating systems as well as on Windows. Its capabilities are extensive. How can they be used, and are there OS-specific nuances in setting the program up?

General information about Squid

What is Squid? This is the name of a powerful caching proxy server, most often used with web clients. With its help you can provide simultaneous Internet access for several users. Another notable feature of Squid is that it caches responses to requests, which lets files be served faster because they do not have to be downloaded from the Internet again. Squid can also regulate how the Internet channel's bandwidth is shared depending on the actual load.

Squid was designed for Unix platforms, but there are builds of Squid for Windows and many other popular operating systems. Like many Unix-like operating systems, the program is free. It supports FTP and SSL and allows flexible control over access to resources. Squid also caches DNS queries. In addition, you can configure a transparent Squid proxy, in which the server operates so that the user does not know he is accessing the network through it rather than directly. Thus, Squid is a powerful tool in the hands of a system administrator or an Internet service provider.

Practical usefulness of Squid

When is Squid most useful? A typical task is one in which several computers must be joined into a network and provided with Internet access. Using a proxy server makes sense here because requests between the proxy and a specific PC's browser are served faster than if the user interacted with the Internet directly. Also, when using Squid, caching in the browser itself can be disabled entirely, a feature many users appreciate.

Squid Composition

The solution in question consists of several components; it is, in effect, a software package. It includes the application that runs the server itself, as well as a companion program for working with DNS. An interesting feature of the latter is that it launches processes that each run independently of the others, which optimizes the server's interaction with DNS.

Program installation

Installing Squid usually causes no difficulties. On Linux it is very easy: just enter the command $ sudo apt-get install squid.

Squid for Windows is a little more complicated. The point is that the program is not distributed as a conventional installer with executable setup files, the usual format for applications on Microsoft's OS.

However, installing Squid on Windows can still be done quite quickly. You need to find, on relevant resources, a distribution containing files of the .bat type, which are in some ways close to traditional Windows executables. Copy them to a separate folder on the disk, then start Squid as a system service. After this, the program can be used as a proxy from a browser on the PC. That essentially completes the Squid installation.

The proxy server distribution almost always contains a configuration file with the .conf extension. It is the main tool for configuring Internet access for the user's computer and the other devices joined into a local network through Squid.

Setting nuances

What nuances does Squid setup involve? On Windows, work with the proxy server is carried out by editing configuration files.

On Linux, the terminal can be used for some procedures, but overall in that operating system, just as when Squid is configured on Windows, the squid.conf file is most often used. It contains expressions ("commands") according to which the server manages network connections.

Let's take a closer look at how Squid is configured. The first step is to allow network users access to the server. To do this, set the appropriate values in the squid.conf file for http_port and http_access, and create an access control list (ACL). The http_port setting matters because our task is to have Squid serve only a specific group of computers. The http_access directive matters because it lets us regulate access to specific network resources requested from certain addresses (other criteria are possible as well: protocols, ports, and other properties described in ACLs).

How to set the necessary settings? It's very easy to do.

Let's say we have created a computer network with addresses ranging from 192.168.0.1 to 192.168.0.254. In that case, the following parameter should be set in the ACL settings: src 192.168.0.0/24. To configure the listening port, add an http_port entry to the configuration file with the server's IP address and a port number, for example http_port 192.168.0.1:3128 (3128 is Squid's default port).

To restrict access to the proxy created with Squid to computers on the local network, you need to make changes to http_access. This is done simply, with the expressions ("commands," as we agreed to call them, although strictly speaking they are configuration directives rather than terminal commands) http_access allow localnet and http_access deny all. It is very important to place the first rule above the second, since Squid evaluates them in order.
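Putting these pieces together, a minimal squid.conf fragment for this scenario might look as follows (the network range and the default port 3128 are illustrative):

```
# Define the local network described above
acl localnet src 192.168.0.0/24

# Listen on the LAN interface (3128 is Squid's default port)
http_port 192.168.0.1:3128

# Order matters: allow the LAN first, then deny everyone else
http_access allow localnet
http_access deny all
```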

Working with ACLs: denying access to sites

Squid's access controls are in fact very flexible. Let's look at examples that are useful in the practice of managing local networks.

The src element is quite popular. It matches the IP address of the computer that sent the request to the proxy server. By combining src with http_access, you can, for example, allow network access to a specific user while denying it to everyone else. This is very simple to do.

We write acl (user group name) src (range of IP addresses to regulate). On the line below: acl (name of a specific computer) src (IP address of that PC). After that we work with http_access: we grant network access to the group of users and to the individual PC with http_access allow rules, and on the line below we deny access to all other computers with http_access deny all.
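As a sketch (the acl names and addresses here are made up for illustration, not taken from the text):

```
# The whole office subnet and one specific machine
acl office src 192.168.0.0/24
acl boss_pc src 192.168.0.15

# Allow the group and the individual PC; deny everyone else
http_access allow office
http_access allow boss_pc
http_access deny all
```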

Setting up a Squid proxy also involves another useful element of the access control system: dst. It matches the IP address of the server the proxy user wants to connect to.

Using this element, we can, for example, restrict access to a particular subnet. To do so, write acl (network designation) dst (subnet address), and on the line below http_access deny (name of a specific computer) (network designation).

Another useful element is dstdomain. It matches the domain the user wants to connect to. With it we can restrict a particular user's access to external Internet resources. To do so, write acl (group of sites) dstdomain (site addresses), and on the line below http_access deny (computer name) (group of sites).
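A sketch of both elements (the subnet, domain, and host names are assumptions for illustration):

```
# Block a destination subnet for one workstation
acl pc1 src 192.168.0.20
acl blocked_net dst 10.10.0.0/24
http_access deny pc1 blocked_net

# Block a group of sites by domain for the same workstation
acl badsites dstdomain .example.com .social.example
http_access deny pc1 badsites
```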

There are other notable elements in the access control system, among them the domain regular-expression type dstdom_regex. With it, you can restrict user access to Internet domains containing a certain word, for example mail (if the task is to keep company employees off third-party mail servers). To do so, you can write acl SitesRegexMail dstdom_regex mail, then acl SitesRegexComNet dstdom_regex \.com$ \.net$ (meaning access will be denied for domains of the corresponding types). On the line below, add http_access deny rules for the computers whose access to external mail servers is undesirable.

Some expressions can take the -i switch, which makes pattern matching case-insensitive. Using it together with the url_regex element, which defines a pattern for whole web addresses, we can deny access to files with a given extension.

For example, with the rule acl NoSwfFromMail url_regex -i mail.*\.swf$ we regulate access to mail sites that contain Flash (.swf) content. If the site's domain name does not need to take part in the match, the urlpath_regex expression can be used instead, for example acl media urlpath_regex -i \.wma$ \.mp3$.
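Wired into the configuration, these regex-based rules could look like this (the acl names follow the article; the http_access lines are an assumed way to apply them to everyone):

```
# Deny .swf files served from mail sites
acl NoSwfFromMail url_regex -i mail.*\.swf$
http_access deny NoSwfFromMail

# Deny audio files regardless of the domain
acl media urlpath_regex -i \.wma$ \.mp3$
http_access deny media
```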

Denying access to programs

Setting up Squid also lets you deny users access to certain programs' traffic when they use the proxy server's resources. For this purpose, the rule acl (program name) port (port range) can be used, with http_access deny all (program name) on the line below.
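A sketch with a hypothetical application port (5190, the classic ICQ/OSCAR port, is used here purely as an example):

```
# Match traffic to the application's port and deny it for everyone
acl icq port 5190
http_access deny all icq
```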

Leveraging standards and protocols

Squid also allows the system administrator to control which protocols may be used over the Internet channel. For example, if a person at a specific PC must be prevented from accessing the network via the FTP protocol, you can use: acl ftpproto proto FTP, and on the line below http_access deny (computer name) ftpproto.

Using the method element, we can match the way an HTTP request is made. The two main methods are GET and POST, and in some situations one should be permitted but not the other. For example, a particular employee may be forbidden to use mail through mail.ru (logging in and sending mail require POST requests), while the employer does not object to the person reading news on the same site. The system administrator can then write: acl sitemailru dstdomain .mail.ru, on the line below acl methodpost method POST, and then http_access deny (computer name) methodpost sitemailru.
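The complete fragment for the mail.ru example might read as follows (the computer acl is an assumed example):

```
acl employee_pc src 192.168.0.30
acl sitemailru dstdomain .mail.ru
acl methodpost method POST

# Deny POST requests to mail.ru from this PC; GET (reading news) still works
http_access deny employee_pc methodpost sitemailru
```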

These are the nuances of Squid configuration. Whether Ubuntu, Windows, or another OS compatible with the proxy server is used, the configuration features we have considered are broadly the same in any environment where Squid runs. Working with this software is an engaging and, thanks to the logic and transparency of its main configuration mechanisms, uncomplicated process.

Let's note some key points specific to setting up Squid.

What to pay attention to when setting up?

If you have difficulty finding the squid.conf file, the main server configuration tool, try checking the /etc/squid directory.

It is best to edit this file with the simplest possible text editor: no formatting elements should end up in the lines responsible for configuring the proxy server.

In some cases it may be necessary to route traffic through the provider's proxy server. The cache_peer directive exists for this; it is entered with the address of the provider's proxy along with its type and ports.

In some cases it is useful to set the amount of memory that Squid may use; this can be done with the cache_mem directive. It is also useful to specify the directory where cached data will be stored, which is done with the cache_dir expression. In the first case the directive looks like cache_mem (amount of RAM, with units), in the second like cache_dir (storage type, directory path, amount of disk space in megabytes, subdirectory counts). If you have a choice, it is advisable to place the cache on the fastest disks.
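As a sketch (the sizes, path, and upstream host are illustrative assumptions):

```
# Up to 256 MB of RAM for hot objects
cache_mem 256 MB

# 2048 MB on disk, with 16 first-level and 256 second-level subdirectories
cache_dir ufs /var/spool/squid 2048 16 256

# Forward requests through the provider's proxy (hypothetical host and ports)
cache_peer proxy.provider.example parent 3128 0 no-query default
```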

You may also need to specify which computers have access to the proxy server. This can be done with directives such as acl allowed_hosts src (range of computer IP addresses) and acl localhost src 127.0.0.1.

If connections use SSL ports, they can also be listed with an acl SSL_ports port directive. You can then forbid the CONNECT method on all ports except those designated for secure SSL connections; the expression http_access deny CONNECT !SSL_ports does exactly that.
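These port-related rules commonly appear together, roughly as follows (the port lists shown are the usual defaults, given as an example):

```
acl SSL_ports port 443
acl Safe_ports port 80 21 443 1025-65535
acl CONNECT method CONNECT

# Refuse requests to unusual ports and CONNECT to anything but SSL ports
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
```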

Squid and pfSense

In some cases the proxy server in question is used together with pfSense, which serves as an effective firewall and routing platform. How can their joint operation be organized? The algorithm for solving this problem is not too complicated.

First, we need to work in the pfSense interface. Squid, which we have already configured, will be installed over SSH, one of the most convenient and safest ways to administer such systems. To enable SSH access, activate the corresponding Enable item in the interface: select the System menu, then Advanced, then Admin Access.

After this, download PuTTY, a convenient application for working over SSH. Next, using the console, install Squid; this is easy to do with the pkg install squid command. The proxy must then also be set up through the pfSense web interface. Select the System menu, then Packages, then Available Packages; the Squid Stable package should appear in the corresponding window. Choose it. The following settings should be made: Proxy Interface: LAN. You can tick the checkbox next to the Transparent Proxy line. Specify a location for the log and select Russian as the preferred language. Click Save.

Resource optimization tool

Configuring Squid allows system administrators to allocate server resources efficiently. In this case we are not talking about denying access to any site; rather, the intensity of channel use by a given user or group may need to be controlled. The program solves this task in several ways. First, through caching: files need not be downloaded from the Internet repeatedly, which reduces the traffic load. Second, through time limits on network access. Third, by setting limits on data-exchange speed for particular users or particular types of downloaded files. Let us consider these mechanisms in more detail.

Optimizing network resources through caching

Network traffic includes many types of files that are used unchanged: once such a file has been downloaded to the computer, the user does not have to repeat the operation. Squid allows flexible configuration of the mechanism by which the server recognizes such files.

A fairly useful option of the proxy server under study is checking the age of files in the cache. Objects that have sat in the corresponding memory area for too long should be updated. The option is controlled with the refresh_pattern directive; the full expression looks like refresh_pattern (URL pattern, minimum time in minutes, maximum share of "fresh" files in percent, maximum period in minutes). If a file has been in the cache longer than the established criteria allow, a new version of it may be downloaded.
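For instance (the pattern and the numbers here are illustrative):

```
# Treat images as fresh for at least a day (1440 min) and at most a week
# (10080 min), or while their age is under 90% of the time since modification
refresh_pattern -i \.jpg$ 1440 90% 10080
```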

Optimizing resources through time-based access restrictions

Another option available thanks to Squid's capabilities is limiting user access to network resources by time. It is set with a very simple rule: acl (name) time (days) (start time-end time). Access can be restricted on any day of the week using single-letter codes: M for Monday, T for Tuesday, W for Wednesday, H for Thursday, F for Friday, A for Saturday, and S for Sunday. If no day codes are given, the restriction applies to the whole week. Interestingly, it is also possible to regulate the access schedule for users of particular programs.
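For example, to let a workstation online only during working hours on weekdays (the acl name, address, and hours are assumptions):

```
acl pc2 src 192.168.0.40
# M T W H F = Monday through Friday in Squid's day codes
acl worktime time MTWHF 09:00-18:00

http_access allow pc2 worktime
http_access deny pc2
```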

Optimizing resources through rate limiting

A fairly common task is optimizing resources by regulating the permissible data-exchange rate within the computer network. The proxy server we are studying is a most convenient tool for solving it. The speed of data exchange is regulated with the delay_class, delay_parameters, and delay_access parameters, together with the delay_pools element. All four components matter greatly for the bandwidth-management challenges facing system administrators on local networks.
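A minimal delay-pools sketch, assuming a single aggregate (class 1) pool capped at 256 KB/s for the local network:

```
# assumes the localnet acl from earlier in the article
acl localnet src 192.168.0.0/24

delay_pools 1
delay_class 1 1
# restore rate / bucket size, in bytes per second (256 KB/s here)
delay_parameters 1 262144/262144
delay_access 1 allow localnet
delay_access 1 deny all
```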

The problem is the following. The company uses Active Directory, Internet access goes through a proxy, and there is a DMZ. The proxy and its port are entered in the browser settings, in the Connections section. ISA 2006 is used to access the DMZ (I have the ISA 2004 client). We access the Internet through squid.

After installing Windows 7 (on a clean machine) I configured Internet Explorer as usual and entered the proxy and port, but Internet Explorer refuses to access the Internet (accordingly, activation and updates stop working). On starting, it asks for a login and password, but in the end I get a page with the following message:
==============================
ERROR: Cache access denied.

ERROR

Cache access denied

The following error occurred:

  • Cache access denied

Sorry, you cannot request:

http://go.microsoft.com/fwlink/? from this cache until you pass authentication.

To do this you need Netscape version 2.0 or higher, or Microsoft Internet Explorer 3.0, or HTTP/1.1 compatible browser. Please contact the cache administrator if you have problems with authentication, or change your default password.


Generated Thu, 18 Jun 2009 04:11:33 GMT by apk-proxy2.apk.gov (squid/2.6.STABLE18)
==============================

There is no such problem in Firefox: I specified the proxy in its settings, entered my username and password, and here I sit, writing this letter.
On XP and Vista everything is OK too.

I have installed builds 7100 (rus), 7201, and now 7231 (rus); the situation does not change.

The question is: what should be adjusted, and where?

Many system administrators are faced with the issue of limiting user access to certain Internet resources. In most cases, resource-intensive and complex content filtering is not necessary; URL lists are sufficient. It is quite possible to implement this method using the squid proxy server without involving third-party software.

The method of "black" and "white" lists is ideal for restricting access to resources whose addresses are known in advance but are for some reason undesirable, for example social networks. Compared to content filtering, this method has many disadvantages, but on the other hand it is much simpler to implement and requires far fewer computing resources.

The effectiveness of this method should be judged in terms of the task at hand: if you need to block social networks and a number of entertainment resources on which employees spend the most time, then filtering by URL lists can solve the problem completely. Such filtering will be ineffective, however, if you need to restrict access to resources by the nature of their content.

In what follows we assume that the reader has basic Linux administration skills. We also remind you that all commands below should be executed as the superuser.

First of all, let's create the list file. It can be located anywhere, but it is logical to place it in the squid configuration directory: /etc/squid (or /etc/squid3 if you are using squid3):

touch /etc/squid/blacklist

and start filling it out. URLs should be specified using RegExp syntax; we will not dwell on this in detail, since it is beyond the scope of the article, and you can read more about RegExp rules elsewhere. For example, let's block popular social networks:

vk\.com
odnoklassniki\.ru

Please note that the dot in RegExp is a service character and therefore must be escaped with \ (backslash).
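Before restarting squid, you can sanity-check the patterns with grep, which performs the same kind of extended-regex, case-insensitive matching that url_regex -i does (the scratch file path and test URLs are just for this check):

```shell
# Write the example patterns to a scratch copy of the list
printf 'vk\\.com\nodnoklassniki\\.ru\n' > /tmp/blacklist

# A URL that should be blocked...
printf '%s\n' 'http://VK.com/feed' | grep -E -i -q -f /tmp/blacklist \
  && echo blocked || echo allowed

# ...and one that should pass
printf '%s\n' 'http://example.org/' | grep -E -i -q -f /tmp/blacklist \
  && echo blocked || echo allowed
```

If the first check prints blocked and the second allowed, the escaping is correct.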

In the squid configuration file (/etc/squid/squid.conf), let's create an acl list that includes the hosts or users for which filtering will be performed:

acl url_filtred src 10.0.0.100-10.0.0.199

In our case, filtering is enabled for all hosts in the address range 10.0.0.100-10.0.0.199, i.e. we filter the Internet only for a certain group of users.

Then we connect our list:

acl blacklist url_regex -i "/etc/squid/blacklist"

The -i switch makes the list case-insensitive.

Now let's go to the rules section and, before the rule

http_access allow localnet

add:

http_access deny blacklist url_filtred

Let us draw your attention once again to the fact that all rules in squid are processed sequentially, up to the first match, so if we place a more general rule before a more specific one, the specific rule will never fire. The same is true for overlapping rules: the first one wins.
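Assembled together, the relevant fragment of squid.conf from this section looks like this (the closing deny all follows the standard recommendation quoted later in the text):

```
acl url_filtred src 10.0.0.100-10.0.0.199
acl blacklist url_regex -i "/etc/squid/blacklist"

# The specific deny must come before the general allow
http_access deny blacklist url_filtred
http_access allow localnet
http_access deny all
```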

Save the changes and restart squid:

service squid restart

Let's try to visit a site from the list; if everything is done correctly, you will see a squid message denying access to this resource.

The reason for posting this article is the following: very often on forums I see topic starters giving examples of their configs with a monstrous jumble of acl and http_access directives, especially in topics dealing with problems of access through squid. When setting up any system, I try to adhere to some logical order. Today I will try to convey to you the http_access ordering I stick to.

The main idea lies in this phrase taken from the Squid manual for http_access:

If none of the "access" lines cause a match, the default is the opposite of the last line in the list. If the last line was deny, the default is allow. Conversely, if the last line is allow, the default will be deny. For these reasons, it is a good idea to have an "deny all" entry at the end of your access lists to avoid potential confusion.


I have already discussed some tips and rules in a previous article. Now I will try to convey my vision of the ordering of acl and http_access. Actually, the order of the acl directives themselves makes no difference; the main thing is that the acls are defined BEFORE the http_access rules that use them (I did not find confirmation of this in the manuals, but I had access problems when placing an acl after http_access). I also try to group acls of similar types together.

Setting up http_access in squid

It is precisely the order of the http_access directives that determines whether a client will get access to the cache. The algorithm for processing a request is as follows:

  1. Each request is sequentially compared with each line of the http_access rules.
  2. Until the request matches one of the allow or deny rules, it passes from rule to rule, checking its credentials against each one.
  3. If more than one acl is specified in a single http_access line, a logical AND is applied between them: http_access allow|deny acl1 AND acl2 AND ... Separate http_access lines, in turn, are combined by OR.
  4. If the request does not fall under any rule, squid by default applies the action opposite to the last rule in the list. For example, if we have a single allowing rule for a certain user, http_access allow user, and a request does not fall into that acl, the deny action is applied to it. And if we have two rules, http_access allow user and http_access deny user2, and a request falls into neither acl user nor acl user2, then allow is applied to it, that is, the action opposite to the last rule, http_access deny user2.

These are the basics. But there are some nuances revealed during operation.

Tuning http_access

We know that there are acls that use external helpers or other modules (for example, srcdomain and srcdom_regex, which require name resolution). A logical conclusion is that such acls may work somewhat slower than acls of the src or dst type. According to the wiki, since squid version 3.1.0.7 the developers divide acls into "fast" and "slow" groups.

Accordingly, if we put the denying rules first, especially "fast" ones such as

http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

then they are processed very quickly, reducing the load: fewer requests reach the rules that use authentication, that is, the rules that really need it. So try to place denying http_access rules that use acls without external modules as high as possible, then place the allowing rules. And don't forget to specify deny all at the end (to avoid potential confusion).

Troubleshooting acl and http_access

If there are still problems when configuring the http_access directives and the desired user does not get access, you can try using the debug_options directive. Example:

debug_options ALL,1 33,2

After reconfiguring squid, messages about allowed/denied access for each session, together with the name of the acl by which access was granted or refused, will begin to appear in cache.log. If this information is not enough, you can increase the detail level:

debug_options ALL,1 33,2 28,9

Summary

That's all for today. We have looked at setting up acl and http_access, the logic of processing http_access directives, and some recommendations. Perhaps one of the readers will correct me somewhere, for which I will be very grateful. In addition to this article, I recommend reading http://wiki.squid-cache.org/SquidFaq/SquidAcl.

Best regards, McSim!
