
The main indicators of a switch that characterize its performance are:
  • frame filtering rate;
  • frame forwarding rate;
  • throughput;
  • frame transmission delay.

In addition, several characteristics of a switch have the greatest impact on these performance indicators. These include:

  • the type of switching;
  • the size of the frame buffer(s);
  • the performance of the switching matrix;
  • the performance of the processor or processors;
  • the size of the switching table.

Filtering rate and forwarding rate

The filtering rate and the forwarding rate are the two main performance characteristics of a switch. They are integral indicators and do not depend on how the switch is implemented internally.

Filtering rate

The filtering rate determines the rate at which the switch performs the following frame processing steps:

  • receiving the frame into its buffer;
  • discarding the frame if an error is found in it (the checksum does not match, or the frame is shorter than 64 bytes or longer than 1518 bytes);
  • discarding the frame to avoid loops in the network;
  • discarding the frame in accordance with the filters configured on the port;
  • looking up the destination port in the switching table based on the frame's destination MAC address, and discarding the frame if the frame's source and destination are connected to the same port.

The filtering rate of almost all switches is non-blocking: the switch manages to discard frames at the rate at which they arrive.

The forwarding rate determines the rate at which the switch performs the following frame processing steps:

  • receiving the frame into its buffer;
  • looking up the destination port in the switching table based on the frame's destination MAC address;
  • transmitting the frame to the network through the destination port found in the switching table.

Both the filtering rate and the forwarding rate are usually measured in frames per second. If the switch specifications do not state for which protocol and frame size these values are given, it is assumed by default that they refer to the Ethernet protocol and minimum-size frames, that is, frames 64 bytes long (without the preamble) with a 46-byte data field. The use of minimum-length frames as the main measure of switch processing speed is explained by the fact that such frames always create the hardest operating mode for the switch compared with frames of other sizes carrying the same amount of user data. Therefore, when testing a switch, the minimum-frame-length mode is used as the most demanding test, which must verify the switch's ability to cope with the worst combination of traffic parameters.
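
As a rough illustration (the 10 Mbit/s line rate and framing overhead below are assumptions for this sketch, not figures from the text), the following calculation shows why minimum-size frames are the hardest case: at the same line rate the switch must process many times more frames per second.

```python
# Illustrative only: classic 10 Mbit/s Ethernet with an 8-byte preamble and
# a 12-byte-time inter-frame gap accompanying every frame on the wire.
LINE_RATE_BPS = 10_000_000
OVERHEAD_BYTES = 8 + 12  # preamble + inter-frame gap

for frame_bytes in (64, 1518):
    frames_per_second = LINE_RATE_BPS // ((frame_bytes + OVERHEAD_BYTES) * 8)
    print(f"{frame_bytes}-byte frames: up to {frames_per_second} frames/s per port")

# 64-byte frames: up to 14880 frames/s per port
# 1518-byte frames: up to 812 frames/s per port
```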

Switch throughput is measured by the amount of user data (in megabits or gigabits per second) transmitted per unit of time through its ports. Since the switch operates at the link layer, the user data for it is the data carried in the data field of the frames of link-layer protocols: Ethernet, Fast Ethernet, and so on. The maximum throughput of a switch is always reached on frames of maximum length, because in this case the share of overhead per frame is much lower than for minimum-length frames, and the time the switch spends on frame processing per byte of user data is significantly smaller. Therefore, a switch can be blocking for minimum-length frames and still have very good throughput.

Frame transmission delay (forwarding delay) is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte appears at its output port. The delay is the sum of the time spent buffering the bytes of the frame and the time spent processing the frame by the switch, namely looking up the switching table, making the forwarding decision, and gaining access to the egress port medium.

The amount of delay introduced by the switch depends on the switching method used in it. If switching is carried out without buffering, then the delays are usually small and range from 5 to 40 µs, and with full frame buffering - from 50 to 200 µs (for frames of the minimum length).

Switching table size

The maximum capacity of the switching table defines the maximum number of MAC addresses the switch can handle at the same time. The switching table can store, for each port, both dynamically learned MAC addresses and static MAC addresses created by the network administrator.

The maximum number of MAC addresses that can be stored in the switching table depends on the switch's intended application. D-Link switches for workgroups and small offices typically support a MAC address table of 1K to 8K entries. Large workgroup switches support 8K to 16K MAC address tables, while network backbone switches typically support 16K to 64K addresses or more.

Insufficient switching-table capacity can cause the switch to slow down and clog the network with excess traffic. If the switching table is already full and a port sees a new source MAC address in an incoming frame, the switch cannot add it to the table. In this case, the frame addressed back to that MAC address will be sent out through all ports (except the source port), i.e. it will cause flooding.
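
The forwarding behaviour described above can be sketched as follows; the function, the example table and the port numbers are hypothetical and only illustrate flooding when the destination address is not in the table.

```python
def forward(switch_table, in_port, dst_mac, all_ports):
    """Return the list of ports the frame is sent out of (toy model)."""
    out_port = switch_table.get(dst_mac)
    if out_port is None:                  # destination unknown: flood
        return [p for p in all_ports if p != in_port]
    if out_port == in_port:               # destination behind the same port: filter
        return []
    return [out_port]

table = {"00:11:22:33:44:55": 3}                              # learned MAC -> port
print(forward(table, 1, "00:11:22:33:44:55", [1, 2, 3, 4]))   # [3]
print(forward(table, 1, "66:77:88:99:aa:bb", [1, 2, 3, 4]))   # [2, 3, 4] (flooding)
```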

Frame buffer size

To provide temporary storage of frames when they cannot be transferred to the output port immediately, switches - depending on their architecture - are equipped with buffers on the input ports, on the output ports, or with a common buffer shared by all ports. The buffer size affects both the frame delay and the frame-loss rate: the larger the buffer memory, the lower the probability of losing frames.

Typically, switches designed to operate in critical parts of the network have a buffer memory of several tens or hundreds of kilobytes per port. The buffer common to all ports is usually several megabytes in size.
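
For a rough sense of scale (the line rates and buffer sizes below are assumptions for illustration, and a real switch also drains the buffer while a burst arrives), the following sketch estimates how long a buffer can absorb traffic arriving at full line rate:

```python
def burst_time_ms(buffer_kib, line_rate_bps):
    # Worst case: nothing leaves the buffer while the burst is arriving.
    return buffer_kib * 1024 * 8 / line_rate_bps * 1000

print(round(burst_time_ms(128, 100_000_000), 1))    # 128 KiB at 100 Mbit/s -> ~10.5 ms
print(round(burst_time_ms(512, 1_000_000_000), 1))  # 512 KiB at 1 Gbit/s  -> ~4.2 ms
```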

Technical parameters of switches

The main technical parameters that can be used to evaluate a switch built on any architecture are the filtering rate and the forwarding rate.

The filtering rate determines the number of frames per second for which the switch manages to perform the following operations:

  • receiving the frame into its buffer;
  • finding the port for the frame's destination address in the address table;
  • destroying the frame (when the destination port is the same as the source port).

The forwarding rate, by analogy, determines the number of frames per second that can be processed using the following algorithm:

  • receiving the frame into its buffer;
  • finding the port for the frame's destination address;
  • transmitting the frame to the network through the destination port found in the address table.

By default, it is assumed that these indicators are measured for the Ethernet protocol and minimum-size frames (64 bytes long). Since most of the processing time is spent analyzing the header, the shorter the transmitted frames, the heavier the load they create on the processor and the switch bus.

The next most important technical parameters of a switch are:

  • bandwidth (throughput);
  • frame transmission delay;
  • the size of the internal address table;
  • the size of the frame buffer(s);
  • overall switch performance.

Throughput is measured by the amount of user data transferred through the ports per unit of time. Naturally, the longer the frame (the more data carried per header), the higher the throughput should be. So, with a typical rated forwarding rate of 14,880 frames per second for such devices, the throughput on 64-byte frames will be 5.48 Mbit/s, and the limit on the data rate will be imposed by the switch itself.

When transmitting frames of maximum length (1500 bytes of data), the forwarding rate will be 812 frames per second and the throughput 9.74 Mbit/s; here the limit on the data rate is determined by the speed of the Ethernet protocol itself.
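
The figures quoted above can be checked with a short back-of-the-envelope calculation; the framing overhead used below (8-byte preamble plus a 12-byte-time inter-frame gap) is assumed from standard Ethernet, not taken from the text.

```python
LINE_RATE_BPS = 10_000_000        # 10 Mbit/s Ethernet
OVERHEAD_BYTES = 8 + 12           # preamble + inter-frame gap

def max_frame_rate(frame_bytes):
    return LINE_RATE_BPS // ((frame_bytes + OVERHEAD_BYTES) * 8)

fps_min = max_frame_rate(64)      # a 64-byte frame carries 46 bytes of user data
fps_max = max_frame_rate(1518)    # a 1518-byte frame carries 1500 bytes of user data
print(fps_min, round(fps_min * 46 * 8 / 1e6, 2))    # 14880 frames/s, ~5.48 Mbit/s
print(fps_max, round(fps_max * 1500 * 8 / 1e6, 2))  # 812 frames/s, ~9.74 Mbit/s
```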

Frame transmission delay is the time elapsed from the moment the frame is written into the buffer of the switch's input port until it appears at the output port. In other words, it is the time needed to forward a single frame: buffering, table lookup, the filtering or forwarding decision, and gaining access to the egress port medium.

The amount of delay depends strongly on how frames are forwarded. With on-the-fly (cut-through) switching the delays are small, from 10 µs to 40 µs, while with full buffering they range from 50 µs to 200 µs (depending on the frame length).

If the switch (or even one of its ports) is heavily loaded, even with on-the-fly switching most incoming frames have to be buffered anyway. That is why the most sophisticated and expensive models can automatically change the switching mechanism (adapt) depending on the load and the nature of the traffic.

Size of the address table (CAM table). This specifies the maximum number of MAC addresses held in the table that maps MAC addresses to ports. In technical documentation it is usually given per port as a number of addresses, but sometimes the amount of memory allocated for the table is quoted in kilobytes instead (one entry occupies at least 8 bytes, and "substituting" one figure for the other is very convenient for an unscrupulous manufacturer).

The CAM table can differ for each port, and when it overflows the oldest entry is deleted and the new one is written in its place. Therefore, if the number of addresses is exceeded, the network can keep working, but the switch itself slows down considerably and the segments connected to it are loaded with excess traffic.
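
A toy sketch of this "evict the oldest entry" behaviour might look like the following; the class and its tiny capacity are purely illustrative, not how any particular switch implements its CAM.

```python
from collections import OrderedDict

class CamTable:
    """Toy address table that evicts its oldest entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # MAC address -> port number

    def learn(self, mac, port):
        if mac in self.entries:
            self.entries.move_to_end(mac)     # refresh an existing entry
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # drop the oldest entry
        self.entries[mac] = port

table = CamTable(capacity=2)
table.learn("aa:aa:aa:aa:aa:01", 1)
table.learn("aa:aa:aa:aa:aa:02", 2)
table.learn("aa:aa:aa:aa:aa:03", 3)           # table is full: the first MAC is evicted
print(list(table.entries))                    # ['aa:aa:aa:aa:aa:02', 'aa:aa:aa:aa:aa:03']
```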

Previously there were models (for example, the 3Com SuperStack II 1000 Desktop) whose table could store only one or a few addresses, so the network design had to be planned very carefully. However, even the cheapest desktop switches now have a table of 2-3K addresses (backbone models even more), and this parameter has ceased to be a bottleneck of the technology.

Buffer size. The switch needs a buffer to temporarily store data frames when they cannot be transferred to the destination port immediately. Traffic is inherently uneven: there are always bursts that need to be smoothed out, and the larger the buffer, the more load it can absorb.

Simple switch models have a buffer memory of several hundred kilobytes per port; in more expensive models this value reaches several megabytes.

Switch performance. First of all, it should be noted that a switch is a complex multiport device, and its suitability for a given task cannot be judged from any single parameter taken in isolation. There is a huge variety of traffic patterns, with different rates, frame sizes, distribution across ports, and so on. There is still no common assessment methodology (no reference traffic), so various proprietary "vendor tests" are used. They are quite complex, and in this book we will have to limit ourselves to general recommendations.

An ideal switch should transmit frames between ports at the same rate as the connected nodes generate them, without loss and without introducing additional delays. To do this, the internal elements of the switch (port processors, the inter-module bus, the CPU, etc.) must be able to cope with the incoming traffic.

At the same time, in practice there are many quite objective restrictions on what switches can do. The classic case, in which several network nodes interact intensively with one server, will inevitably reduce real performance because of the fixed speed of the protocol.

Today, manufacturers have fully mastered the production of 10/100Base-T switches; even very cheap models have sufficient throughput and fast enough processors. Problems start when more complex mechanisms have to be applied: limiting the speed of connected nodes (back pressure), filtering, and the other features discussed below.

In conclusion, the best criterion is still practice, when the switch shows its capabilities in a real network.

Additional features of switches.

As mentioned above, today's switches have so many features that conventional switching (which seemed like a technological miracle ten years ago) is receding into the background. Indeed, models costing from $50 to $5000 can switch frames quickly and with relatively high quality. The difference is in the additional features.

It is clear that managed switches have the largest number of additional features. In the description below, we will specifically highlight options that usually cannot be properly implemented on simpler switches.

Connecting switches in a stack. This additional option is one of the simplest and is widely used in large networks. Its purpose is to connect several devices with a high-speed common bus in order to increase the performance of the communication node. Options for unified management, monitoring and diagnostics are sometimes available as well.

It should be noted that not all vendors connect switches through special stacking ports. Gigabit Ethernet links, or the grouping of several (up to 8) ports into one communication channel, are becoming more common in this area.

Spanning Tree Protocol (STP). For simple LANs, maintaining a correct Ethernet topology (a hierarchical star) during operation is not difficult. But in a large infrastructure this becomes a serious problem: an incorrect cross-connection (closing a segment into a ring) can bring down the entire network or part of it. Moreover, finding the place of the fault may not be easy at all.

On the other hand, such redundant connections are often convenient (many transport data networks are built precisely on a ring architecture) and can greatly increase reliability, provided there is a correct mechanism for handling loops.

To solve this problem, the Spanning Tree Protocol (STP) is used: the switches automatically build an active tree-like link configuration, discovering it by exchanging service packets (Bridge Protocol Data Units, BPDUs) carried in the data field of Ethernet frames. As a result, ports that close a loop are blocked, but they can be enabled automatically if the main link fails.
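
As a minimal illustration of how the tree gets its root (a simplification of IEEE 802.1D, not the complete algorithm), every switch advertises in its BPDUs a bridge ID built from a priority and its MAC address, and the numerically lowest ID wins:

```python
# Illustrative bridge IDs; the lower (priority, MAC) pair wins the election.
bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:aa:00:00:00:01"},
    {"name": "SW2", "priority": 4096,  "mac": "00:aa:00:00:00:02"},
    {"name": "SW3", "priority": 32768, "mac": "00:aa:00:00:00:03"},
]

root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print("Root bridge:", root["name"])   # SW2, because it has the lowest priority
```

The remaining switches then compute their cheapest path to this root and block the redundant ports, which is what keeps the active topology loop-free.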

Thus, the spanning tree algorithm (STA) provides support for redundant links in networks of complex topology and makes it possible to change the topology automatically without administrator involvement. This feature is more than useful in large (or distributed) networks but, due to its complexity, it is rarely implemented in simpler switches.

Ways to control the incoming flow. As noted above, if the switch is loaded unevenly, it simply cannot physically pass the data flow through itself at full speed. But simply discarding excess frames is highly undesirable for obvious reasons (it breaks TCP sessions, for example). Therefore, a mechanism is needed to limit the rate at which a node transmits traffic.

Two approaches are possible. The first is aggressive capture of the transmission medium (for example, the switch may not respect the standard inter-frame intervals), but it is suitable only for a shared medium, which is rarely used in switched Ethernet. The backpressure method, in which dummy frames are transmitted to the node, has the same drawback.

Therefore, in practice the flow control technology described in the IEEE 802.3x standard is in demand; its essence is that the switch sends special "pause" frames to the node.
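
A sketch of what such a "pause" frame looks like on the wire, assuming the standard 802.3x layout (a reserved multicast destination address, EtherType 0x8808, the PAUSE opcode and a 16-bit pause time in units of 512 bit times):

```python
import struct

def build_pause_frame(src_mac, pause_quanta):
    dst = bytes.fromhex("0180C2000001")        # reserved MAC-control multicast address
    ethertype = struct.pack("!H", 0x8808)      # MAC Control
    opcode = struct.pack("!H", 0x0001)         # PAUSE
    quanta = struct.pack("!H", pause_quanta)   # pause time, units of 512 bit times
    frame = dst + src_mac + ethertype + opcode + quanta
    return frame.ljust(60, b"\x00")            # pad to minimum length (FCS excluded)

frame = build_pause_frame(bytes.fromhex("001122334455"), pause_quanta=0xFFFF)
print(len(frame), frame[:18].hex())
```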

Traffic filtering. It is often very useful to set additional filtering conditions on switch ports for incoming or outgoing frames. In this way it is possible to restrict the access of certain user groups to certain network services based on the MAC address or the virtual network (VLAN) tag.

As a rule, filtering conditions are written as Boolean expressions formed with the logical AND and OR operations, for example:
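
A hypothetical example of such a condition (the rule, addresses and VLAN numbers are invented for illustration):

```python
BLOCKED_MACS = {"00:de:ad:be:ef:01"}   # hosts banned outright
ALLOWED_VLANS = {10, 20}               # VLANs permitted to reach this port

def drop_frame(src_mac, vlan_id):
    # Drop if the sender is blacklisted OR it does not belong to a permitted VLAN.
    return (src_mac in BLOCKED_MACS) or (vlan_id not in ALLOWED_VLANS)

print(drop_frame("00:de:ad:be:ef:01", vlan_id=10))  # True  (blacklisted MAC)
print(drop_frame("00:11:22:33:44:55", vlan_id=30))  # True  (VLAN not allowed)
print(drop_frame("00:11:22:33:44:55", vlan_id=10))  # False (frame passes)
```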

Complex filtering requires additional processing power from the switch, and if it is not enough, it can significantly reduce the performance of the device.

The ability to filter is very important for networks whose end users are "commercial" subscribers, whose behavior cannot be regulated by administrative measures. Since they may take unauthorized destructive actions (for example, forging the IP or MAC address of their computer), it is desirable to give them as few opportunities for this as possible.

Switching at the third level (Layer 3). Due to the rapid growth of speeds and the widespread use of switches, there is now a visible gap between the capabilities of switching and those of classical routing performed by general-purpose computers. In this situation, it is most logical to give the managed switch the ability to analyze frames at the third level (of the 7-layer OSI model). Such simplified routing makes it possible to increase speed significantly and to manage the traffic of a large LAN more flexibly.

However, in transport data transmission networks, the use of switches is still very limited, although the tendency to erase their differences from routers in terms of capabilities can be traced quite clearly.

Management and monitoring capabilities. Extensive additional features imply advanced and convenient management. Previously, simple devices could be controlled with a few buttons and a small digital indicator, or through the console port. But this is already in the past: recent switches are managed via a regular 10/100Base-T port using Telnet, a Web browser, or the SNMP protocol. While the first two methods are, by and large, just a convenient extension of the usual initial setup, SNMP allows the switch to be used as a truly versatile tool.

For Ethernet, only its extensions, RMON and SMON, are of interest. RMON-I is described below; in addition to it there is RMON-II, which covers higher OSI layers. In "mid-range" switches, as a rule, only RMON groups 1-4 and 9 are implemented.

The principle of operation is as follows: RMON agents in the switches send information to a central server, where special software (for example, HP OpenView) processes it and presents it in a form convenient for administration.

Moreover, the process can be controlled: by changing settings remotely, the network can be brought back to normal. In addition to monitoring and management, SNMP can be used to build a billing system. So far this looks somewhat exotic, but there are already examples of this mechanism being used in practice.
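
As an illustration of the kind of query such a system issues, the sketch below polls an interface octet counter over SNMP; it assumes the pysnmp 4.x high-level API, an SNMPv2c agent at 192.0.2.1 and the community string "public", none of which come from the text above.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                      # SNMPv2c community
    UdpTransportTarget(("192.0.2.1", 161)),                  # switch management address
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", 1)),   # octets received on port 1
))

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())
```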

The RMON-I MIB standard describes 9 object groups:

  1. Statistics - current accumulated statistics about the characteristics of frames, the number of collisions, erroneous frames (detailed by types of errors), etc.
  2. History - statistical data saved at certain intervals for subsequent analysis of trends in their changes.
  3. Alarms - statistical thresholds; when a threshold is exceeded, the RMON agent generates a specified event. Implementing this group also requires implementing the Events group.
  4. Host - data about network hosts found as a result of analysis of MAC addresses of frames circulating in the network.
  5. Host TopN - a table of the N network hosts with the highest values of the given statistical parameters.
  6. Traffic Matrix - statistics about the intensity of traffic between each pair of network hosts, ordered in the form of a matrix.
  7. Filter - packet filtering conditions; packets that meet the given condition can either be captured or can generate events.
  8. Packet Capture - a group of packets captured by specified filtering conditions.
  9. Event - conditions for event registration and event notification.

A more detailed discussion of the capabilities of SNMP would require no less space than this book, so we will limit ourselves to this rather general description of a complex but powerful tool.

Virtual networks (Virtual Local Area Network, VLAN). This is perhaps the most important (especially for home networks) and most widely used feature of modern switches. It should be noted that there are several fundamentally different ways of building virtual networks with switches. Because of its great importance for Ethernet service providers, a detailed description of the technology is given in one of the following chapters.

In brief, the idea is to create several virtual networks, independent of one another, on a single physical Ethernet LAN by means of switches (at layer 2 of the OSI model), while allowing the central router to manage ports (or groups of ports) on remote switches. This is what makes VLANs such a convenient tool for a provider of data transfer services.
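
The text does not name a specific tagging method, but the most common one is IEEE 802.1Q; a sketch of inserting such a tag (TPID 0x8100 plus priority and VLAN ID) into an Ethernet frame:

```python
import struct

def add_vlan_tag(frame, vlan_id, priority=0):
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP (3 bits) | DEI (1 bit) | VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)         # TPID 0x8100 + tag control information
    return frame[:12] + tag + frame[12:]          # insert after the dst + src MAC addresses

untagged = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
print(add_vlan_tag(untagged, vlan_id=100).hex())
```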

Key Features of Switches

Switch performance is what network integrators and administrators expect from this device in the first place.

The main indicators of the switch that characterize its performance are:

  1. frame filtering rate;
  2. frame forwarding rate;
  3. total throughput;
  4. frame transmission delay.

Filtering rate

The filtering rate determines the rate at which the switch performs the following frame processing operations:

  • receiving the frame into its buffer;
  • looking up the address table to select the destination port for the frame;
  • destroying the frame when its destination port and source port belong to the same logical segment.

The filtering rate of almost all switches is non-blocking: the switch has time to discard frames at the rate at which they arrive.

The forwarding rate determines the rate at which the switch performs the following frame processing steps:

  • receiving the frame into its buffer;
  • looking up the address table to find the port for the frame's destination address;
  • transmitting the frame to the network through the destination port found in the address table.

Both the filtering rate and the forwarding rate are usually measured in frames per second. By default, they refer to Ethernet frames of minimum length (64 bytes, not counting the preamble). Such frames create the heaviest operating mode for the switch.

The throughput of a switch is measured by the amount of user data (in megabits per second) transmitted per unit of time through its ports.

The maximum throughput of a switch is always reached on frames of maximum length. Therefore, a switch can be blocking for minimum-length frames and still show very good throughput.

Frame Delay is measured as the time elapsed from the moment the first byte of the frame arrives at the input port of the switch until the moment this byte appears at its output port.

The amount of delay introduced by the switch depends on the mode of its operation. If switching is carried out "on the fly", then the delays are usually small and range from 5 to 40 µs, and with full frame buffering - from 50 to 200 µs (for frames of minimum length).

On-the-fly and fully buffered switching

With on-the-fly switching, the part of the frame containing the destination address is received into the input buffer, a decision is made to filter the frame or forward it to another port, and if the output port is free the frame is transferred immediately, while the rest of it is still arriving in the input buffer. If the output port is busy, the frame is fully buffered in the input buffer of the receiving port. The disadvantage of this method is that the switch forwards erroneous frames: by the time the end of the frame can be analyzed, its beginning has already been transferred to another segment. This wastes useful network time.


Full buffering of received frames, of course, introduces a larger delay into data transmission, but it gives the switch the opportunity to fully analyze and, if necessary, transform the received frame.
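
A rough comparison of the delay each mode contributes (assuming 100 Mbit/s ports, a forwarding decision made as soon as the 14-byte header has arrived in cut-through mode, and ignoring lookup time):

```python
LINE_RATE_BPS = 100_000_000   # assumed 100 Mbit/s ports

def serialization_us(nbytes):
    return nbytes * 8 / LINE_RATE_BPS * 1e6

for frame_bytes in (64, 1518):
    cut_through = serialization_us(14)                  # wait only for the header
    store_and_forward = serialization_us(frame_bytes)   # wait for the whole frame
    print(f"{frame_bytes}-byte frame: cut-through ~{cut_through:.2f} us, "
          f"store-and-forward ~{store_and_forward:.2f} us before forwarding can start")
```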

Table 6.1 lists the features of switches operating in these two modes.

Table 6.1. Comparative characteristics of switches operating in the different modes

The topic of gigabit access is becoming ever more relevant, especially now that competition is growing, ARPU is falling, and even 100 Mbit/s tariffs no longer surprise anyone. We had long been considering the move to gigabit access but were put off by the price of the equipment and by questions of commercial feasibility. Competitors do not sleep, however, and when even Rostelecom began to offer tariffs above 100 Mbit/s, we realized we could not wait any longer. In addition, the price of a gigabit port has dropped significantly, and it has become simply unprofitable to install a Fast Ethernet switch that will have to be replaced with a gigabit one in a couple of years anyway. So we set about choosing a gigabit switch for the access level.

We reviewed various models of gigabit switches and settled on two that best suit our requirements and, at the same time, fit our budget: the D-Link DGS-1210-28ME and an SNR switch.

Case


The SNR case is made of thick, durable metal, which makes it heavier than its competitor. The D-Link is made of thin steel, which saves weight but makes it more susceptible to external impacts because of its lower strength.

D-link is more compact: its depth is 14 cm, while that of SNR is 23 cm. The SNR power connector is located on the front, which undoubtedly facilitates installation.

Power supplies


D-link power supply


SNR power supply

Although the power supplies are very similar, we still found differences. The D-Link power supply is made economically, perhaps even too economically: there is no lacquer coating on the board, and protection against interference at the input and output is minimal. As a result, with the D-Link there are concerns that these shortcomings will affect the switch's sensitivity to power surges and its operation in variable humidity and dusty conditions.

Switch board





Both boards are assembled neatly and there are no complaints about the workmanship; however, the SNR uses better PCB laminate, and its board is made with lead-free soldering. The point, of course, is not that the SNR contains less lead (that scares no one in Russia), but that these switches are produced on a more modern line.

In addition, just as with the power supplies, D-Link saved on varnish: the SNR board has a varnish coating, the D-Link board does not.

Apparently, it is assumed that the operating conditions of D-Link access switches will a priori be excellent: clean, dry, cool... just like everywhere else. ;)

Cooling

Both switches have passive cooling. The D-Link has larger heatsinks, which is a definite plus. However, the SNR has free space between the board and the rear wall, which helps heat dissipation. An additional nuance is the presence of heat-conducting pads under the chip, which carry heat away to the switch case.

We conducted a small test - we measured the temperature of the heatsink on the chip under normal conditions:

  • the switch is placed on a table at a room temperature of 22 °C;
  • two SFP modules are installed;
  • we wait 8-10 minutes.

The results were surprising: the D-Link heated up to 72 °C, while the SNR reached only 63 °C. It is better not to think about what will happen to the D-Link in a tightly packed box in the summer heat.



Temperature on the D-Link: 72 °C



On the SNR: 61 °C, everything is fine

Lightning protection

The switches are equipped with different lightning-protection systems: the D-Link uses gas discharge tubes, while the SNR uses varistors. Each has its pros and cons; however, varistors respond faster, and this provides better protection both for the switch itself and for the subscriber devices connected to it.

Summary

The D-Link leaves the impression of economy on every component: the power supply, the board, the case. In this case, therefore, the SNR strikes us as the preferable product.

