Network switch

In computer networking, a switch (also called a network switch or switching hub) is a coupling element that connects network segments. Within a segment (broadcast domain), it ensures that data frames reach their destination. In contrast to a repeater hub, which looks very similar at first glance, frames are not simply forwarded to all other ports but only to the port to which the destination device is connected: a switch makes its forwarding decision based on the automatically learned hardware addresses of the connected devices.

The term switch generally refers to a multi-port bridge: an active network device that forwards frames based on information from the data link layer (layer 2) of the OSI model. Occasionally the more precise terms bridging hub or switching hub are used. The first "EtherSwitch" was introduced by Kalpana in 1990. The word "switch" is actually borrowed from circuit-switching technology and does not accurately reflect the function of the device.

Switches that additionally process data at the network layer (layer 3 and above) are often referred to as layer-3 switches or multilayer switches. The device corresponding to a switch at the physical layer (layer 1) is the (repeater) hub. Besides Ethernet switches there are also Fibre Channel (FC) switches. FC is a non-routable protocol standard for storage networks, designed as the successor to SCSI for high-speed transmission of large amounts of data.

Features and Functions

Simple switches operate exclusively on layer 2 (the data link layer) of the OSI model. On receiving a frame, the switch reads the 48-bit source MAC address (for example 08:00:20:ae:fd:7e) and creates an entry in the Source Address Table (SAT) that records the MAC address together with the physical port on which it was received. In contrast to a hub, frames are then forwarded only to the port listed in the SAT for the corresponding destination address. If the path to the destination address is not yet known (learning phase), the switch forwards the frame to all other active ports. One difference between a bridge and a switch is the number of ports, or port density: bridges typically have only two ports, rarely three or more, whereas switches as individual units usually have between four (in SOHO installations), 12 to 48 (in commercial installations), or more (in data centers or large building installations), and can send and receive on all ports simultaneously and independently (non-blocking). Another possible difference from bridges is that some switches support the cut-through technique and other enhancements (see below). This reduces latency, i.e. the delay between sending a request and the arrival of the answer. Switches can also handle broadcasts; these are forwarded to all ports. With few exceptions: every switch is a bridge, but not every bridge is a switch. One exception are bridges that can connect different protocols such as Token Ring and Ethernet (MAC bridges or LLC bridges); such functionality is not found in switches.

To the devices connected to it, a switch behaves transparently (almost invisibly). When communication takes place mainly between devices within a segment, the use of a switch drastically reduces the number of frames circulating in the other segments. If the switch must forward frames into other segments, however, its use tends to delay communication (latency). If the capacity of a segment is overloaded, or if the switch has too little buffer memory, frames may have to be dropped. This is mostly compensated by protocols in higher layers, such as TCP.

Layer 2 and Layer 3 switches

A distinction is made between layer-2 switches and layer-3 (or higher) switches. Layer-2 devices are the simpler models. They offer only basic functions and usually have no management functions (but are plug-and-play capable), or only limited ones such as port blocking or statistics. Professional layer-3 or higher switches generally have management features; in addition to the basic switch functions they offer control and monitoring functions, which may be based on layer-2 information as well as on information from higher layers, such as IP filtering, prioritization for quality of service, or routing. Unlike in a router, the forwarding decision in a layer-3 switch is made in hardware and is therefore faster, with lower latency.

Management

Depending on the manufacturer, a switch with management functions is configured and controlled through a command line (via Telnet or SSH), a web interface, special control software, or a combination of these. Current unmanaged (plug-and-play) switches of the upper range also support functions such as VLAN tagging or prioritization while still doing without a console or other management interface.

Operation

In the following, unless otherwise indicated, layer-2 switches are assumed. The individual ports of a switch can receive and send data independently of one another. They are connected either via an internal high-speed bus (backplane switch) or crosswise with one another (matrix switch). Data buffers ensure that, wherever possible, no frames are lost.

Source Address Table

As a rule, a switch does not need to be configured. When it receives a frame after being switched on, it stores the MAC address of the sender and the corresponding interface in the SAT.

If the destination address is found in the SAT, the receiver is in the segment connected to the corresponding interface, and the frame is forwarded to that interface. If receiving and destination segments are identical, the frame does not need to be forwarded, since communication can take place within the segment itself without the switch. If the destination address is not (yet) in the SAT, the frame must be forwarded to all other interfaces. In an IPv4 network, the SAT entry is usually created during the ARP address requests that are necessary anyway: the ARP request yields the association for the source MAC address, and the response frame then yields the receiver's MAC address. Since ARP requests are broadcasts, and the answers always go to already learned MAC addresses, no unnecessary traffic is generated. Broadcast addresses are never added to the SAT and are therefore always forwarded to all segments. Frames sent to multicast addresses are treated like broadcasts by simple devices; more sophisticated switches often handle multicast properly and then send multicast frames only to the registered multicast recipients.

In this way, switches automatically learn the MAC addresses of the devices in the connected segments.
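The learning and forwarding behaviour described above can be sketched in a few lines of Python (a simplified illustrative model; the `LearningSwitch` class and port numbering are hypothetical, not a real device API):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    """Toy model of a layer-2 learning switch with a Source Address Table."""

    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.sat = {}  # Source Address Table: MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded to."""
        # Learn: remember which port the sender is reachable on.
        self.sat[src_mac] = in_port
        # Broadcasts and unknown destinations are flooded to every
        # port except the one the frame arrived on (learning phase).
        if dst_mac == BROADCAST or dst_mac not in self.sat:
            return [p for p in self.ports if p != in_port]
        out_port = self.sat[dst_mac]
        # Sender and receiver on the same port: no forwarding needed.
        return [] if out_port == in_port else [out_port]

sw = LearningSwitch(4)
print(sw.receive(0, "08:00:20:ae:fd:7e", "00:11:22:33:44:55"))  # flooded
print(sw.receive(1, "00:11:22:33:44:55", "08:00:20:ae:fd:7e"))  # unicast: [0]
```

After the first frame the switch has learned only the sender's address, so the reply can already be delivered to a single port.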

Different ways of working

An Ethernet frame contains the destination address in the first 48 bits (6 bytes) after the so-called preamble. Forwarding to the target segment can thus already begin after the first six bytes have been received, while the rest of the frame is still arriving. A frame is between 64 and 1518 bytes long; the last four bytes contain a CRC checksum (cyclic redundancy check) for detecting faulty frames. A data error in a frame can therefore only be detected after the entire frame has been read.

Depending on the requirements for delay time and error detection, switches can therefore operate in different ways:

  • Cut-through - The switch forwards the frame as soon as the destination address (the first six bytes) has been read, while the rest of the frame is still being received. Latency is minimal, but faulty frames are propagated, since the CRC checksum at the end of the frame is not checked before forwarding.
  • Store-and-Forward - The safest, but also the slowest, switching method with the largest latency; it is supported by every switch. The switch first receives the entire frame (stores it; "store"), calculates its checksum, and then makes its forwarding decision based on the destination MAC address. If the calculated checksum differs from the CRC value stored at the end of the frame, the frame is discarded; in this way no faulty frames propagate through the local network. Store-and-forward was long the only possible mode of operation when sender and receiver used different transmission speeds or duplex modes, or different transmission media. The latency in bits is equal to the total frame length - for Ethernet, Fast Ethernet and Gigabit Ethernet in full-duplex mode at least 576 bits, with the maximum frame size (12,208 bits) as upper limit - plus the response time of the switch. Today there are also switches that support a cut-through/store-and-forward hybrid mode, which reduces the latency even when switching from a fast to a slow port.
  • Error-Free-Cut-Through / Adaptive Switching - A combination of several of the methods above, generally implemented only in more sophisticated switches. The switch first operates in cut-through mode and sends the frame on to the correct port. At the same time it keeps a copy of the frame in memory and calculates a checksum over it. If this does not match the CRC value stored in the frame, the switch cannot flag the faulty frame directly, but it can increment an internal counter tracking the error rate per unit of time. If too many errors occur within a short time, the switch falls back into store-and-forward mode; if the error rate drops low enough again, it switches back to cut-through mode. It can also temporarily switch to fragment-free mode if too many fragments shorter than 64 bytes arrive. If sender and receiver use different transmission speeds or duplex modes, or different transmission media (e.g. fiber to copper), store-and-forward must likewise be used.
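The latency difference between the two basic modes follows directly from the frame sizes given above. A back-of-the-envelope sketch (illustrative only; it ignores the switch's internal processing time):

```python
# Store-and-forward must receive the whole frame before forwarding starts;
# cut-through can start after the 48-bit destination address.
# Dividing bits by Mbit/s conveniently yields microseconds.

def store_and_forward_latency_us(frame_bits: float, link_mbps: float) -> float:
    """Serialization delay of receiving the complete frame."""
    return frame_bits / link_mbps

def cut_through_latency_us(link_mbps: float, header_bits: int = 48) -> float:
    """Delay until the destination address has been read."""
    return header_bits / link_mbps

# Minimum frame (576 bits incl. preamble) on Fast Ethernet (100 Mbit/s):
print(store_and_forward_latency_us(576, 100))    # 5.76 µs
print(store_and_forward_latency_us(12208, 100))  # 122.08 µs (maximum frame)
print(cut_through_latency_us(100))               # 0.48 µs
```

This is why cut-through pays off most for large frames and fast links, while the absolute difference stays in the microsecond range.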

Today's networks distinguish two architectures according to the uniformity of the port speeds: symmetric and asymmetric switching. In asymmetric switching, i.e. when transmit and receive ports have different speeds, the store-and-forward principle is applied. In symmetric switching, i.e. the coupling of equal Ethernet speeds, the cut-through approach can be used.

In the early days of switching technology, there were two variants: port switching and segment switching. This distinction plays only a minor role in modern networking, since virtually all commercially available switches support segment switching on all ports.

  • A port switch offers only one SAT entry, for a single MAC address, per port. Only end devices (servers, routers, workstations) may therefore be attached to such a port - no further segments, i.e. no bridges, hubs, or switches (behind which several MAC addresses would appear). See MAC flooding. In addition, there was often an uplink port to which this restriction did not apply; this port usually had no SAT entry of its own but was simply used for all MAC addresses not assigned to another local port. Such switches normally worked with the cut-through method. Despite these seemingly adverse limitations there were also benefits: the switches got by with very little memory (at lower cost), and because of the minimal size of the SAT the switching decision could be made very quickly.
  • All newer switches are segment switches on every port and can manage many MAC addresses per port, i.e. connect further network segments. There are two different SAT configurations: either each port has its own table, for example with a maximum of 250 addresses, or there is one common SAT for all ports, for example with a maximum of 2,000 entries. Caution: some manufacturers specify 2,000 address entries but mean 8 ports with a maximum of 250 entries per port.

Multiple switches in a network

With older switches, connecting several devices usually had to be done either via a special uplink port or via a crossover cable; newer switches, and all Gigabit Ethernet switches, support Auto MDI-X, so they can be coupled without special cables. Uplink ports are often, but not necessarily, implemented in a faster or higher-grade (Ethernet) technology than the other ports (e.g. Gigabit Ethernet instead of Fast Ethernet, or fiber-optic cable instead of copper twisted-pair cable). Unlike hubs, switches can be connected to one another in almost any number. The upper limit has nothing to do with a maximum cable length but depends on the size of the address table (SAT). In current entry-level devices, 500 entries (or more) are common, which limits the maximum number of nodes (~computers) to that same 500. If several switches are used, the device with the smallest SAT limits the maximum number of nodes. High-quality equipment can easily handle thousands of addresses. If the maximum number is exceeded, the same thing happens as with MAC flooding: performance breaks down drastically.

To increase reliability, connections can be made redundant. Duplicate frame delivery and switching loops are prevented by first running the Spanning Tree Protocol (STP). Another way to build a network with redundant loops while simultaneously improving performance is meshing. Here arbitrary loops may be formed between meshing-capable devices; to increase performance, all loops, including partial loops, can then continue to be used for unicast traffic (similar to trunking), and no spanning tree is formed. Multicast and broadcast traffic must be treated separately by the meshing switches and may still be sent on only one of the available mesh connections.

If, without further precautions, a switch in a network is connected to itself, or several switches are connected together cyclically, the result is called a switching loop. Through endless duplication and circulation of data packets, such faulty cabling usually leads to total failure of the network.

A better use of redundant connections is port bundling (English: trunking, bonding, EtherChannel - depending on the manufacturer), in which up to eight similar links can be connected in parallel to increase speed. Professional switches support this technique, which can be used switch-to-switch as well as switch-to-server. A standard has now been adopted (IEEE 802.1AX-2008); only the interconnection of switches from different manufacturers remains problematic.
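How frames might be distributed over such a bundle can be illustrated with a small sketch: a hash over the address pair selects one member link, so all frames of one conversation travel over the same physical link and keep their order. The function and hashing scheme here are illustrative assumptions, not a vendor algorithm:

```python
import zlib

def trunk_member(src_mac: str, dst_mac: str, members: list) -> int:
    """Map a conversation (source/destination pair) onto one physical
    link of the bundle. The same pair always maps to the same link,
    which preserves frame order within a conversation."""
    key = f"{src_mac}-{dst_mac}".encode()
    return members[zlib.crc32(key) % len(members)]

ports = [1, 2, 3, 4]  # four bundled links acting as one logical port
link = trunk_member("08:00:20:ae:fd:7e", "00:11:22:33:44:55", ports)
```

Distributing whole conversations rather than individual frames is a common design choice precisely because reordering within a flow would hurt higher-layer protocols.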

Stacking is a technique in the switching environment by which several independent, stacking-capable switches are configured into one common logical switch with a higher number of ports and common management. Stacking-capable switches have special ports, known as stacking ports, which usually operate at very high transmission rates and low latency. For stacking, the switches - which as a rule must be from the same manufacturer and the same model line - are connected with a special stack cable. A stacking connection is usually the fastest connection between multiple switches and carries management information in addition to data. Such interfaces may well be more expensive than standard high-speed ports, which can of course also be used as uplinks; uplinks are always possible, but not all switches support stacking.

Architectures

The core of a switch is the switching fabric, which transfers frames from the input port to the output port. The switching fabric is implemented entirely in hardware to ensure low latency and high throughput. In addition to the pure forwarding task, it collects statistical data such as the number of transferred frames, throughput, and errors. The switching activity can be performed in three ways:

  • Shared memory switching: This concept is modeled on the idea that computer and switch work in a similar way: they receive data via input interfaces, process it, and pass it on via output ports. Accordingly, a received frame signals its arrival to the switch processor via an interrupt. The processor extracts the destination address, looks up the appropriate output port, and copies the frame into that port's buffer. This yields a simple speed estimate: since each frame must be both written into and read out of the memory, if N frames/s can be transferred to and from the memory, the forwarding rate cannot exceed N/2 frames/s.
  • Bus switching: In this approach, the receiving port transfers a frame to the output port without processor intervention, via a common bus. The bus forms the bottleneck, since only one frame at a time can be transferred over it. A frame that arrives at an input port and finds the bus busy is therefore placed in the input port's queue. Since each frame must cross the bus separately, the switching speed is limited by the bus throughput.
  • Matrix switching: The matrix principle is one way to overcome the throughput limitation of the shared bus. A switch of this type consists of a switching network that connects the N input and output ports via 2N lines. A frame arriving at an input port travels along the horizontal bus until it reaches the crossing point with the vertical bus leading to the desired output port. If that line is occupied by the transmission of another frame, the frame must be placed in the input port's queue.
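The throughput limits implied by the three fabric designs can be summarized in a small sketch (an idealized model for comparison, not vendor figures):

```python
def shared_memory_bound(mem_frames_per_s: float) -> float:
    # Every frame is written into and read out of memory (two operations),
    # so the fabric forwards at most half the memory's frame rate.
    return mem_frames_per_s / 2

def bus_bound(bus_frames_per_s: float) -> float:
    # Only one frame at a time can cross the shared bus.
    return bus_frames_per_s

def matrix_bound(port_frames_per_s: float, n_ports: int) -> float:
    # Up to N disjoint input/output pairs can transfer in parallel.
    return port_frames_per_s * n_ports

print(shared_memory_bound(2_000_000))  # 1,000,000 frames/s
print(matrix_bound(100_000, 8))        # 800,000 frames/s
```

The matrix fabric scales with the number of ports, which is why it dominates in non-blocking designs.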

Benefits

Switches have the following advantages:

  • When two network devices transmit at the same time, there is no data collision (see CSMA/CD), since the switch can carry both streams simultaneously over its internal backplane. If data arrives at a port faster than it can be sent onward, it is buffered. If possible, flow control is used to prompt the sender(s) to slow down. If 8 computers are connected via an 8-port switch and pairs of them exchange data at full speed, four full-duplex connections are established, giving 8x the throughput of a comparable hub, in which all devices share the maximum bandwidth: 4 x 200 Mbit/s as opposed to 100 Mbit/s. However, two aspects argue against this calculation: first, the internal processors - especially in the low-cost segment - are not always designed to serve all ports at full speed; second, a hub with several computers will never achieve 100 Mbit/s either, because the busier the network, the more collisions occur, which throttles the usable bandwidth further. Depending on manufacturer and model, the actually achievable throughput rates lie more or less significantly below the theoretical 100%; for affordable low-cost devices, rates between 60% and 90% are quite common.
  • The switch records in a table which station can be reached on which port. For this purpose, the source MAC addresses of the passing frames are stored during operation. This allows data to be forwarded only to the port where the receiver actually is, which prevents eavesdropping via the promiscuous mode of a network card, as was still possible in networks with hubs. Frames with a (still) unknown destination MAC address are treated like broadcasts and forwarded to all ports except the source port.
  • Full-duplex mode can be used, so that a port can transmit and receive data at the same time, doubling the transmission rate. Because collisions are not possible in this case, the physical transmission rate is utilized better.
  • Speed and duplex mode can be negotiated independently at each port.
  • Two or more physical ports can be combined into one logical port (HP: bundling, Cisco: EtherChannel) in order to increase bandwidth; this can be done statically or by dynamic methods such as LACP or PAgP.
  • A physical switch can be divided into multiple logical switches using VLANs. VLANs can be spanned across multiple switches (IEEE 802.1Q).
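The "8x a hub" figure from the first point above is simple arithmetic (idealized, ignoring the real-world limits the text mentions):

```python
def switch_aggregate_mbps(pairs: int, link_mbps: int = 100,
                          full_duplex: bool = True) -> int:
    """Aggregate throughput of an ideal switch: each communicating pair
    gets its own collision-free link, and full duplex doubles it
    (sending and receiving simultaneously)."""
    per_pair = link_mbps * (2 if full_duplex else 1)
    return pairs * per_pair

hub_mbps = 100                          # one shared collision domain
switch_mbps = switch_aggregate_mbps(4)  # 4 pairs x 200 Mbit/s
print(switch_mbps, switch_mbps / hub_mbps)  # 800 8.0
```

As the text notes, real devices fall short of this ideal because of fabric limits, while a hub falls short of its own 100 Mbit/s because of collisions.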

Disadvantages

  • A disadvantage of switches is that debugging in such a network can be more difficult. Frames are no longer visible on all strands of the network but, ideally, only on those that actually lead to the destination. To still allow the administrator to observe network traffic, some switches support port mirroring: the administrator tells the (manageable) switch which ports are to be observed, and the switch then sends copies of the frames from the observed ports to a selected port, where they can be recorded, for example by a sniffer. To standardize port mirroring, the SMON protocol was developed, described in RFC 2613.
  • Another disadvantage is latency, which is higher in switches (100BaseTX: 5-20 microseconds) than in hubs (100BaseTX: < 0.7 microseconds). Since CSMA offers no guaranteed access times anyway, and the differences are in the range of millionths of a second (microseconds, not milliseconds), this rarely matters in practice. Where a hub simply passes an incoming signal on to all network devices, a switch first has to find the correct output port using its MAC address table; this saves bandwidth but takes time. In practice, however, the switch has the advantage, since the absolute latencies in an unswitched network easily exceed those of a full-duplex (almost collision-free) switched network, due to the inevitable collisions of even a lightly loaded network. (The highest speed is achieved neither with hubs nor with switches but by connecting two network devices directly with a crossover cable; with one network card per computer, however, this method limits the number of network participants to 2.)
  • Switches are star distributors with a star network topology, and in Ethernet (without port bundling, STP, or meshing) there is no redundancy. If a switch fails, communication between all participants of the (sub)network is interrupted; the switch is then a single point of failure. A remedy is port bundling with failover, where each computer has at least two LAN cards and is connected to two switches. For port bundling with failover, however, LAN cards and switches with appropriate software (firmware) are required.

Security

In classic Ethernet over thin or thick coaxial cable, just as in networks using hubs, intercepting all network traffic was still relatively easy. Switches were initially considered much safer. However, there are methods to intercept other people's traffic even in switched networks, without the cooperation of the switch:

  • MAC flooding - The memory in which the switch learns the MAC addresses attached to each port is limited. MAC flooding exploits this by overloading the switch with fake MAC addresses until its memory is full. Many switches then drop into a fail-open mode in which they again behave like a hub and forward all frames to all ports. Various manufacturers - again almost exclusively in switches of the mid to high price range - have implemented protective measures against MAC flooding. As a further safety measure, most managed switches allow a list of authorized source MAC addresses to be configured for a port. Protocol data units (here: frames) with an unauthorized source MAC address are then not forwarded and can trigger the shutdown of the port concerned (port security).
  • MAC spoofing - Here the attacker sends frames with a foreign MAC address as the sender. The victim's entry in the Source Address Table is thereby overwritten, and the switch subsequently sends all traffic for this MAC address to the attacker's switch port. The remedy, as above, is a fixed assignment of MAC addresses to switch ports.
  • ARP spoofing - Here the attacker exploits a weakness in the design of ARP, which is used to resolve IP addresses to Ethernet addresses. A computer that wants to send a frame via Ethernet must know the destination MAC address, which it requests via ARP (ARP request broadcast). If the attacker answers with its own MAC address for the requested IP (not its own IP, hence the name spoofing) and is faster than the actual owner of that IP, the victim will send its frames to the attacker, who can now read them and, if desired, forward them to the original destination. This is not due to an error in the switch: a layer-2 switch knows no protocols above Ethernet and makes its forwarding decision solely on the basis of MAC addresses. A layer-3 switch, if it is to be auto-configuring, must rely on the ARP messages it overhears and therefore also learns the fake address; however, a managed layer-3 switch can be configured so that the assignment of switch ports to IP addresses is fixed and not influenced by ARP.
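The first attack can be made concrete with a toy simulation of a fail-open switch (a deliberately simplified model for illustration, not an attack tool; the class name and capacity figure are hypothetical):

```python
class FailOpenSwitch:
    """Learning switch whose SAT has limited capacity and that 'fails
    open' (floods every frame like a hub) once the table is exhausted."""

    def __init__(self, num_ports: int, sat_capacity: int):
        self.ports = list(range(num_ports))
        self.sat = {}  # MAC -> port
        self.capacity = sat_capacity
        self.fail_open = False

    def receive(self, in_port, src, dst):
        if src not in self.sat:
            if len(self.sat) < self.capacity:
                self.sat[src] = in_port
            else:
                self.fail_open = True  # table full: behave like a hub
        if self.fail_open or dst not in self.sat:
            return [p for p in self.ports if p != in_port]
        out = self.sat[dst]
        return [] if out == in_port else [out]

sw = FailOpenSwitch(4, sat_capacity=100)
sw.receive(1, "victim", "anyone")        # victim is learned on port 1
# Attacker on port 3 floods the table with fake source addresses ...
for i in range(200):
    sw.receive(3, f"fake-{i}", "victim")
# ... after which frames for the victim are flooded to every port,
# including the attacker's:
print(sw.receive(0, "client", "victim"))  # [1, 2, 3]
```

Port security as described above would counter this by refusing to learn (or by shutting the port down for) unauthorized source addresses.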

Parameters

  • Forwarding rate: how many frames per second can be read, processed, and forwarded
  • Filtering rate: the number of frames that are examined per second
  • Number of manageable MAC addresses (structure and maximum size of the Source Address Table)
  • Backplane throughput (switching fabric): capacity of the buses (also crossbar) within the switch
  • VLAN capability and flow control
  • Management options such as error monitoring and signaling, port-based VLANs, tagged VLANs, VLAN uplinks, link aggregation, meshing, Spanning Tree Protocol, bandwidth management, etc.

History

The development of Ethernet switches began in the late 1980s. With better hardware and various applications with high bandwidth requirements, 10-Mbit networks were quickly reaching their limits, both in data-center operation and in campus networks. To make network traffic more efficient, networks began to be segmented into subnetworks via routers. Although this reduced collisions and increased efficiency, it also increased the complexity of networks and raised installation and administration costs significantly. The bridges of the time were no real alternative, since they had only a few ports (usually two) and worked slowly: data throughput was relatively low and latency too high. This was the birth of the first switch: the first commercially available model had seven 10-Mbit/s Ethernet ports and was offered in 1990 by the U.S. start-up Kalpana (later acquired by Cisco). The switch had a higher data throughput than Cisco's high-end routers and was much cheaper. In addition, no restructuring was necessary: it could be placed easily and transparently into the existing network. This is how switched networks really took off. Soon afterwards Kalpana developed the EtherChannel port-trunking method, which makes it possible to bundle multiple ports to increase data throughput and to use them jointly as an uplink or backbone. In the mid-1990s, Fast Ethernet switches (non-blocking, full duplex) reached the market. For Gigabit Ethernet, repeater hubs were still defined in the standard, but effectively none exist; in 10-Gigabit networks no hubs are defined at all - everything is switched. Today, segments with several thousand computers can be connected simply and efficiently with switches, without additional routers. Switches are used in business and private networks as well as in temporary networks such as LAN parties.
