Local Networks and the Internet

Laurent Toutain


Local Networks and the Internet: From Protocols to Interconnection

This title covers the most commonly used elements of Internet and Intranet technology and their development. It details the latest developments in research and covers new themes such as IPv6, MPLS and IS-IS routing, as well as explaining the role of standardization committees such as the IETF, the IEEE and the ITU. The book is illustrated with numerous examples and applications which help the reader to place protocols in their proper context.


Page count: 776

Year of publication: 2013




Table of Contents

Chapter 1. Introduction

1.1. Why a network?

1.2. Network classification

1.3. Interconnection networks

1.4. Examples of network utilization

1.5. The Internet network

1.6. Structure of this book

Chapter 2. Standardization and Wiring

2.1. The IEEE 802 committee

2.2. The standards

2.3. IEEE 802.1 addressing

2.4. Cabling rules

Chapter 3. Ethernet and IEEE 802.3 Protocols

3.1. History

3.2. Physical level

3.3. The fundamentals of CSMA/CD

3.4. Frame format

3.5. The 10BASE5 network

3.6. Devices for the 10BASE2

3.7. Twisted pair equipment

3.8. Fiber optics

3.9. Examples of Ethernet frames

3.10. Evolution of the Ethernet

Chapter 4. The LLC and SNAP Sublayers

4.1. Definition

4.2. LLC frames

4.3. Example

4.4. The SNAP layer

Chapter 5. Interconnection by Bridges: The Spanning Tree Algorithm

5.1. Introduction

5.2. Transparent filtering bridges

5.3. Spanning tree algorithm

Chapter 6. Internet

6.1. The Internet players

Chapter 7. IP Protocols

7.1. Implementation of the TCP/IP protocols

7.2. Internet addressing

7.3. The IPv4 protocol (RFC 791, RFC 1122)

7.4. The ICMP (Internet Control Message Protocol) (RFC 792)

7.5. The IPv6 protocol

7.6. Tunnels

7.7. Configurations

7.8. Configuration of a Cisco router

7.9. IPv4 and multicast

Chapter 8. Level 4 Protocols: TCP, UDP and SCTP

8.1. Port notion

8.2. TCP (Transmission Control Protocol) (RFC 793)

8.3. The three protocol phases

8.4. The options

8.5. Adaptation to the environment

8.6. TCP flow control

8.7. Study of TCP by simulations

8.8. Network consideration of TCP

8.9. The UDP (user datagram protocol) (RFC 768)

8.10. SCTP

Chapter 9. Address Resolution and Automatic Configuration Protocols

9.1. Introduction

9.2. The address resolution protocol (ARP)

9.3. Neighbor discovery in IPv6

9.4. Initialization and auto-configuration

9.5. The domain name server (DNS) (RFC 1034, RFC 1035)

Chapter 10. Routing Protocols

10.1. Routing tables

10.2. Equipment classification

10.3. Routing table configuration

10.4. Station or router?

10.5. High-speed router

10.6. Router classification

10.7. Routing protocols

10.8. Autonomous systems

Chapter 11. Internal Routing Protocols

11.1. The Distance Vector algorithm

11.2. Link State algorithm

11.3. The OSPF protocol

11.4. IS-IS

Chapter 12. External Routing Protocols

12.1. Path announcing

12.2. The interconnection points

12.3. The symmetry of routes

12.4. BGP (border gateway protocol)

12.5. Route selection rules

12.6. BGP traffic analysis

12.7. Reduction of oscillations

12.8. Routing limit in the Internet

Chapter 13. Virtual Local Networks

13.1. Definition

13.2. Multicast data management

13.3. Virtual networks

Chapter 14. MPLS (Multi Protocol Label Switching)

14.1. Routing protocols’ limits

14.2. MPLS header format

14.3. Principles of operation

14.4. MPLS label distribution protocols

14.5. Traffic engineering

Chapter 15. IP on Point-to-Point Links: PPP

15.1. Serial links

15.2. SLIP (Serial Line IP, RFC 1055)

15.3. PPP (point-to-point protocol, RFC 1661)

15.4. Configuration of routers

15.5. The RADIUS protocol

15.6. PPP over X.25 (RFC 1598)

15.7. PPP over high-speed networks

15.8. Bridging with PPP (RFC 1638)

15.9. ADSL network architecture

Chapter 16. Network Administration

16.1. Vocabulary and concepts

16.2. ASN.1 (Abstract Syntax Notation)

16.3. Definition of the MIB SNMP (RFC 1213)

16.4. Format of SNMPv1 messages (RFC 1157)

16.5. Formats of SNMPv2 messages (RFC 1905)

16.6. Examples of SNMPv1 traffic

16.7. MIB example

16.8. Other MIBs

Chapter 17. Security

17.1. Risks

17.2. Filtering routers

17.3. Bastion

17.4. Proxy

17.5. NAT (Network Address Translator, RFC 1631)

Chapter 18. Flow Management

18.1. Quality of service

18.2. Flow notion

18.3. Flow management

18.4. Flow measurements

18.5. Integration of services on the Internet

18.6. Differentiated services

18.7. Perspectives

Bibliography

Index

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2011

The rights of Laurent Toutain and Ana Minaburo to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Toutain, Laurent.

Local networks and the internet / Laurent Toutain, Ana Minaburo.

p. cm.

Summary: "This title covers the most frequently used elements of the Internet and Intranet and their development. It details the latest developments in research and covers new themes such as IP6, MPLS, and IS-IS routing, as well as explaining the function of standardization committees such as IETF, IEEE, and UIT. The book is punctuated with numerous examples and applications which will help the reader to place protocols in their proper context"-- Provided by publisher.

Includes bibliographical references and index.

ISBN 978-1-84821-068-4 (hardback)

1. Intranets (Computer networks) 2. Internet. 3. Computer network protocols. I. Minaburo, Ana. II. Title.

TK5105.875.I6T68 2011

004.6’2--dc22

2010046515

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-068-4

Chapter 1

Introduction

1.1. Why a network?

A network transmits information from point to point: within an office, a company, a school, an aircraft carrier or, more generally, from anywhere on the planet. Very often associated with the Internet, networking has completely transformed the design of traditional computer systems. To remember this, one need only read the short story All the Troubles of the World1, in which the science-fiction writer Isaac Asimov offered his vision of the evolution of the computer industry. For the 2000s, he forecast a gigantic computer called Multivac, which would control the entire planet. He went as far as predicting the election of the world president by this computer. Asimov writes that it encompassed Washington D.C. and its suburbs, and that an army of civil servants was needed to run it.

To foresee the computer of the future, Asimov simply took the centralized computer systems of his day and scaled everything up: the size of the central units and the number of people needed to make them run. The footprint and the design and maintenance costs meant that this type of equipment remained rare, reserved for important research and tasks of general interest. Information was necessarily centralized at these points, and resorting to a network was pointless.

What we observe in the 21st century is radically opposed to Asimov’s vision. Systems are shrinking in size while becoming more powerful, more numerous and more specialized, and their maintenance is simpler and more limited. This dispersion of computing power and information is not due to miniaturization alone. Networks are not solely responsible for this spectacular change in the design of computer systems, but they allow all these different “small” systems to be interconnected so that they cooperate and exchange information. These systems, more flexible and, in the end, more powerful and better able to evolve, have gained acceptance.

Networks existed at the time Asimov wrote his story, but they were used to connect terminals to the central computer. In today’s networks, information processing is most often done locally, i.e. on the computer sitting on the user’s desk or in the company, whereas the information may originate at the other end of the campus or of the planet. The information transported by these networks is not directly usable by a human being: it is meant for programs that must process it before a human being can access it. With the increase in available network rates, we are witnessing the growing integration of data types that are new to computer scientists, such as voice or moving pictures.

This vision of large, expensive, centralized computing still has consequences today, which can be found in particular in the architecture of the Internet. Thus, the Internet Protocol (IP), used to transport data from one network point to another, was conceived in that era. It was never planned that it would become the quasi-universal protocol we know today. The addresses were dimensioned to support slightly more than four billion pieces of equipment, a number that seemed unrealistic in the 1970s, but that causes enormous problems nowadays, since we are approaching saturation of the addressing space. Work is underway to replace the current protocol (IPv4) with a version allowing a quasi-unlimited number of equipment addresses (IPv6). In light of the scope of this task, this will take several years.
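The “slightly more than four billion” figure follows directly from the 32-bit length of an IPv4 address; a quick sketch of the arithmetic (the variable names are ours, for illustration only):

```python
# Back-of-the-envelope address-space arithmetic for IPv4 vs. IPv6.
IPV4_BITS = 32    # length of an IPv4 address
IPV6_BITS = 128   # length of an IPv6 address

ipv4_addresses = 2 ** IPV4_BITS   # slightly more than four billion
ipv6_addresses = 2 ** IPV6_BITS   # about 3.4e38: "quasi-unlimited" in practice

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: about {ipv6_addresses:.1e} addresses")
```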

Mobility was also an element not taken into account at the start. In the 1970s, with computers weighing several tons and confined to air-conditioned rooms, it was unrealistic to move them from network to network. With the advent of wireless technologies and the miniaturization of equipment, these constraints have been lifted, but addressing in the Internet does not take this into account, leading to calls for a complete overhaul of the Internet’s architecture. Studies are underway in the standardization bodies.

This book, through a vision organized around the Internet, will describe the main protocols, such as local Ethernet or wireless networks, and architectures such as ADSL. The organizations which participate in this standardization effort or help run the network will also be described.

1.2. Network classification

The Internet is often called a network of networks because it offers a common exchange format allowing switching from one network technology to another. These networks, for which the Internet is the link, are very diverse, but several criteria facilitate their classification.

1.2.1. Function of distance

This first criterion is the area covered. Technologies can be divided into several categories, whose boundaries are relatively blurred and can shift over time with technological advances.

They are designated by WAN (Wide Area Network), MAN (Metropolitan Area Network) and LAN (Local Area Network). Table 1.1 indicates the characteristics of these different types of networks. In more recent classifications, metropolitan networks can be considered as access networks.

Table 1.1. Different types of networks

1.2.1.1. Local networks

A LAN is mainly characterized by its reduced scope: the distances covered are relatively short, and its resistance to scalability is limited, i.e. performance drops as the number of pieces of equipment connected increases.

A local network thus generally serves a company office, floor or building. The network and the machines are usually administered by the same service. The cost of using a local network is mainly that of the computers and the cabling.

Ethernet and Wi-Fi networks are the most common local network technologies. Historically, the range of an Ethernet network was theoretically 2.5 km, but with the progress made in electronics, in particular the decreased cost and increased reliability of interconnection equipment, the size of networks has greatly shrunk. Current cabling rules state that a wired network should not span more than a few hundred meters. For wireless networks, coverage is around 10 meters.

In parallel with the decrease in network size, the number of directly connected users has also fallen. Historically, a network could connect anywhere from two pieces of equipment up to a few hundred users. Currently, 50 users is an acceptable number. On wired networks (such as Ethernet), switching techniques that avoid sharing the medium between pieces of equipment are bringing us back to point-to-point communication between two pieces of equipment on the network: the station and the switch (see Chapter 3).

The data rate is usually 100 Mbit/s for wired networks and varies between a few tens of Mbit/s and 100 Mbit/s for wireless networks. Except under special circumstances, increasing the data rate for these types of networks is no longer a necessity, since a rate of 100 Mbit/s is, in the case of wired networks, less and less shared between users and increasingly dedicated to each piece of equipment. Nevertheless, gigabit technologies are spreading rapidly.

On the other hand, some networks are not increasing their speed but prefer to limit energy consumption. This is the case for Wireless Sensor Networks (WSN), which may interconnect equipment at 250 kbit/s.

1.2.1.2. Metropolitan network or access network

The separation between a local network and a metropolitan network can be very blurred. The functioning principles are sometimes quite similar. Metropolitan networks or MANs allow us to connect a certain number of sites together or to attach them to a public network. They are often referred to as a backbone.

Access networks, such as ADSL (asymmetric digital subscriber lines) or WiMAX, can be included in this category because they interconnect local networks to public networks.

If local networks need to be interconnected, the administration of a metropolitan network can be given to a common structure that groups all users or even the company itself if it is the only user of the network. Billing is flat and not based on the number of bytes transmitted. It covers the network use, maintenance and administration costs. If it consists of connecting to public networks, the cost can be based on the connection data rate.

For access networks, data rates are usually lower than for local and public networks. They generally constitute a bottleneck (sometimes deliberate when billing is based on data rate or technology).

Since such a network is most often implemented with fiber optics and built on a protected site, the error rate is relatively low, transmission delays are reduced and routing is quite simple. The old FDDI (Fiber Distributed Data Interface) technology, covering a distance of 200 km and offering a data rate of 100 Mbit/s, is an example of a metropolitan network.

Wired access networks, such as ADSL, are built around a point-to-point topology, although fiber optic access networks, for example, may use a shared access mode. For radio networks the question does not arise, given the broadcast nature of the medium.

1.2.1.3. Public networks

These networks (WANs) are usually meshed networks made of high data rate point-to-point links between interconnection nodes.

Historically, the data rate was relatively low: as low as 50 bit/s for the Telex network, and up to 2 Mbit/s for users. One of the most important revolutions in networking has been the large rise in the data rate of this type of network, which for years represented a bottleneck for communications. Nowadays, with the progress of transmission technologies, data rates can reach several gigabit/s or even a few terabit/s.

Even if, in most cases, these networks no longer constitute a bottleneck in the transmission part, switching, i.e. the process of directing data in the network onto one link or another, remains difficult. The nature and length of the lines make the error rate relatively high, so error-detecting or error-correcting codes must be used, which further reduce the useful data rate. Errors caused by noise on the transmission line are becoming rare, however; losses are most often due to saturation of the intermediate equipment, which drops information.

Transmission delay is quite large: in addition to the propagation delay (due, for example, to the use of a satellite in some networks), the message is copied from node to node until it reaches its destination.

Lastly, routing, i.e. determining the path the information must follow to reach its destination, may be very complex. It consists of finding the best path: the one that, from the user’s point of view, maximizes the data rate and minimizes the transmission delay and, from the operator’s point of view, balances the load over all the network links. Ideally, at each moment, each node would have a complete view of the network. This would lead to a paradoxical situation where all the network capacity would be used to transmit the state of the network to the different nodes, leaving no room for useful traffic. Relatively complicated algorithms must therefore be used to approach optimal routing.
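To illustrate what “finding the best path” involves, here is a minimal sketch of Dijkstra’s shortest-path algorithm, the kind of computation underlying the link-state routing protocols covered in Chapter 11. The four-node topology and its link costs are invented for the example:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` over a weighted graph given as
    {node: {neighbor: link_cost}} -- the computation each routing node
    performs on its (necessarily partial) view of the network."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical four-node backbone with link costs
net = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note that each node here is given the whole graph; the real difficulty discussed above is precisely that distributing this view consumes network capacity.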

1.2.2. Function of the topology

Different topologies, i.e. network shapes, can be used to classify the types of network. Each topology has its strengths and weaknesses. Each topology has different corresponding access methods with their own physical medium. Figure 1.1 tries to exhaustively represent the different topologies that can be found. Only a small number of these possibilities will really be employed in network architectures:

— point-to-point links are the easiest links to operate since they do not require addressing to identify the transmitter (the message always comes from the other end) or the receiver (it propagates to the other end). In general, these links are bi-directional and do not require access management mechanisms. As soon as a piece of equipment wants to transmit a message, it can transmit it on the dedicated medium. Unfortunately, a point-to-point link only reaches one piece of equipment. To allow us to build a network, several architectures are built around point-to-point links:

Figure 1.1. Different topologies

     – complete mesh: this consists of putting point-to-point links in place between all the pieces of equipment that want to communicate. This solution is not usually economically viable because there are underused links, even in the case of local networks,

     – star: this consists of converging all point-to-point links towards a central piece of equipment that is in charge of retransmitting information towards the destination(s). This architecture is quite popular since it matches wiring used in pre-wired buildings. Wired links leaving the desks converge towards a wiring cabinet. This type of architecture usually requires an address to help identify the packet transmitter and receiver. The current Ethernet implementations are based on this type of architecture. The central equipment is called a hub or a switch, depending on the technology used,

     – ring: a ring is made of point-to-point links between all the stations that make the network form a loop. This topology was popular at one time with the token ring and FDDI, but it requires a complex management of the rights to talk. It has now been abandoned in the architecture of local networks. On the other hand, since there are two possible paths to go from one point to another, this architecture is more robust when a link is broken. SDH networks use it for this reason;

— multipoint: a multipoint link enables us to join several pieces of equipment simultaneously. This is the case, for example, with older versions of Ethernet, where a coaxial cable connects all the equipment. A radio medium, such as that of a Wi-Fi network, is naturally multipoint, since all pieces of equipment share the same frequency to communicate. The same situation arises when data are transmitted over electrical wires (by PLC, or power line communication).

Addressing is important in this type of network to identify where the message comes from and to which piece of equipment it is addressed. The address must necessarily be unique to identify the piece of equipment, but it does not need to specify its location (contrary to a postal address, which allows us to hierarchically find the destination), so the broadcasting network can send it to all the equipment. The multipoint network can be built from point-to-point links. For example, in the case of a star architecture, if the central equipment retransmits the binary data received on a cable towards other media, this allows the building of a broadcast network. Hubs used by Ethernet implement this function.

— NBMA (non-broadcast multiple access) networks also interconnect several pieces of equipment, but the absence of broadcast makes their localization more difficult. These networks are usually built around a mesh architecture, such as the telephone network.

As in broadcast networks, each piece of equipment must have a unique address on the network. If the network is relatively large, the address has to be structured hierarchically to facilitate equipment localization. The structure can be based on geography (continent number, followed by country, network, switch, user, etc.). Since broadcast cannot be used to locate a correspondent, centralized or distributed directories must be used to find the destination address. The Internet has chosen a particular hierarchical addressing that relies on the network topology rather than terrestrial geography.
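A minimal sketch of how hierarchical, prefix-based addressing helps localization, using Python’s standard ipaddress module; the prefixes and link names in this toy routing table are invented:

```python
# An Internet address carries its network's prefix, so locating a host
# means finding the most specific (longest) matching prefix.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "link-1",       # coarse aggregate
    ipaddress.ip_network("10.1.0.0/16"): "link-2",      # more specific subnet
    ipaddress.ip_network("0.0.0.0/0"): "default-link",  # default route
}

def next_hop(addr):
    host = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if host in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))    # link-2 (most specific match)
print(next_hop("10.200.0.1"))  # link-1
print(next_hop("192.0.2.7"))   # default-link
```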

Lastly, we can add a functioning mode for broadcast networks called master/slave. These networks are mainly found in architectures associated with telephony (ISDN, GSM, etc.). All the pieces of equipment (slaves) hold a dialog with a central piece of equipment (master), and the latter manages the rights to talk. No direct dialog between two slaves is possible; dialog must imperatively go through the master. The main interest of this architecture is that it can simplify the implementation of the slaves. The access point mode of Wi-Fi networks corresponds to this functioning mode.

1.3. Interconnection networks

Figure 1.2 represents the network of the computer center of a large American university in the 1980s. It shows the interconnection between the different types of networks that have been previously presented:

— At the top right and in the middle, we find local Ethernet networks and a token ring, at 10 Mbit/s and 16 Mbit/s respectively. On these networks we find central processing units (CPUs), graphics terminals (X Window and, more recently, VNC or virtual network computing) and simple terminals. Each research project or administrative service has its own local network.

— These networks are connected by a 100 Mbit/s MAN (FDDI). Besides these shared pieces of equipment, all services are directly connected to the MAN to benefit from its transmission speed. In this example, teams can access a supercomputer or a storage server.

— Connections with the WAN are made with T1/T3 links located in the upper left of the figure.

Nowadays, media in shared access mode are increasingly being replaced by switching techniques. The topology is then a star. Pieces of equipment are connected by point-to-point links to a concentrator, which can itself be connected to others.

By adapting to technological evolution and increasing its data rates, Ethernet is increasingly imposing itself at the local network level as well as at the metropolitan network level, and it can also transmit data over longer distances. Nevertheless, Ethernet is still a level 2 technology with limited scalability: the more pieces of equipment, the lower the performance. This renders it incompatible with the needs of a public network.

Figure 1.2. Network of a computing center

In the example in Figure 1.2, if the network were designed today, at the LAN level network technologies such as the token ring would disappear, replaced by Ethernet. Ethernet itself would evolve from a shared bus to a star topology, connecting terminals point-to-point to a wiring cabinet where active equipment (hubs, switches, etc.) emulates the behavior of a shared medium. The rate would go up to 100 Mbit/s. FDDI would give way to 100 Mbit/s or even 1 Gbit/s Ethernet over fiber optics. The server equipment would still be connected to this network.

The technological evolution of Ethernet makes it usable over longer distances and at substantial data rates, in the order of 10 Gbit/s. Ethernet could thus replace the other level 2 technologies used in WANs, creating a single level 2 protocol and making interconnection easier.

1.4. Examples of network utilization

Applications using networks are numerous and cannot all be listed. Among the most often used in office tools, we find:

— Electronic mail services: users exchange messages (mainly text, sound data or images).

— File-sharing services: the network acts as a virtual disk. The user feels as if the data are on his or her machine when in fact they are located on a remote server. This service makes machines generic (each user finds the data of his or her working environment on any machine in the service) and facilitates the deployment and installation of new software (which only has to be copied onto the server’s disk).

— File transfer services: these must not be confused with the previous services. They consist of fetching a program or data from a remote machine on which the user has no account.

— Peripheral sharing services: these allow each user to access unique or expensive network resources, such as access to the Transpac network or a laser printer.

— Virtual terminal services: these allow the user to connect and work on a remote machine.

— Information services such as the Web: these allow a user to browse within multimedia and hypermedia information.

The Internet, like the postal network, is designed to deliver information to a receiver. Based on how the information exchange proceeds, applications (programs using the network) can be classified as client/server, push or streaming.

The client/server mode encompasses the most commonly used applications: the client is a program that sends a request to the server, which returns a result. The first applications to use this mode were, mainly:

— FTP (file transfer protocol), which allows us to copy files from one machine to another. It requires an account on the machine or an “anonymous” generic username to access public files.

— Telnet implementing a terminal emulation, i.e. enabling us to connect remotely to a machine to run commands in a textual environment.

— Web browsers (Internet Explorer, Netscape, etc.) are client applications with a simple, friendly interface that, thanks to hypertext links, does not require us to learn commands to get information. Web browsers contributed to the explosion of the Internet, to the point that, in the minds of many, the Web is synonymous with the Internet, when it is only one of its possible applications.

— etc.
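The client/server exchange described above can be sketched with a toy TCP service on the loopback interface; the one-line “protocol” here is invented for the example, not FTP or Telnet:

```python
# A client sends a request; the server returns a result -- the pattern
# shared by FTP, Telnet and the Web.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)          # wait for the client's request
        conn.sendall(b"HELLO " + request)  # return a result

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"world")                   # the request
print(client.recv(1024).decode())          # the result: HELLO world
client.close()
t.join()
listener.close()
```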

For “push technology” applications, data arrive at the application without any particular request being made. The first such application was electronic mail, or email. Beyond email, this type of application was fashionable at the end of the 1990s as a way to store programs or data on machines, so that the user could consult them while not connected to the network.

In the previous two modes, the data returned by the server are limited in size and known in advance; that is not the case for data streaming. Here, information is sent in a continuous flow to the user. This is required when broadcasting a radio or television program, but also for telephony applications. In general, the quality offered by this type of application varies widely, depending on how busy the network is. Numerous research works aim to improve this quality, in particular by reducing packet transit time through the network in order to provide the interactivity that a telephone conversation requires.

Finally, peer-to-peer (p2p) networks have recently appeared. In these networks, each machine is both server and client, which avoids going through a central server that gathers all the information. A user must know the address of at least one other machine, from which he or she learns the addresses of other members of the peer-to-peer network. The user can then search for the desired information by querying all the machines or, to better resist the scaling factor, certain nodes that centralize the index. Once the desired information has been located, the user connects to the machine that holds it.

1.5. The Internet network

The Internet hides the specificities of the different transmission supports by offering a unique access method and a uniform addressing plan based on the topology. The Internet relies on all the network types seen previously.

The Internet relies on the interconnection of networks. The model is strongly decentralized, each provider managing only part of the network. Some Internet access providers have built worldwide networks (or networks covering a large part of the planet). These networks share information (data and network locations) at interconnection points. Other access providers have a regional reach (continent, country, etc.); they allow smaller access providers to connect. Access providers use WANs or specialized links to build their infrastructure.

Clients are connected to an access provider. Based on their importance, they are connected to access providers covering various areas. An individual will be able to connect to a provider covering his or her town, this provider being connected to a provider covering the country, the latter to a provider covering Europe, and so on. A multinational company could be connected directly to a provider managing a worldwide network, or even be its own provider.

1.5.1. History

The Internet has modified communication between people by suppressing the barriers associated with distance; the same information is available instantly anywhere on the planet. Traffic on the network is exploding; it is said to double about every 100 days. The network is able to carry high-quality video, enabling us to watch movies on demand or access many television channels. Radio stations on the Internet gave a preview of this evolution. Distributed games using the network are also a successful application.

Contrary to legend, the initial goal of the work on what would become the Internet was not to implement a completely decentralized infrastructure able to resist nuclear attacks, but to unify connection techniques so that a terminal could connect remotely to computers from different manufacturers. At the time, each computer manufacturer defined its own standards and methods for connecting terminals to central computers. The project was to develop a universal network technology, flexible enough to adapt to different manufacturers’ equipment. In 1967, the first plans for a network called ARPANET were presented by an American defense agency, the Advanced Research Projects Agency. The first experiments started in September 1969 between UCLA (University of California, Los Angeles) and Stanford, near San Francisco. At the end of 1969, the network consisted of four nodes. In 1973, the first international connections were made, with University College in London and the Royal Radar Establishment in Norway. At this point, the network started to leave university and military circles.

The protocol used during the first years showed its limits, and on January 1, 1983 it was replaced by the TCP/IP protocols still used today. These protocols, integrated in the Unix BSD (Berkeley Software Distribution) operating system, spread very quickly in university circles. Together with the availability of Ethernet local network technology, sites started to put scientific publications online and researchers began to communicate via email. Information available on the network created a need for connections between sites, so that universities could share their resources. A virtuous circle started: the more users were connected to the network, the more interested people became, which attracted even more users.

In 1988, Van Jacobson proposed a solution to the problem of network saturation that limited network performance. This mechanism removed an argument put forward by proponents of the connected mode, mainly in Europe, who pushed for protocols such as X.25, which handle this type of problem better.

In 1992, the network connected more than one million pieces of equipment. Commercial traffic accounted for an increasingly important share, but the Internet also hit its biggest crisis: addresses had been wasted during the first years because nobody thought the network would take off to the extent it did. The rules for address attribution were reviewed to limit this waste. At the same time, work started to design a new version of the IP protocol (called IPv6), enabling a larger number of networks to be addressed. Complete saturation was forecast for about 2010, which gave enough time to prepare the transition. The same year, the Internet Engineering Task Force conferences started to be broadcast on the experimental multicast network: Mbone.

1.5.2. Functioning principle

1.5.2.1. Protocols

The Internet, as with other computer networks, has its origin in the works of Professor Leonard Kleinrock. This researcher, then at MIT, published a series of articles on the theory of packet communication at the beginning of the 1960s. A packet is a computer message of limited size made of two parts. In the first one, called the header, the sender puts information necessary for the network to forward the packet to the recipient. In the second part, the sender inserts a part of the information to be transmitted (piece of a file, an image, etc.).
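The split between header and payload can be sketched as a simple data structure. The sketch below is purely illustrative (the field names and the `fragment` helper are invented for the example, not an actual Internet packet format):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    # Header: information the network needs to forward the packet.
    source: str        # sender address (illustrative string form)
    destination: str   # recipient address
    # Payload: one piece of the information being transmitted.
    payload: bytes

def fragment(data: bytes, max_payload: int) -> list:
    """Split a message into packets of limited size."""
    return [Packet("A", "B", data[i:i + max_payload])
            for i in range(0, len(data), max_payload)]

packets = fragment(b"a file too large for one packet", 8)
# The recipient rebuilds the original message from the payloads.
reassembled = b"".join(p.payload for p in packets)
```

The size limit is what allows a link to be shared: between two payloads of one user, the network can forward payloads of another.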

Computer data transmission networks usually operate differently from the telephone network. In the latter, a circuit is established for the duration of the call. This mechanism guarantees the very small network delay essential to good interactivity in voice transmission. In packet mode networks such as the Internet, links are only allocated during data transmission. In Internet terminology, the pieces of equipment inside the network that copy packets from one link to another are called routers. Allocating a link only for the duration of a packet's transmission enables the system to interleave packets belonging to different users.

The work of Leonard Kleinrock has thus made it possible to increase the amount of information transmitted over a given infrastructure and to reduce transmission costs.

The information delay is no longer constant and depends heavily on the network load. Packets can stay longer in the routers, or even be destroyed if router memory is saturated. Nevertheless, with the considerable increase in transmission speeds, these constraints are less and less of an inconvenience. Much research work is trying to unify all network types around packet technology.

The Internet network relies mainly on two protocols. IP (Internet Protocol), also called IPv4, manages network interconnection. Pieces of equipment called routers interconnect the networks; they are kept as simple as possible in order to be robust. Routers analyze the recipient address contained in the packet header to find the information necessary to route it towards the recipient. TCP (Transmission Control Protocol) is only handled by the packet's sender and recipient. By adapting the transmission rate to the network capacity, and by detecting lost packets and retransmitting them, it makes data transmission reliable. TCP is mainly used to transmit computer data. Transmission of multimedia data (voice-over-IP, live radio or TV, etc.) is difficult over TCP because of these retransmission and rate-control mechanisms. For this type of application, we prefer to use a much simpler protocol called UDP (User Datagram Protocol). The Internet assumes that end equipment, mainly computers, has enough processing power to adapt to network conditions. In telephony the approach is different and consists of making terminal equipment as simple as possible for mass distribution.
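The contrast between the two transport services is visible directly in the socket programming interface: TCP requires a connection and delivers a reliable byte stream, while UDP simply sends independent datagrams. A minimal sketch using Python's standard library (the connect/sendto lines are commented out since they assume a reachable peer):

```python
import socket

# TCP: connection-oriented; the protocol itself retransmits lost
# packets and adapts the sending rate to the network capacity.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("example.org", 80))   # a connection must be set up first

# UDP: connectionless; each datagram is sent on its own, with no
# retransmission or rate control, which suits multimedia streams.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"sample", ("example.org", 5000))  # no prior connection

tcp.close()
udp.close()
```

Note how the complexity lives in the end systems: the code above runs on the hosts, while the routers in between only look at IP headers.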

At level 2, protocols enable data transmission by adapting to the specificities of the physical medium. Each of these protocols leads to the definition of a particular frame format and a specific addressing scheme. The role of a level 3 protocol is to hide these specificities. Contrary to level 2, it is better to have few level 3 protocols, so that the largest number of devices can communicate, but also to offer a unique programming interface that facilitates the development of applications.

The IP implemented in the Internet increasingly plays this unification role. Without analyzing in depth the reasons that led to this situation, several points that have favored this emergence can be given:

— the dynamism and reactivity of the groups in charge of standardization (see section 6.1), which have been able to adapt the protocols to the network evolution;

— a completely decentralized management of the network. Each site autonomously manages its part of the network and interconnects with the others. This favors network growth;

— a controlled attribution of level 3 addresses that avoids any conflict or ambiguity. With the level 3 IPX protocol, found in Novell networks, network numbers are chosen by site managers; several sites can have the same network number, making their interconnection impossible.

1.5.2.2. Network structure

The Internet is too vast and too dynamic a network to be managed by a single team. It is structured around domains with management autonomy, called autonomous systems. In practical terms, a domain administrator can define the shape of his or her network, add equipment and configure it as he or she wants, but cannot modify anything in the other domains. For traffic to be routed between domains, these must exchange their knowledge of the network, i.e. the addresses they know. At a macroscopic level, the Internet can be seen as a network of domains, hence its “network of networks” nickname. The Internet is structured so that:

— end domains are the origin or destination of packets. These are often companies permanently connected to the Internet;

— transit domains transport the packets generated by, or destined to, the end domains. This last category can in turn be divided into several families:

     – IAP (Internet access provider). Their role is to cover a given territory to offer a packet-level connection. For the general public, this is usually a temporary connection via the telephone network. If the company offers, in addition to access, value-added services such as email, access to discussion forums, or web servers where Internet pages are stored, we refer to it as an ISP (Internet service provider),

     – operators cover a larger territory. They can offer worldwide coverage to IP packets. They aggregate traffic from IAPs. To be able to reach any recipient, operators interconnect with each other through exchange points.

1.6. Structure of this book

This book is centered on the protocols found in the Internet network, both to build the network and to transport information.

Chapter 2 presents standardization work on local networks and introduces the IEEE (Institute of Electrical and Electronics Engineers) model, which applies to local networks. It also examines standardized cabling.

Chapter 3 discusses the Ethernet and IEEE 802.3 networks and their evolution towards high data rates.

Chapter 4 deals with Logical Link Control (LLC) and Sub-Network Access Protocol (SNAP) layers that allow level 3 protocols to be carried in some local networks.

Chapter 5 analyzes local network interconnection through bridges and explains the spanning tree algorithm.

Chapter 6 describes the organization of the Internet: the organizations that allow the Internet to function, as well as the rules applied to standardize protocols.

Chapter 7 discusses IP (versions 4 and 6) and the Internet Control Message Protocol (ICMP), which is associated with them.

Chapter 8 is dedicated to level 4 protocols and describes the adaptation rules to network constraints that equipment must implement to obtain better performance. In particular, it presents the flow control mechanisms implemented in TCP, which are a key element of the Internet’s success. This chapter also presents a possible evolution of the TCP protocol with the Stream Control Transmission Protocol (SCTP).

Chapter 9 describes the address resolution protocols between layers 2 and 3: Address Resolution Protocol (ARP), Reverse Address Resolution Protocol (RARP), BOOTP and Dynamic Host Configuration Protocol (DHCP), as well as between layers 7 and 3: the Domain Name System (DNS).

Chapter 10 examines the general principles of routing algorithms.

Chapter 11 presents the internal routing protocols (Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS)) used within an administration domain.

Chapter 12 covers the external routing protocol (Border Gateway Protocol, BGP), which enables us to design worldwide IP networks.

Chapter 13 looks at virtual local networks (VLAN). It might seem strange to place this chapter so far from the chapters dedicated to local networks. In fact, in practice, virtual local networks have an influence on IP addressing and require configuration and knowledge of the routers.

Chapter 14 is dedicated to Multiprotocol Label Switching (MPLS).

Chapter 15 is dedicated to the implementation of IP on serial links (Point-to-Point Protocol, PPP).

Chapter 16 deals with the administration of network equipment with the Simple Network Management Protocol (SNMP).

Chapter 17 discusses the problems associated with network security. In particular, this chapter presents firewall architectures.

Chapter 18 covers multimedia streaming management on the Internet and presents a resource reservation approach in routers (RSVP) and the service differentiation architecture.

Figure 1.3 summarizes, in the format of a protocol stack, the protocols developed in this book as well as the page numbers where they are described. The values on the horizontal lines indicate the number used to designate this protocol.

Figure 1.3.Internet Protocol Stack

1. Published in Nine Tomorrows, by Del Rey, January 12, 1985.

Chapter 2

Standardization and Wiring

2.1. The IEEE 802 committee

Efforts to standardize local networks started in 1979 under the direction of the IEEE (Institute of Electrical and Electronics Engineers). The goal of standardization was to adapt layer 1 and 2 of the OSI (Open System Interconnection) model to the specificities of local and metropolitan networks. In February 1980, the working group was named 802 (80 for the year and 2 for the month).

The goal of the IEEE 802 committee is to develop a standard enabling the transmission of information frames between two computer systems of current design, through a medium shared between these systems, whatever their architecture.

2.1.1. Traffic types and constraints

To adapt the OSI model to local networks, we must take into account application specificities that cause the traffic to have different characteristics:

— file transfers: the data rate must be high and the error rate very low; propagation delays can be high;

— office applications: the data rate can occasionally be high, the error rate must be low, and propagation delays must be low;

— command/control processes: data rates are relatively low but transmission times must be bounded, and the error rate must be low;

— image/voice transmission: data rates are relatively high and the transmission time must be as low as possible. On the other hand, the error rate can be higher.

The ISO reference model is built from a mesh architecture, and equipment is connected by point-to-point links. In local networks, the way to connect equipment is different. These networks are built on a transmission medium shared by all equipment. The main concepts that need to be added to the ISO reference model are:

— addresses to be able to differentiate each piece of equipment at level 2;

— an access method that guarantees that only one piece of equipment will send data at any given time on the shared medium.

2.1.2. Constraints

Initial constraints for local networks were the following:

— they could support at least 200 stations;

— a coverage of at least 2 km for local networks and 50 km for metropolitan networks;

— enable data rates between 1 Mbit/s and 100 Mbit/s;

— they needed to authorize the insertion and removal of stations without disruption;

— they needed to have an error rate lower than 10^-14;

— they needed to offer individual or group addressing to stations;

— they needed to conform with the OSI reference model;

— the access control to the transmission medium must enable:

     – simple initialization during power up,

     – reconfiguration in the case of a station breakdown,

     – equity of medium access among members,

     – possible management of priorities;

— a shared transmission medium imposing that only one station transmits at a time;

— for data transfer:

     – error detection and recovery or fault masking,

     – compatibility between different manufacturers,

     – robustness in case of a station breakdown.

With the progress made in electronics and signal processing, the objectives, architectures and topologies have evolved. The shared medium is increasingly being abandoned in favor of a star topology around active interconnection equipment. For wired networks, switching, which consists of sending information only towards the intended recipient, is also increasingly used at the expense of broadcasting towards all equipment. Of course, for wireless networks such as Wi-Fi, the transmission medium is still broadcast, since all equipment on this type of network shares the same frequency.

The range of local networks has been reduced: it is now only a few hundred meters. The number of stations is also now limited, to about 50 machines per network.

On the other hand, for metropolitan networks, distance constraints have been removed; it is possible to build a network without geographical limitations. The number of pieces of equipment connected will still, however, be limited.

Finally, Ethernet technology has been adopted for local networks as well as for metropolitan networks, and even increasingly often in operator networks. Chapter 3, page 37, describes this evolution.

2.2. The standards

In December 1981, three methods of access to the transmission medium were considered. This multiple offer led some to suggest that the group “could not make a decision”. In reality, however, just as there are several means of transportation for people and goods, there are several ways to access the physical medium depending on the type of application. The three methods were CSMA/CD (Carrier Sense Multiple Access/Collision Detect), token bus and token ring (see Figure 2.1). These methods are placed in a MAC (Medium Access Control) sublayer.

In 1982, the 802 committee was reorganized and divided into several groups. Figure 2.1 gives its general architecture. Some groups are now dormant (grayed-out in Figure 2.1):

— The IEEE 802.1 group for the network general architecture:

     – the layer architectural model presented in Figure 2.1;

     – address format (see section 2.2);

     – network interconnection techniques by bridge (see section 5.3);

     – etc.

— The IEEE 802.2 group for the LLC (Logical Link Control) sublayer: a protocol with three classes, called LLC type 1, LLC type 2 and LLC type 3, to manage data transfer (see Chapter 4, page 95). These three classes are respectively:

Figure 2.1.IEEE model

     – a simple service in unconnected mode. Retransmission after error, sequencing control and duplication detection are left to the upper layer.

     – a service in connected mode similar to services offered by the HDLC (High level Data Link Control) protocol: acknowledgement, sequencing control.

     – an unconnected service, but with acknowledgement, allowing transmission times that are low while remaining reliable.

— The IEEE 802.3 group for CSMA/CD (Carrier Sense Multiple Access/Collision Detect): the topology is a bus and the access principle is simple. Pieces of equipment listen to the channel before transmitting. If the channel is silent, the station can send its frame; otherwise transmission is delayed. Instead of avoiding simultaneous transmissions by several sources (called collisions) at all costs, the protocol tries to resolve these conflicts: the stations involved wait a random delay before attempting a new transmission. The protocol is very simple to implement and does not require information exchange between equipment to manage the right to talk. This simplicity translates into very low-cost equipment. The protocol and its variant, Ethernet, are presented in Chapter 3, page 54;
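The random delay after a collision follows the truncated binary exponential backoff rule of CSMA/CD. A sketch of the computation (the slot time shown is the one for 10 Mbit/s Ethernet; the function and constant names are ours):

```python
import random

SLOT_TIME = 51.2e-6  # seconds; slot time of 10 Mbit/s Ethernet

def backoff_delay(collisions: int) -> float:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times drawn from [0, 2^k - 1], where
    k = min(n, 10); after 16 failed attempts the frame is dropped."""
    if collisions > 16:
        raise RuntimeError("frame dropped after 16 attempts")
    k = min(collisions, 10)
    return random.randrange(2 ** k) * SLOT_TIME

# Two colliding stations draw independent delays; with high probability
# they differ, so the retransmissions no longer overlap.
delay_a = backoff_delay(1)   # 0 or 1 slot time
delay_b = backoff_delay(1)
```

Doubling the interval at each collision is what lets the protocol adapt the retry rate to the load without any exchange of information between stations.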

— The IEEE 802.4 committee for token bus, promoted by General Motors for industrial applications because, contrary to Ethernet or IEEE 802.3, this protocol guarantees an upper bound on the transmission time of a message on the network.

Figure 2.2.Token bus

The right to talk is symbolized by ownership of a special message called the token. The transmission of a message is based on the natural broadcast properties of local networks. The transmission of the token must be point-to-point, since only one station can have ownership. A virtual ring must be artificially built above the bus, enabling the token to circulate. A large part of the protocol complexity comes from this management. Figure 2.2 illustrates some of these problems. Let us assume that station C breaks down; station D will send it the token, which will be lost. Station D will then have to enter a ring reconfiguration phase to find another successor. Another problem arises when a station must be inserted into the virtual ring: station G cannot join the ring as long as no station sends it the token. The protocol provides that, periodically, active stations test for the presence of new equipment.

The ring management protocol on a bus is relatively complex, implying that the cards implementing it run an algorithm. This requires a CPU, memory, etc., on the card. Moreover, the deterministic guarantees offered by the token bus are not needed in office applications. This protocol is now almost never used.

— The IEEE 802.5 group for token ring: the mechanism of the right to talk is also based on a token, but its circulation is simplified because a physical ring exists. IBM announced the first token ring prototypes in 1981. In 1985 it was the first network with a data rate of 4 Mbit/s to become commercially available, at the same time as the ISO 8802.5 standard.

The functioning principle is relatively simple in broad terms. A communication medium composed of N point-to-point links circularly connects all N stations wanting to be members of the network (see Figure 2.3).

For the network to function, one and only one station can send data at any given time. The right to talk is symbolized by ownership of a token. It is a special frame that circulates from station to station following the network ring topology. If a station wants to send a message, it awaits reception of the token, removes it from the network, sends its message, then reinserts the token into the ring. If a station has nothing to send, it lets the token go.
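The circulation of the right to talk can be sketched as a small round-robin simulation. This is purely illustrative (station names, queues and the `token_ring` helper are invented for the example; real token ring frames also circulate back to the sender, which is not modeled here):

```python
from collections import deque

def token_ring(stations, queued_frames, laps=2):
    """Simulate token circulation: the station holding the token sends
    at most one queued frame, then passes the token to its successor."""
    sent = []
    ring = deque(stations)
    for _ in range(laps * len(stations)):
        holder = ring[0]                  # station currently owning the token
        if queued_frames.get(holder):
            sent.append((holder, queued_frames[holder].pop(0)))
        ring.rotate(-1)                   # token goes to the next station
    return sent

# A and C have frames queued; B lets the token go by.
order = token_ring(["A", "B", "C"], {"A": ["f1"], "C": ["f2", "f3"]})
```

Because only the token holder transmits, access is fair by construction: no station can send a second frame before every other station has had a chance to take the token.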

Figure 2.3.Ring topology

For a frame to reach its destination, it must be copied from station to station. The recipient continues the retransmission while keeping a copy for itself. When the message has completed a full lap, the sender removes it from the network by not recopying it onto the outgoing link. Since the message makes a full lap, it provides support for multicast, but also serves as an acknowledgement for the sender, which sees its own message come back intact.

A physical ring, however, is fragile: a single cable cut or station breakdown interrupts it. To resolve these issues, a star wiring is preferred to a ring one. A central piece of equipment internally reproduces the ring topology. Double wiring enables the signal to go to and come back from each station (see Figure 2.4).

The MAU (multistation access unit) equipment, although it does not modify the signal, must be “intelligent” enough to detect a cut wire, a station breakdown or a power-down. In these cases, the MAU must close the circuit, as indicated in Figure 2.4.

Figure 2.4.Star topology

Although this channel access method presents some advantages, such as bounded frame transmission delays and the ability to define priorities, the technology is quickly losing ground to Ethernet. Indeed, it has not been as reactive as Ethernet in adapting to new transmission modes.

— In 1990, the IEEE 802.6 group was added to deal with metropolitan networks (MAN). This protocol, also called DQDB (Distributed Queue Dual Bus), is based on two buses carrying information in opposite directions. At each end, a generator produces slots in which pieces of equipment are able to send their message.

When a station wants to send messages, it determines which bus will enable it to reach the addressee. It positions a flag in a slot of the other bus, indicating to upstream stations its intent to transmit. In the station, an access mechanism based on counters enables it to determine the free slot in which the message will be sent. The DQDB protocol was intended to be used in metropolitan networks to transmit both telephone communications and data. It arrived too late: with technological advances in electronics, it was better to use a star topology, such as ATM, than a shared-medium topology1.

— Two technical advisory groups (TAGs) were created to serve as liaison with other groups and to help choose the right technologies (802.7 and 802.8). These two TAGs do not produce standards. The document produced by the 802.7 group specifies the design, installation and test parameters of networks using frequency encoding of binary information (10BROAD36, IEEE 802.4, etc.). Frequency encoding enables data multiplexing and the coexistence of networks of diverse natures (data, video, etc.) on the same medium. The IEEE 802.8 working group deals in a similar way with optical fiber wiring for local and metropolitan networks.

— The IEEE 802.9 group, or isoEthernet, standardizes access techniques for networks integrating voice and data. The cost of wiring is an important component of a network installation cost: to wire an office, two networks must be installed, the telephone network and the computer network. The standard enables telephone (ISDN) and data (IEEE 802.x or FDDI) networks to share the same medium. In fact, the current tendency is rather to carry multimedia data, such as voice-over-IP, on local networks than to physically share the medium between two specialized networks.

— The IEEE 802.10 group deals with transmission security. Transmission security is not ensured in local networks, which are based on a broadcast medium: a single PC connected to the network is enough to capture all the traffic and passwords that circulate. The IEEE 802.10 protocol proposes in particular to encipher data transmitted between equipment. Legal issues, however, have slowed down its deployment. Moreover, other enciphering techniques have been developed for network layer protocols, which further reduces interest in this protocol.

A diverted use of this protocol can be found in the management of virtual networks (see Chapter 19);

— The IEEE 802.11 group deals with wireless networks or WLANs (wireless LANs). Besides the cost of wiring, the need for a physical connection to network equipment is ill-suited to the new constraints associated with the use of laptop computers. The IEEE 802.11 standard enables the transmission of information at data rates ranging from 1 to 2 Mbit/s using radio waves in the 2.4 GHz band or infrared links. The transmission range can reach 100 m, but in offices, where there are numerous obstacles, it drops to about 30 m.

— The IEEE 802.12 group proposes an alternative for 100 Mbit/s networks, also called 100VG-AnyLAN because it uses wiring adapted to voice (VG: voice grade) to transmit data at 100 Mbit/s. This protocol is also compatible with the IEEE 802.3 and IEEE 802.5 frame formats, hence its commercial name AnyLAN.

The central element of 100VG-AnyLAN networks is a hub. It has ports for connecting computer equipment, as well as a special port enabling it to connect to another 100VG-AnyLAN hub. Hubs can be cascaded on three levels.

This technology failed to establish itself against the evolution of the IEEE 802.3 standard towards high data rates.

— The IEEE 802.13 group does not exist because of superstition about the number 13.

— The IEEE 802.14 group, created in 1996, deals with digital transmission of cable TV networks.

— The IEEE 802.15 group, created in March 1999, is involved with wireless personal area networks (WPAN). Several subgroups deal with different technologies:

     – IEEE 802.15.1 has taken over the Bluetooth standard defined by several manufacturers. It enables communication at about 1 Mbit/s within a 10 m range around an individual;

     – IEEE 802.15.2 studies the integration of wireless personal and local area networks, which can use the same frequencies in different ways;

     – IEEE 802.15.3 defines high data rate WPANs (more than 20 Mbit/s);

     – IEEE 802.15.4, on the contrary, defines very low data rate wireless networks with very low power consumption. This technology can be used for sensor networks. The MAC part and application protocols are known under the commercial name ZigBee;

     – IEEE 802.15.5 deals with mesh networks;

     – IEEE 802.15.6 defines Body Area Networks (BANs).

— The IEEE 802.16 group deals with broadband wireless access networks (BWA: Broadband Wireless Access). The commercial name is WiMAX.

— The IEEE 802.17 group, called RPR (Resilient Packet Ring) deals with reconfiguration problems of SDH rings.

— Two TAGs — IEEE 802.18 and IEEE 802.19 — deal respectively with frequency management aspects and cohabitation of different IEEE standards between themselves.

— The IEEE 802.20 group is an alternative to WiMAX (IEEE 802.16), by integrating the aspects of mobility.

— The IEEE 802.21 group occupies itself with the hand-off from one IEEE technology to another by a mobile user.

— The IEEE 802.22 group uses the ultra-high frequency/very high frequency spectrum unused by TV to build wireless regional networks.

For a new project to be studied, a project authorization request must be voted on. Its designation depends on the nature of the study. If the subject can be examined by an existing sub-committee, it is referenced by a letter after the name of that sub-committee, and the documents produced will be integrated into the next revision of the standards. A capital letter designates an autonomous document, whereas a lowercase letter indicates a complementary document. There is usually no correlation between capital and lowercase letters. For example, the IEEE 802.1p document is a complement to the IEEE 802.1D document describing bridging in local networks. If the work deviates too far from the existing sub-committees, a new sub-committee is created.

2.3. IEEE 802.1 addressing

Although the access methods differ, station addressing is the same. The IEEE 802.1 standard proposes two types of addresses: a short one on 16 bits for non-interconnected local area networks, and a 48-bit address for interconnected networks. The 16-bit type is used for WPAN networks such as IEEE 802.15.4. Figure 2.5 represents these two possible address formats.

A universal address (bit U/L) is managed by an international organization (the IEEE), whereas a local address is chosen by the network administrator.

Figure 2.5.Format of a MAC address

A universal address (i.e. with bit U/L set to 0) is divided into two parts. The IEEE attributes to card vendors (or manufacturers) the three left bytes, also called the OUI (Organizationally Unique Identifier). OUIs are attributed to companies requesting them, at a cost of $1,250. The three right bytes designate the serial number in the vendor’s production. By construction, each address is unique. On the other hand, we cannot expect any logic in the numbering when considering a particular network. A non-exhaustive list of vendors’ addresses can be found on the IEEE web server2. Table 2.1 gives some examples.
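Splitting a written MAC address into its vendor and serial-number halves is straightforward; the sketch below uses the Cisco OUI from Table 2.1 (the `split_mac` helper is our own, not a standard API):

```python
def split_mac(mac: str):
    """Return the (OUI, serial) halves of a 48-bit MAC address
    written in the usual 'XX-XX-XX-XX-XX-XX' hexadecimal form."""
    octets = mac.split("-")
    if len(octets) != 6:
        raise ValueError("expected six hexadecimal octets")
    return "-".join(octets[:3]), "-".join(octets[3:])

# Three left bytes: vendor identifier; three right bytes: serial number.
oui, serial = split_mac("00-00-0C-12-34-56")
```

Here `oui` is "00-00-0C", the Cisco prefix from Table 2.1, and `serial` is the card number "12-34-56" chosen by that vendor.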

The universal address simplifies network management, since the administrator does not have to attribute the values. For token ring networks, we usually prefer using a local address identifying the ring number, then the equipment number on this ring.

MAC universal addresses uniquely designate a station in the world. For group addressing, there are two methods:

— Broadcast: the broadcast address is unique and recognized by all stations. This address is equal to FF-FF-FF-FF-FF-FF (all bits set to 1). All stations connected to the local area network read frames carrying this address. Filtering, to determine whether the frame is indeed intended for the station, is done by higher level layers.

Table 2.1.Codes reserved to vendors

Start of MAC address (in hexadecimal)

Vendor

00-00-0C

Cisco

00-00-1D

Cabletron

08-00-20

Sun

08-00-2B

DEC

08-00-5A

IBM

— Multicast or restricted broadcast: the major disadvantage of broadcast comes from message filtering by the higher level layers. For each broadcast message, the MAC layer wakes up the higher level layers. Filtering is done by the operating system and consumes machine resources (CPU, memory, etc.). This translates into a loss of performance for all the network’s stations.

For multicast, stations that want to access a service (or group) must explicitly subscribe. They pass the group MAC address to the network interface component. When the component recognizes a frame with a previously registered group address, it transmits it to the higher level layers. Stations that have not registered that particular multicast address filter these frames out as frames not intended for them. Filtering is done by the communication controller at the MAC level and does not penalize the station’s performance.

A group (multicast or broadcast) frame starts with a first bit set to 1. RFC 1700 gives examples of multicast addresses. Table 2.2 illustrates some of these addresses.

2.3.1. MAC address

The representation of data circulating on the network can sometimes cause problems and create confusion when reading tables. The IEEE considers that the first transmitted bit is the least significant. This representation is not intuitive because it does not correspond to the natural reading order from left to right. The hexadecimal value 0x7A (or 0111 1010) indicates that bits 0, then 1, then 0… are transmitted on the physical medium. If we write these bits in the order of transmission, we get the binary value 0101 1110, or a hexadecimal value of 0x5E. The latter representation is used by the Ethernet or the Internet.
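Going from one written form to the other is a per-byte bit reversal. A short check of the 0x7A example from the text (the `reverse_bits` helper is ours):

```python
def reverse_bits(byte: int) -> int:
    """Reverse the 8 bits of a byte. Since the IEEE transmits the
    least significant bit first, the two written forms of a byte
    (canonical order vs. transmission order) are bit-reversed images
    of each other."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)  # take the next LSB
        byte >>= 1
    return result

assert reverse_bits(0x7A) == 0x5E   # the example from the text
assert reverse_bits(0x5E) == 0x7A   # the operation is its own inverse
```

The reversal is applied byte by byte: the order of the six bytes of a MAC address does not change, only the bit order within each byte.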

Table 2.2.Examples of multicast addresses

MAC Address

from 01-00-5E-00-00-00 to 01-00-5E-7F-FF-FF

Internet multicast (RFC 1112)

from 01-00-5E-80-00-00 to 01-00-5E-FF-FF-FF

Internet address reserved by IANA

09-00-09-00-00-01

HP Probe

This can be seen in the multicast addresses given in Table 2.2: the value of the first byte used to indicate a group frame is 0x01 and not 0x80, since the first transmitted bit is the least significant bit of that byte.
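Because the group bit is the first bit transmitted, testing whether a written address is a group address amounts to testing the least significant bit of its first byte. A minimal sketch (the `is_group_address` helper is our own name):

```python
def is_group_address(mac: str) -> bool:
    """A group (multicast or broadcast) MAC address has its first
    transmitted bit set to 1, i.e. the least significant bit of the
    first byte in the usual written form."""
    first_byte = int(mac.split("-")[0], 16)
    return first_byte & 0x01 == 1

assert is_group_address("FF-FF-FF-FF-FF-FF")      # broadcast
assert is_group_address("01-00-5E-00-00-01")      # Internet multicast
assert not is_group_address("08-00-20-12-34-56")  # Sun unicast address
```

This is exactly the test a communication controller performs at the MAC level to decide whether to wake up the higher layers.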

2.3.2.