Wireless Mobile Internet Security - Man Young Rhee - E-Book

Man Young Rhee

Description

The mobile industry for wireless cellular services has grown at a rapid pace over the past decade. Similarly, Internet service technology has grown dramatically through the World Wide Web over its wireline infrastructure. Realizing complete wired/wireless mobile Internet technology is the future objective of converging these technologies through multiple enhancements of both cellular mobile systems and Internet interoperability. Flawless integration between these two wired/wireless networks will enable subscribers not only to roam worldwide but also to meet the ever-increasing demand for data/Internet services. To keep up with this noteworthy growth in the demand for wireless broadband, new technologies and structural architectures are needed to greatly improve system performance and network scalability while significantly reducing the cost of equipment and deployment.

Dr. Rhee covers the technological development of wired/wireless Internet communications through each iterative generation up to 4G systems, with emphasis on wireless security aspects. By progressing in a systematic manner through the theory and practice of wired/wireless mobile technologies and their various security problems, readers will gain an intimate sense of how mobile Internet systems operate and how to address complex security issues.

Features:

  • Written by a top expert in information security
  • Gives a clear understanding of wired/wireless mobile internet technologies
  • Presents complete coverage of the cryptographic protocols and specifications needed for 3GPP: AES, KASUMI, public-key, and elliptic curve cryptography
  • Forecasts new features and promising 4G packet-switched wireless Internet technologies for voice and data communications
  • Covers MIMO/OFDMA-based air interfaces for 4G systems such as Long Term Evolution (LTE), Ultra Mobile Broadband (UMB), Mobile WiMAX, and Wireless Broadband (WiBro)
  • Deals with Intrusion Detection Systems defending against worm/virus cyber attacks

The book is ideal for advanced undergraduate and postgraduate students enrolled in courses such as Wireless Access Networking and Mobile Internet Radio Communications. Practicing engineers in industry and research scientists can use the book as a reference to get reacquainted with mobile radio fundamentals or to gain a deeper understanding of complex security issues.


Page count: 725

Publication year: 2013




Table of Contents

Title Page

Copyright

Preface

About the Author

Acknowledgments

Chapter 1: Internetworking and Layered Models

1.1 Networking Technology

1.2 Connecting Devices

1.3 The OSI Model

1.4 TCP/IP Model

Chapter 2: TCP/IP Suite and Internet Stack Protocols

2.1 Network Layer Protocols

2.2 Transport Layer Protocols

2.3 World Wide Web

2.4 File Transfer

2.5 E-Mail

2.6 Network Management Service

2.7 Converting IP Addresses

2.8 Routing Protocols

2.9 Remote System Programs

2.10 Social Networking Services

2.11 Smart IT Devices

2.12 Network Security Threats

2.13 Internet Security Threats

2.14 Computer Security Threats

Chapter 3: Global Trend of Mobile Wireless Technology

3.1 1G Cellular Technology

3.2 2G Mobile Radio Technology

3.3 2.5G Mobile Radio Technology

3.4 3G Mobile Radio Technology (Situation and Status of 3G)

3.5 3G UMTS Security-Related Encryption Algorithm

Chapter 4: Symmetric Block Ciphers

4.1 Data Encryption Standard (DES)

4.2 International Data Encryption Algorithm (IDEA)

4.3 RC5 Algorithm

4.4 RC6 Algorithm

4.5 AES (Rijndael) Algorithm

Chapter 5: Hash Function, Message Digest, and Message Authentication Code

5.1 DMDC Algorithm

5.2 Advanced DMDC Algorithm

5.3 MD5 Message-Digest Algorithm

5.4 Secure Hash Algorithm (SHA-1)

5.5 Hashed Message Authentication Codes (HMAC)

Chapter 6: Asymmetric Public-Key Cryptosystems

6.1 Diffie–Hellman Exponential Key Exchange

6.2 RSA Public-Key Cryptosystem

6.3 ElGamal's Public-Key Cryptosystem

6.4 Schnorr's Public-Key Cryptosystem

6.5 Digital Signature Algorithm

6.6 The Elliptic Curve Cryptosystem (ECC)

Chapter 7: Public-Key Infrastructure

7.1 Internet Publications for Standards

7.2 Digital Signing Techniques

7.3 Functional Roles of PKI Entities

7.4 Key Elements for PKI Operations

7.5 X.509 Certificate Formats

7.6 Certificate Revocation List

7.7 Certification Path Validation

Chapter 8: Network Layer Security

8.1 IPsec Protocol

8.2 IP Authentication Header

8.3 IP ESP

8.4 Key Management Protocol for IPsec

Chapter 9: Transport Layer Security: SSLv3 and TLSv1

9.1 SSL Protocol

9.2 Cryptographic Computations

9.3 TLS Protocol

Chapter 10: Electronic Mail Security: PGP, S/MIME

10.1 PGP

10.2 S/MIME

Chapter 11: Internet Firewalls for Trusted Systems

11.1 Role of Firewalls

11.2 Firewall-Related Terminology

11.3 Types of Firewalls

11.4 Firewall Designs

11.5 IDS Against Cyber Attacks

11.6 Intrusion Detection Systems

Chapter 12: SET for E-Commerce Transactions

12.1 Business Requirements for SET

12.2 SET System Participants

12.3 Cryptographic Operation Principles

12.4 Dual Signature and Signature Verification

12.5 Authentication and Message Integrity

12.6 Payment Processing

Chapter 13: 4G Wireless Internet Communication Technology

13.1 Mobile WiMAX

13.2 WiBro (Wireless Broadband)

13.3 UMB (Ultra Mobile Broadband)

13.4 LTE (Long Term Evolution)

Acronyms

Bibliography

Index

This edition first published 2013

© 2013 John Wiley and Sons Ltd

Registered office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Rhee, Man Young.

Wireless mobile internet security / Man Young Rhee.— Second edition.

pages cm

Includes bibliographical references and index.

ISBN 978-1-118-49653-4 (cloth)

1. Wireless Internet—Security measures. I. Title.

TK5103.4885.R49 2013

004.67′8—dc23

2012040165

A catalogue record for this book is available from the British Library.

ISBN: 9781118496534

Preface

The mobile industry for wireless cellular services has grown at a rapid pace over the past decade. Similarly, Internet service technology has grown dramatically through the World Wide Web over its wireline infrastructure. Realizing complete mobile Internet technology is the future objective of converging these technologies through multiple enhancements of both cellular mobile systems and Internet interoperability.

Flawless integration between these two wired/wireless networks will enable subscribers not only to roam worldwide but also to meet the ever-increasing demand for data/Internet services. However, the technology development and service deployment of 4G systems will take many years to complete. To keep up with this noteworthy growth in the demand for wireless broadband, new technologies and structural architectures are needed to greatly improve system performance and network scalability, while significantly reducing the cost of equipment and deployment. The present concept of P2P networking for exchanging information needs to be extended to support intelligent appliances: ubiquitous connectivity to Internet services, fast broadband access at data rates above 50 Mbps, seamless global roaming, and Internet data/voice multimedia services.

The 4G system is a development initiative based on the currently deployed 2G/3G infrastructure, enabling seamless integration with emerging 4G access technologies. For successful interoperability, the path toward 4G networks should incorporate a number of critical trends in network integration. MIMO/OFDMA-based air interfaces for beyond-3G systems, such as Long Term Evolution (LTE), Ultra Mobile Broadband (UMB), Mobile WiMAX (Worldwide Interoperability for Microwave Access), and Wireless Broadband (WiBro), are called 4G systems.

Chapter 1 begins with a brief history of the Internet and describes topics covering (i) networking fundamentals such as LANs (Ethernet, Token Ring, FDDI), WANs (Frame Relay, X.25, PPP), and ATM; (ii) connecting devices such as circuit- and packet-switches, repeaters, bridges, routers, and gateways; (iii) the OSI model that specifies the functionality of its seven layers; and finally, (iv) a TCP/IP five-layer suite providing a hierarchical protocol made up of physical standards, a network interface, and internetworking.

Chapter 2 presents a state-of-the-art survey of the TCP/IP suite. Topics covered include (i) TCP/IP network layer protocols such as ICMP, IP version 4, and IP version 6 relating to the IP packet format, addressing (including ARP, RARP, and CIDR), and routing; (ii) transport layer protocols such as TCP and UDP; (iii) HTTP for the World Wide Web; (iv) FTP, TFTP, and NFS protocols for file transfer; (v) SMTP, POP3, IMAP, and MIME for e-mail; and (vi) SNMP for network management. This chapter also introduces the latest social networking services and smart IT devices. With the introduction of smart services and devices, security problems have become an issue. This chapter introduces security threats such as (i) worms, viruses, and DDoS for network security; (ii) phishing and SNS security for Internet security; and (iii) exploits, password cracking, rootkits, Trojan horses, and so on for computer security.
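The classless addressing (CIDR) surveyed in Chapter 2 can be tried directly with Python's standard ipaddress module; the addresses below are arbitrary examples chosen for illustration:

```python
import ipaddress

# A CIDR block aggregates a contiguous range of IP addresses under
# one prefix; /22 leaves 32 - 22 = 10 host bits, i.e. 1024 addresses.
net = ipaddress.ip_network("192.168.4.0/22")

print(net.num_addresses)                            # 1024
print(net.netmask)                                  # 255.255.252.0
print(ipaddress.ip_address("192.168.6.17") in net)  # True

# IPv6 networks use the same API (Chapter 2 covers both IPv4 and IPv6).
net6 = ipaddress.ip_network("2001:db8::/32")
print(net6.version)                                 # 6
```

Membership tests like the one above are exactly what a router's longest-prefix match performs on every forwarded packet.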

Chapter 3 presents the evolution and migration of mobile radio technologies from first generation (1G) to third generation (3G). 1G, or circuit-switched analog systems, consist of voice-only communications; 2G and beyond systems, comprising both voice and data communications, largely rely on packet-switched wireless mobile technologies. This chapter covers the technological development of mobile radio communications in compliance with each iterative generation over the past decade. At present, mobile data services have been rapidly transforming to facilitate and ultimately profit from the increased demand for nonvoice services. Through aggressive 3G deployment plans, the world's major operators boast attractive and homogeneous portal offerings in all of their markets, notably in music and video multimedia services. Despite the improbability of any major changes in the next 4–5 years, rapid technological advances have already bolstered talks for 3.5G and even 4G systems. For each generation, the following technologies are introduced:

1. 1G Cellular Technology: AMPS (Advanced Mobile Phone System), NMT (Nordic Mobile Telephone), and TACS (Total Access Communications System)
2. 2G Mobile Radio Technology: CDPD (Cellular Digital Packet Data, a North American protocol), GSM (Global System for Mobile Communications), TDMA-136 or IS-54, iDEN (Integrated Digital Enhanced Network), cdmaOne IS-95A, PDC (Personal Digital Cellular), i-mode, and WAP (Wireless Application Protocol)
3. 2.5G Mobile Radio Technology: ECSD (Enhanced Circuit-Switched Data), HSCSD (High-Speed Circuit-Switched Data), GPRS (General Packet Radio Service), EDGE (Enhanced Data rates for GSM Evolution), and cdmaOne IS-95B
4. 3G Mobile Radio Technology: UMTS (Universal Mobile Telecommunication System), HSDPA (High-Speed Downlink Packet Access), FOMA, CDMA2000 1x, CDMA2000 1xEV (1x Evolution), CDMA2000 1xEV-DO (1x Evolution Data Only), CDMA2000 1xEV-DV (1x Evolution Data Voice), and the KASUMI encryption function

Chapter 4 deals with some of the important contemporary block cipher algorithms that have been developed over recent years, with an emphasis on the most widely used encryption techniques such as the Data Encryption Standard (DES), the International Data Encryption Algorithm (IDEA), the RC5 and RC6 encryption algorithms, and the Advanced Encryption Standard (AES). AES specifies a FIPS-approved Rijndael algorithm (2001) that can process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits. DES is not new, but it has survived remarkably well over 20 years of intense cryptanalysis. The complete analysis of triple DES-EDE in CBC mode is also included. Pretty Good Privacy (PGP), used for e-mail and file storage applications, utilizes IDEA for conventional block encryption, along with RSA for public-key encryption and MD5 for hash coding. RC5 and RC6 are both parameterized block algorithms with variable block size, variable number of rounds, and variable-length key. They are designed for great flexibility in both performance and level of security.
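The Feistel structure underlying DES can be sketched in a few lines; the round function `F` and the subkeys below are toy stand-ins for illustration, not the real DES F-function or key schedule:

```python
def F(half, subkey):
    # Toy round function: real DES uses expansion, S-boxes, and permutation.
    return (half * 0x9E3779B1 ^ subkey) & 0xFFFFFFFF

def feistel_encrypt(block, subkeys):
    # Split the 64-bit block into 32-bit halves L and R.
    L, R = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:
        L, R = R, L ^ F(R, k)   # one Feistel round
    return (R << 32) | L        # final swap, as in DES

def feistel_decrypt(block, subkeys):
    # Decryption is the same network with the subkeys in reverse order:
    # the XOR in each round cancels itself, whatever F is.
    return feistel_encrypt(block, list(reversed(subkeys)))

subkeys = [0x0F1571C9, 0x47D9E859, 0x0BCAA19D, 0xB979379E]
ct = feistel_encrypt(0x0123456789ABCDEF, subkeys)
pt = feistel_decrypt(ct, subkeys)
assert pt == 0x0123456789ABCDEF
```

The point of the structure is visible in `feistel_decrypt`: decryption never needs to invert `F`, which is why DES can use non-invertible S-box mixing.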

Chapter 5 covers the various authentication techniques based on digital signatures. It is often necessary for communication parties to verify each other's identity. One practical way to do this is with the use of cryptographic authentication protocols employing a one-way hash function. Several contemporary hash functions (such as DMDC, MD5, and SHA-1) are introduced to compute message digests or hash codes for providing a systematic approach to authentication. This chapter also extends the discussion to include the Internet standard HMAC, which is a secure digest of protected data. HMAC is used with a variety of different hash algorithms, including MD5 and SHA-1. Transport Layer Security (TLS) also makes use of the HMAC algorithm.
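The digest and HMAC computations described above map directly onto Python's standard hashlib and hmac modules; the key and message here are arbitrary examples:

```python
import hashlib
import hmac

msg = b"authenticate this message"

# Message digests with MD5 and SHA-1 (Sections 5.3 and 5.4).
print(hashlib.md5(msg).hexdigest())
print(hashlib.sha1(msg).hexdigest())

# HMAC (Section 5.5): a keyed digest. Without the shared secret,
# the tag can be neither forged nor verified.
key = b"shared-secret"
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

# Verification recomputes the tag; compare_digest avoids leaking
# information through comparison timing.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha1).hexdigest())
```

Swapping `hashlib.sha1` for `hashlib.md5` gives HMAC-MD5, illustrating how HMAC treats the underlying hash as a black box.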

Chapter 6 describes several public-key cryptosystems developed after conventional encryption. This chapter concentrates on their use in providing techniques for public-key encryption, digital signature, and authentication. This chapter covers in detail the widely used Diffie–Hellman key exchange technique (1976), the Rivest–Shamir–Adleman (RSA) algorithm (1978), the ElGamal algorithm (1985), the Schnorr algorithm (1990), the Digital Signature Algorithm (DSA, 1991), and the Elliptic Curve Cryptosystem (ECC, 1985) and Elliptic Curve Digital Signature Algorithm (ECDSA, 1999).
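The Diffie–Hellman exchange of Section 6.1 can be sketched as follows; the tiny prime is for readability only, since a real exchange uses a prime of 2048 bits or more:

```python
import secrets

# Toy Diffie-Hellman key exchange. Public parameters: prime modulus p
# and generator g. Only A and B travel over the wire; the private
# exponents a and b never leave their owners.
p, g = 2087, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # Alice sends g^a mod p
B = pow(g, b, p)                   # Bob sends g^b mod p

# Both sides arrive at the same shared secret g^(ab) mod p.
k_alice = pow(B, a, p)
k_bob = pow(A, b, p)
assert k_alice == k_bob
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving the discrete logarithm problem, which is what the scheme's security rests on.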

Chapter 7 presents profiles related to a public-key infrastructure (PKI) for the Internet. The PKI automatically manages public keys through the use of public-key certificates. The Policy Approval Authority (PAA) is the root of the certificate management infrastructure. This authority is known to all entities at all levels in the PKI and creates guidelines that all users, CAs, and subordinate policy-making authorities must follow. Policy Certificate Authorities (PCAs) are formed by all entities at the second level of the infrastructure. PCAs must publish their security policies, procedures, legal issues, fees, and any other subjects they may consider necessary. Certification Authorities (CAs) form the next level below the PCAs. The PKI contains many CAs that have no policy-making responsibilities. A CA has any combination of users and RAs whom it certifies. The primary function of the CA is to generate and manage the public-key certificates that bind the user's identity with the user's public key. The Registration Authority (RA) is the interface between a user and a CA. The primary function of the RA is user identification and authentication on behalf of a CA. It also delivers the CA-generated certificate to the end user. X.500 specifies the directory service. X.509 describes the authentication service using the X.500 directory. X.509 certificates have evolved through three versions: version 1 in 1988, version 2 in 1993, and version 3 in 1996. X.509 v3 is now found in numerous products and Internet standards. These three versions are explained in turn. Finally, Certificate Revocation Lists (CRLs) are used to list unexpired certificates that have been revoked. Certificates may be revoked for a variety of reasons, ranging from routine administrative revocations to situations where private keys are compromised.
This chapter also includes the certification path validation procedure for the Internet PKI and architectural structures for the PKI certificate management infrastructure.
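The certification path validation idea can be illustrated with a toy issuer chain; the entity names below are hypothetical, and real validation also checks signatures, validity periods, policy constraints, and CRL status:

```python
# Toy certification path walk. Each "certificate" is reduced to a
# subject -> issuer link; the PAA at the root is the trust anchor.
certs = {
    "alice@example.com": "Engineering CA",   # end-entity certificate
    "Engineering CA": "Policy CA",           # CA certified by a PCA
    "Policy CA": "Root PAA",                 # PCA certified by the root
}
trust_anchors = {"Root PAA"}

def build_path(subject):
    """Follow issuer links from an end entity up to a trust anchor."""
    path = [subject]
    while subject not in trust_anchors:
        issuer = certs.get(subject)
        if issuer is None:
            raise ValueError("no issuer found; path cannot be validated")
        path.append(issuer)
        subject = issuer
    return path

print(build_path("alice@example.com"))
# ['alice@example.com', 'Engineering CA', 'Policy CA', 'Root PAA']
```

The walk mirrors the PAA/PCA/CA hierarchy described above: validation succeeds only if every link terminates at a known trust anchor.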

Chapter 8 describes the IPsec protocol for network layer security. IPsec provides the capability to secure communications across a LAN, across a virtual private network (VPN) over the Internet, or over a public WAN. Provision of IPsec enables a business to rely heavily on the Internet. The IPsec protocol is a set of security extensions developed by IETF to provide privacy and authentication services at the IP layer using cryptographic algorithms and protocols. To protect the contents of an IP datagram, there are two main transformation types: the Authentication Header (AH) and the Encapsulating Security Payload (ESP). These are protocols to provide connectionless integrity, data origin authentication, confidentiality, and an antireplay service. A Security Association (SA) is fundamental to IPsec. Both AH and ESP make use of an SA that is a simple connection between a sender and receiver, providing security services to the traffic carried on it. This chapter also includes the OAKLEY key determination protocol and ISAKMP.
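The antireplay service mentioned above can be sketched as a sliding-window check over sequence numbers; the window size and class layout here are illustrative, not taken from any particular implementation:

```python
# Sketch of the IPsec anti-replay service: the receiver tracks sequence
# numbers inside a sliding window and rejects duplicates and packets
# older than the window. Real implementations commonly use a window of
# 64 or more, maintained as a bitmap per Security Association.
WINDOW = 32

class ReplayChecker:
    def __init__(self):
        self.highest = 0          # highest sequence number seen so far
        self.seen = set()         # numbers accepted within the window

    def accept(self, seq):
        if seq + WINDOW <= self.highest:
            return False          # too old: fell off the left of the window
        if seq in self.seen:
            return False          # duplicate: replayed packet
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # Drop state that has fallen out of the window.
        self.seen = {s for s in self.seen if s + WINDOW > self.highest}
        return True

rc = ReplayChecker()
assert rc.accept(1) and rc.accept(2)
assert not rc.accept(2)           # replay detected
```

Both AH and ESP carry the sequence number that feeds a check like this, which is why the antireplay service works even when confidentiality is not in use.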

Chapter 9 discusses Secure Socket Layer version 3 (SSLv3) and TLS version 1 (TLSv1). The TLSv1 protocol itself is based on the SSLv3 protocol specification. Many of the algorithm-dependent data structures and rules are very similar, so the differences between TLSv1 and SSLv3 are not dramatic. The TLSv1 protocol provides communications privacy and data integrity between two communicating parties over the Internet. Both protocols allow client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The SSL or TLS protocol is composed of two layers: Record Protocol and Handshake Protocol. The Record Protocol takes an upper-layer application message to be transmitted, fragments the data into manageable blocks, optionally compresses the data, applies a MAC, encrypts it, adds a header, and transmits the result to TCP. Received data is decrypted, verified, decompressed, and reassembled before being delivered to higher-level clients. The Handshake Protocol, operating on top of the Record Layer, is the most important part of SSL or TLS. The Handshake Protocol consists of a series of messages exchanged by client and server. This protocol provides three services between the server and client. The Handshake Protocol allows the client/server to agree on a protocol version, to authenticate each other by forming a MAC, and to negotiate an encryption algorithm and cryptographic keys for protecting data sent in an SSL record before the application protocol transmits or receives its first byte of data.

A keyed hashing message authentication code (HMAC) is a secure digest of some protected data. Forging an HMAC is impossible without knowledge of the MAC secret. HMAC can be used with a variety of different hash algorithms, including MD5 and SHA-1, denoting these as HMAC-MD5 (secret, data) and HMAC-SHA-1 (secret, data). There are two differences between the SSLv3 scheme and the TLS MAC scheme: TLS makes use of the HMAC algorithm defined in RFC 2104, and the TLS master-secret computation is also different from that of SSLv3.
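The contrast between the SSLv3 MAC scheme and the TLS HMAC scheme can be sketched as follows; this simplified version omits the sequence number, record type, and length fields that the real record MAC also covers:

```python
import hashlib
import hmac

secret, data = b"mac-write-secret", b"ssl record fragment"

# SSLv3-style MAC (simplified): the pads are *concatenated* to the
# secret. For SHA-1 the pads are 40 bytes long.
pad1, pad2 = b"\x36" * 40, b"\x5c" * 40
inner = hashlib.sha1(secret + pad1 + data).digest()
ssl3_mac = hashlib.sha1(secret + pad2 + inner).hexdigest()

# TLS instead uses the HMAC construction of RFC 2104, which XORs the
# block-padded key with the ipad/opad constants before hashing.
tls_mac = hmac.new(secret, data, hashlib.sha1).hexdigest()

# Same secret, same data, different constructions: different tags.
assert ssl3_mac != tls_mac
```

This is the first of the two differences noted above; the second, the master-secret computation, lives in the key-derivation step rather than the record MAC.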

Chapter 10 describes e-mail security. PGP, invented by Philip Zimmermann, is widely used in both individual and commercial versions that run on a variety of platforms throughout the global computer community. PGP uses a combination of symmetric secret-key and asymmetric public-key encryption to provide security services for e-mail and data files. PGP also provides data integrity services for messages and data files using digital signatures, encryption, compression (ZIP), and radix-64 conversion (ASCII Armor). With growing reliance on e-mail and file storage, authentication and confidentiality services are becoming increasingly important. Multipurpose Internet Mail Extension (MIME) is an extension to the RFC 822 framework that defines a format for text messages sent using e-mail. MIME is actually intended to address some of the problems and limitations of the use of SMTP. S/MIME is a security enhancement to the MIME Internet e-mail format standard, based on the technology from RSA Data Security. Although both PGP and S/MIME are on an IETF standards track, it appears likely that PGP will remain the choice for personal e-mail security for many users, while S/MIME will emerge as the industry standard for commercial and organizational use. Both the PGP and S/MIME schemes are covered in this chapter.
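The compression and radix-64 stages of PGP processing map onto Python's standard zlib and base64 modules; encryption and signing are omitted in this sketch:

```python
import base64
import zlib

message = b"PGP-style processing: compress, then radix-64 encode."

# PGP compresses the message (ZIP/DEFLATE) before encryption, and
# radix-64 (base64) encodes the final binary result so it survives
# 7-bit e-mail transports (ASCII Armor).
compressed = zlib.compress(message)
armored = base64.b64encode(compressed)

# The receiver reverses both steps to recover the plaintext.
recovered = zlib.decompress(base64.b64decode(armored))
assert recovered == message
```

Compressing before encrypting matters: ciphertext is effectively random and would not compress, so the order of the stages is fixed.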

Chapter 11 discusses the topic of firewalls and intrusion detection systems (IDSs) as an effective means of protecting an internal system from Internet-based security threats: Internet worms, computer viruses, and special kinds of malware. An Internet worm is a standalone program that can replicate itself through the network to spread, so it does not need to attach itself to a host program. It degrades network performance by consuming bandwidth, increasing network traffic, or causing Denial of Service (DoS). The Morris, Blaster, Sasser, and Mydoom worms are some of the most notorious examples. A computer virus is a kind of malicious program that can damage the victim computer and spread itself to other computers; the word "virus" is used loosely for most malicious programs. There are special kinds of malware such as the Trojan horse, the botnet, and the key logger. A Trojan horse (or Trojan) is made to steal information by social engineering; the term is derived from Greek mythology. Like the Greek soldiers, the Trojan gives a cracker remote access while avoiding detection by the user. It looks like a useful or helpful program, or a legitimate access process, but it steals passwords, card numbers, or other useful information. Popular Trojan horses include Netbus, Back Orifice, and Zeus. A botnet is a set of zombie computers connected to the Internet. Each compromised zombie computer is called a bot, and the botmaster controls these bots through a C&C (Command & Control) server. A key logger monitors key inputs. Key loggers are of two types, software and hardware; this chapter is concerned with the software type only. It gets installed on the victim computer and logs all keystrokes; the logs are saved in files or sent to the attacker over the network. A key logger can capture key input at the kernel level, memory level, API level, packet level, and so on.

A firewall is a security gateway that controls access between the public Internet and a private internal network (or intranet). A firewall is an agent that screens network traffic in some way, blocking traffic it believes to be inappropriate, dangerous, or both. The security concerns that inevitably arise between the sometimes hostile Internet and secure intranets are often dealt with by inserting one or more firewalls on the path between the Internet and the internal network. In reality, Internet access provides benefits to individual users, government agencies, and most organizations. But this access often creates a security threat. Firewalls act as an intermediate server in handling SMTP and HTTP connections in either direction. Firewalls also require the use of an access negotiation and encapsulation protocol such as SOCKS to gain access to the Internet, intranet, or both. Many firewalls support tri-homing, allowing the use of a DMZ network. To design and configure a firewall, one needs to be familiar with some basic terminology such as a bastion host, proxy server, SOCKS, choke point, DMZ, logging and alarming, and VPN. Firewalls are classified into three main categories: packet filters, circuit-level gateways, and application-level gateways. In this chapter, each of these firewalls is examined in turn. Finally, this chapter discusses screened host firewalls and how to implement a firewall strategy. To provide a certain level of security, the three basic firewall designs are considered: a single-homed bastion host, a dual-homed bastion host, and a screened subnet firewall.
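A packet filter, the simplest of the three firewall categories, can be sketched as an ordered rule list with a default-deny policy; the addresses and rules below are made-up examples:

```python
# Toy packet filter: rules are evaluated in order, the first match
# wins, and anything unmatched is dropped (default deny). Real filters
# also match on destination address, protocol, and TCP flags.
RULES = [
    ("10.1.", 25, "allow"),   # internal subnet may reach SMTP
    ("",      80, "allow"),   # any source may reach HTTP
    ("",      23, "deny"),    # Telnet is blocked explicitly
]

def filter_packet(src_ip, dst_port):
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action
    return "deny"             # default deny: fail closed

assert filter_packet("10.1.2.3", 25) == "allow"
assert filter_packet("203.0.113.9", 25) == "deny"   # outsider to SMTP
assert filter_packet("203.0.113.9", 80) == "allow"
```

The default-deny fall-through is the design point: a choke point should fail closed, so anything the administrator did not explicitly permit is blocked.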

An IDS is a device or software application that monitors network or system activities for malicious activity or policy violations and produces reports to a management station. IDSs are primarily focused on identifying possible incidents, logging information about them, and reporting intrusion attempts. In addition, organizations use IDSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies. IDSs have become a necessary addition to the security infrastructure of nearly every organization. This chapter presents a survey and comparison of various IDSs, including Internet worm/virus detection.

IDSs are categorized as Network-Based Intrusion Detection System (NIDS), Wireless Intrusion Detection System (WIDS), Network Behavior Analysis System (NBAS), Host-Based Intrusion Detection System (HIDS), signature-based systems, and anomaly-based systems. An NIDS monitors network traffic for particular network segments or devices and analyzes network, transport, and application protocols to identify suspicious activity.

NIDSs typically perform most of their analysis at the application layer, such as HTTP, DNS, FTP, SMTP, and SNMP. They also analyze activity at the transport and network layers both to identify attacks at those layers and to facilitate the analysis of the application layer activity (e.g., a TCP port number may indicate which application is being used). Some NIDSs also perform limited analysis at the hardware layer. A WIDS monitors wireless network traffic and analyzes wireless networking protocols to identify suspicious activity. The typical components in a WIDS are the same as in an NIDS: consoles, database servers (optional), management servers, and sensors. However, unlike an NIDS sensor, which can see all packets on the networks it monitors, a WIDS sensor works by sampling traffic because it can only monitor a single channel at a time. An NBAS examines network traffic or statistics on network traffic to identify unusual traffic flows. NBA solutions usually have sensors and consoles, with some products also offering management servers. Some sensors are similar to NIDS sensors in that they sniff packets to monitor network activity on one or a few network segments. Other NBA sensors do not monitor the networks directly, and instead rely on network flow information provided by routers and other networking devices. An HIDS monitors the characteristics of a single host and the events occurring within that host for suspicious activity. Examples of the types of characteristics an HIDS might monitor are wired and wireless network traffic, system logs, running processes, file access and modification, and system and application configuration changes. Most HIDSs have detection software known as agents installed on the hosts of interest. Each agent monitors activity on a single host and, if prevention capabilities are enabled, also performs prevention actions. The agents transmit data to management servers. 
Each agent is typically designed to protect a server, a desktop or laptop, or an application service. A signature-based IDS is based on pattern-matching techniques. The IDS contains a database of patterns: some signatures are publicly known, for example, those distributed with Snort (http://www.snort.org/), and others are developed by signature-based IDS vendors. Using a database of known signatures is much like antivirus software. The IDS tries to match these signatures against the analyzed data, and if a match is found, an alert is raised. An anomaly-based IDS is a system for detecting computer intrusions and misuse by monitoring system activity and classifying it as either normal or anomalous. The classification is based on heuristics or rules, rather than patterns or signatures, and will detect any type of misuse that falls outside normal system operation. This is in contrast to signature-based systems, which can only detect attacks for which a signature has previously been created.
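Signature-based matching can be sketched as a scan of payload bytes against a pattern database; the signatures below are invented for illustration and are not real Snort rules:

```python
# Sketch of signature-based detection: scan payloads against a database
# of byte-pattern signatures and raise an alert on any match. Real
# engines use efficient multi-pattern matching (e.g. Aho-Corasick),
# not a linear scan.
SIGNATURES = {
    "toy-worm-A": b"\x90\x90\x90\x90\xeb\x1f",   # NOP-sled fragment
    "toy-bot-beacon": b"JOIN #botnet",           # IRC C&C check-in
}

def scan(payload):
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in payload]

alerts = scan(b"PRIVMSG ... JOIN #botnet ...")
print(alerts)   # ['toy-bot-beacon']
```

The limitation discussed above is visible here: a payload matching no stored pattern raises no alert, which is exactly the gap anomaly-based systems aim to close.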

Chapter 12 covers the SET protocol designed for protecting credit card transactions over the Internet. The recent explosion in e-commerce has created huge opportunities for consumers, retailers, and financial institutions alike. SET relies on cryptography and X.509 v3 digital certificates to ensure message confidentiality, payment integrity, and identity authentication. Using SET, consumers and merchants are protected by ensuring that payment information is safe and can only be accessed by the intended recipient. SET combats the risk of transaction information being altered in transit by keeping information securely encrypted at all times and by using digital certificates to verify the identity of those accessing payment details. SET is the only Internet transaction protocol to provide security through authentication. Message data is encrypted with a random symmetric key that is then encrypted using the recipient's public key. The encrypted message, along with this digital envelope, is sent to the recipient. The recipient decrypts the digital envelope with a private key and then uses the symmetric key to recover the original message. SET addresses the anonymity of Internet shopping by using digital signatures and digital certificates to authenticate the banking relationships of cardholders and merchants. The process of ensuring secure payment card transactions on the Internet is fully explored in this chapter.
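The dual signature's hash linkage can be sketched with SHA-1, the hash SET specifies; the PI and OI strings are made-up examples, and signing the final digest with the cardholder's private key is omitted:

```python
import hashlib

# SET dual signature sketch: link the payment instructions (PI) and the
# order information (OI) without revealing one to the holder of the
# other. The merchant sees OI but only a digest of PI; the bank sees PI
# but only a digest of OI.
PI = b"card=4111...;amount=49.95"            # payment instructions (example)
OI = b"order=books;item=wireless-security"   # order information (example)

pimd = hashlib.sha1(PI).digest()             # payment instructions digest
oimd = hashlib.sha1(OI).digest()             # order information digest
pomd = hashlib.sha1(pimd + oimd).digest()    # payment-order digest, the
                                             # value the cardholder signs

# The merchant, holding OI and PIMD, verifies the linkage by recomputing
# H(PIMD || H(OI)) and checking it against the signed digest.
assert hashlib.sha1(pimd + hashlib.sha1(OI).digest()).digest() == pomd
```

Because `pomd` commits to both digests, neither party can swap in a different order or payment after the fact without breaking the signature check.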

Chapter 13 deals with 4G Wireless Internet Communications Technology including Mobile WiMAX, WiBro, UMB, and LTE. WiMAX is a wireless communications standard designed to provide high-speed data communications for fixed and mobile stations. WiMAX far surpasses the 30-m wireless range of a conventional Wi-Fi LAN, offering a metropolitan area network with a signal radius of about 50 km. The name WiMAX was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis of future revisions such as 802.16m-2011.

WiBro is a wireless broadband Internet technology developed by the South Korean telecoms industry. WiBro is the South Korean service name for the IEEE 802.16e (mobile WiMAX) international standard. WiBro adopts TDD for duplexing, OFDMA for multiple access, and 8.75/10.00 MHz as a channel bandwidth. WiBro was devised to overcome the data rate limitation of mobile phones (for example, CDMA 1x) and to add mobility to broadband Internet access (for example, ADSL or WLAN). WiBro base stations will offer an aggregate data throughput of 30–50 Mbps per carrier and cover a radius of 1–5 km, allowing for portable Internet usage.

UMB was the brand name for a project within 3GPP2 (3rd Generation Partnership Project 2) to improve the CDMA2000 mobile phone standard for next-generation applications and requirements. In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead. Like LTE, the UMB system was to be based on Internet (TCP/IP) networking technologies running over a next-generation radio system, with peak rates of up to 280 Mbps. Its designers intended for the system to be more efficient and capable of providing more services than the technologies it was intended to replace. To provide compatibility with the systems it was intended to replace, UMB was to support handoffs with other technologies including existing CDMA2000 1x and 1xEV-DO systems. However, 3GPP added this functionality to LTE, allowing LTE to become the single upgrade path for all wireless networks. No carrier had announced plans to adopt UMB, and most CDMA carriers in Australia, the United States, Canada, China, Japan, and South Korea have already announced plans to adopt either WiMAX or LTE as their 4G technology.

LTE, marketed as 4G LTE, is a standard for wireless communication of high-speed data for mobile phones and data terminals. It is based on the GSM/EDGE and UMTS/HSPA network technologies, increasing the capacity and speed using new modulation techniques. The standard is developed by the 3GPP. The world's first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on 14 December 2009. LTE is the natural upgrade path for carriers with GSM/UMTS networks, but even CDMA holdouts such as Verizon Wireless, which launched the first large-scale LTE network in North America in 2010, and au by KDDI in Japan have announced they will migrate to LTE. LTE is, therefore, anticipated to become the first truly global mobile phone standard, although the use of different frequency bands in different countries will mean that only multiband phones will be able to utilize LTE in all countries where it is supported.

The scope of this book is adequate to span a one- or two-semester course at a senior or first-year graduate level. As a reference book, it will be useful to computer engineers, communications engineers, and system engineers. It is also suitable for self-study. The book is intended for use in both academic and professional circles, and it is also suitable for corporate training programs or seminars for industrial organizations as well as in research institutes. At the end of the book, there is a list of frequently used acronyms and a bibliography section.

About the Author

Man Young Rhee is an Endowed Chair Professor at the Kyung Hee University and has over 50 years of research and teaching experience in the field of communication technologies, coding theory, cryptography, and information security. His career in academia includes professorships at the Hanyang University (he also held the position of Vice President at this university), the Virginia Polytechnic Institute and State University, the Seoul National University, and the University of Tokyo. Dr. Rhee has held a number of high-level positions in both government and corporate sectors: President of Samsung Semiconductor Communications (currently, Samsung Electronics), President of Korea Telecommunications Company, Chairman of the Korea Information Security Agency at the Ministry of Information and Communication, President of the Korea Institute of Information Security & Cryptology (founding President), and Vice President of the Agency for Defense Development at the Ministry of National Defense. He is a Member of the National Academy of Sciences, a Senior Fellow at the Korea Academy of Science and Technology, and an Honorary Member of the National Academy of Engineering of Korea. His awards include the “Dongbaek” Order of National Service Merit and the “Mugunghwa” Order of National Service Merit, the highest grade honor for a scientist in Korea; the NAS Prize, the National Academy of Sciences; the NAEK Grand Prize, the National Academy of Engineering of Korea; and Information Security Grand Prize, KIISC. Dr. Rhee is the author of six books: Error Correcting Coding Theory (McGraw-Hill, 1989), Cryptography and Secure Communications (McGraw-Hill, 1994), CDMA Cellular Mobile Communications and Network Security (Prentice Hall, 1998), Internet Security (John Wiley, 2003), Mobile Communication Systems and Security (John Wiley, 2009), and Wireless Mobile Internet Security, Second Edition (John Wiley, 2013). Dr. Rhee has a B.S. 
in Electrical Engineering from the Seoul National University, as well as an M.S. in Electrical Engineering and a Ph.D. from the University of Colorado.

Acknowledgments

This book is the outgrowth of my teaching and research efforts in information security over the past 20 years at the Seoul National University and the Kyung Hee University. I thank all my graduate students, even if not by name. Special thanks go to Yoon Il Choi, Ho Cheol Lee, Ju Young Kim, and others of Samsung Electronics for collecting materials related to this book. Finally, I am grateful to my son Dr. Frank Chung-Hoon Rhee for editing and organizing the manuscript during my illness throughout the production process.

Chapter 1

Internetworking and Layered Models

The Internet today is a widespread information infrastructure, but it is inherently an insecure channel for sending messages. When a message (or packet) is sent from one web site to another, the data contained in the message are routed through a number of intermediate sites before reaching their destination. The Internet was designed to accommodate heterogeneous platforms so that people who are using different computers and operating systems can communicate. The history of the Internet is complex and involves many aspects—technological, organizational, and community. The Internet concept has been a big step along the path toward electronic commerce, information acquisition, and community operations.

Early ARPANET researchers accomplished the initial demonstrations of packet-switching technology. In the late 1970s, the growth of the Internet, and with it the growth of the interested research community, was accompanied by an increased need for a coordination mechanism. The Defense Advanced Research Projects Agency (DARPA) then formed an International Cooperation Board (ICB) to coordinate activities with some European countries centered on packet satellite research, while the Internet Configuration Control Board (ICCB) assisted DARPA in managing Internet activity. In 1983, DARPA recognized that the continuing growth of the Internet community demanded a restructuring of coordination mechanisms. The ICCB was disbanded and in its place the Internet Activities Board (IAB) was formed from the chairs of the Task Forces. The IAB revitalized the Internet Engineering Task Force (IETF) as a member of the IAB. By 1985, there was a tremendous growth in the more practical engineering side of the Internet. This growth resulted in the creation of a substructure to the IETF in the form of working groups. DARPA was no longer the major player in the funding of the Internet. Since then, there has been a significant decrease in Internet activity at DARPA. The IAB recognized the increasing importance of the IETF, and restructured to recognize the Internet Engineering Steering Group (IESG) as the major standards review body. The IAB also restructured to create the Internet Research Task Force (IRTF) along with the IETF.

Since the early 1980s, the Internet has grown beyond its primarily research roots, to include both a broad user community and increased commercial activity. This growth in the commercial sector brought increasing concern regarding the standards process. Increased attention was paid to making progress, eventually leading to the formation of the Internet Society in 1991. In 1992, the Internet Activities Board was reorganized and renamed the Internet Architecture Board (IAB) operating under the auspices of the Internet Society. The mutually supportive relationship between the new IAB, IESG, and IETF led to them taking more responsibility for the approval of standards, along with the provision of services and other measures which would facilitate the work of the IETF.

1.1 Networking Technology

Data signals are transmitted from one device to another using one or more types of transmission media, including twisted-pair cable, coaxial cable, and fiber-optic cable. A message to be transmitted is the basic unit of network communications. A message may consist of one or more cells, frames, or packets which are the elemental units for network communications. Networking technology includes everything from local area networks (LANs) in a limited geographic area such as a single building, department, or campus to wide area networks (WANs) over large geographical areas that may comprise a country, a continent, or even the whole world.

1.1.1 Local Area Networks (LANs)

A LAN is a communication system that allows a number of independent devices to communicate directly with each other in a limited geographic area such as a single office building, a warehouse, or a campus. LANs are standardized by three architectural structures: Ethernet, token ring, and fiber distributed data interface (FDDI).

Ethernet

Ethernet is a LAN standard originally developed by Xerox and later extended by a joint venture between Digital Equipment Corporation (DEC), Intel Corporation, and Xerox. The access mechanism used in an Ethernet is called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). In CSMA/CD, before a station transmits data, it must check whether any other station is currently using the medium. If no other station is transmitting, the station can send its data. If two or more stations send data at the same time, a collision may result. Therefore, all stations should continuously monitor the medium to detect any collision. If a collision occurs, all stations ignore the data received. The sending stations wait for a period of time before resending the data. To reduce the possibility of a second collision, each sending station individually generates a random number that determines how long it should wait before resending the data.
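The randomized wait after a collision follows, in classic Ethernet, a truncated binary exponential backoff. A minimal sketch (the 51.2 µs slot time is the 10 Mbps Ethernet value, used here purely as an illustrative default):

```python
import random

def backoff_delay(collision_count: int, slot_time_us: float = 51.2) -> float:
    """Truncated binary exponential backoff, as used by Ethernet CSMA/CD.

    After the n-th successive collision, a station waits a random number
    of slot times drawn from the range 0 .. 2**min(n, 10) - 1.
    """
    k = min(collision_count, 10)
    slots = random.randrange(2 ** k)    # each station draws independently
    return slots * slot_time_us

# After the first collision each station waits 0 or 1 slot times;
# with every further collision the range (and expected wait) doubles.
for n in (1, 2, 3):
    delay = backoff_delay(n)
    assert 0 <= delay <= (2 ** n - 1) * 51.2
```

Because every colliding station draws its own random slot count, the chance that two stations retransmit simultaneously again shrinks with each retry.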

Token Ring

Token ring, a LAN standard originally developed by IBM, uses a logical ring topology. The access method used by CSMA/CD may result in collisions, so stations may attempt to send data many times before a transmission succeeds in capturing the link. This contention can create delays of indeterminable length if traffic is heavy. There is no way to predict either the occurrence of collisions or the delays produced by multiple stations attempting to capture the link at the same time. Token ring resolves this uncertainty by making stations take turns in sending data.

As an access method, the token is passed from station to station in sequence until it encounters a station with data to send. That station captures the token and sends its data frame. This data frame proceeds around the ring and each station regenerates the frame. Each intermediate station examines the destination address, finds that the frame is addressed to another station, and relays it to its neighboring station. The intended recipient recognizes its own address, copies the message, checks for errors, and changes four bits in the last byte of the frame to indicate that the address has been recognized and the frame copied. The full frame then continues around the ring until it returns to the station that sent it.
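The frame's trip around the ring can be sketched as a small simulation. The station and frame structures below are invented for illustration; the point is that every station relays the frame, the addressed station additionally copies it and sets the recognized/copied indicators, and the frame returns to its sender.

```python
def circulate(ring, src, frame):
    """Pass a frame around a logical ring, starting after the sender."""
    n = len(ring)
    i = (src + 1) % n
    while i != src:
        station = ring[i]
        if station["addr"] == frame["dest"]:
            station["inbox"].append(frame["data"])   # copy the message
            frame["addr_recognized"] = True          # flag: address recognized
            frame["frame_copied"] = True             # flag: frame copied
        i = (i + 1) % n                              # relay to the next neighbor
    return frame                                     # frame is back at the sender

ring = [{"addr": a, "inbox": []} for a in "ABCD"]
frame = {"dest": "C", "data": "hello",
         "addr_recognized": False, "frame_copied": False}
done = circulate(ring, src=0, frame=frame)           # station A sends to C
assert ring[2]["inbox"] == ["hello"] and done["frame_copied"]
```

When the frame returns, the sender can inspect the two indicator flags to learn whether delivery succeeded, mirroring the four status bits described above.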

Fiber Distributed Data Interface (FDDI)

FDDI is a LAN protocol standardized by the ANSI (American National Standards Institute) and the ITU-T. It supports data rates of 100 Mbps and provides a high-speed alternative to Ethernet and token ring. When FDDI was designed, the data rate of 100 Mbps required fiber-optic cable.

The access method in FDDI is also called token passing. In a token ring network, a station can send only one frame each time it captures the token. In FDDI, the token passing mechanism is slightly different in that access is limited by time. Each station keeps a timer which shows when the token should leave the station. If a station receives the token earlier than the designated time, it can keep the token and send data until the scheduled leaving time. On the other hand, if a station receives the token at the designated time or later than this time, it should let the token pass to the next station and wait for its next turn.
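The timed-token rule can be sketched as a small helper; the function name and time units are illustrative, not FDDI's actual timer terminology.

```python
def may_transmit(arrival_time: float, target_token_time: float) -> float:
    """FDDI-style timed-token rule.

    If the token arrives early, the station may transmit for the unused
    time; if it arrives at or after the target time, the station must
    pass the token on immediately (returns 0.0).
    """
    early = target_token_time - arrival_time
    return max(0.0, early)

assert may_transmit(arrival_time=6.0, target_token_time=10.0) == 4.0   # 4 units to send
assert may_transmit(arrival_time=10.0, target_token_time=10.0) == 0.0  # pass token on
```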

FDDI is implemented as a dual ring. In most cases, data transmission is confined to the primary ring. The secondary ring is provided in case of the primary ring's failure. When a problem occurs on the primary ring, the secondary ring can be activated to complete data circuits and maintain service.

1.1.2 Wide Area Networks (WANs)

A WAN provides long-distance transmission of data, voice, image, and video information over large geographical areas that may comprise a country, a continent, or even the world. In contrast to LANs (which depend on their own hardware for transmission), WANs can utilize public, leased, or private communication devices, usually in combination.

PPP

The Point-to-Point Protocol (PPP) is designed to handle the transfer of data using either asynchronous modem links or high-speed synchronous leased lines. The PPP frame uses the following format:

Flag field.

Each frame starts with a 1-byte flag whose value is 7E (0111 1110). The flag is used for synchronization at the bit level between the sender and receiver.

Address field.

This field has the value of FF (1111 1111).

Control field.

This field has the value of 03 (0000 0011).

Protocol field.

This is a 2-byte field whose value is 0021 (0000 0000 0010 0001) for TCP/IP (Transmission Control Protocol/Internet Protocol).

Data field.

The data field ranges up to 1500 bytes.

CRC.

This is a 2-byte cyclic redundancy check (CRC). CRC is implemented in the physical layer for use in the data link layer. A sequence of redundant bits (CRC) is appended to the end of a data unit so that the resulting data unit becomes exactly divisible by a predetermined binary number. At its destination, the incoming data unit is divided by the same number. If there is no remainder, the data unit is accepted. If a remainder exists, the data unit has been damaged in transit and therefore must be rejected.
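The frame layout and the CRC divisibility check can be sketched together. The plain long-division CRC below uses the CRC-16/CCITT generator polynomial as an illustrative choice; it demonstrates the "exactly divisible" principle described above, while the actual PPP frame check sequence is computed with a bit-reflected variant of this arithmetic.

```python
def mod2_remainder(value: int, nbits: int, poly: int = 0x11021) -> int:
    """Binary long division by the generator polynomial (XOR arithmetic)."""
    for i in range(nbits - 1, 15, -1):
        if value & (1 << i):
            value ^= poly << (i - 16)   # align the divisor's top bit with bit i
    return value                        # what is left is the 16-bit remainder

def crc16(data: bytes) -> int:
    # Append 16 zero bits, divide, and keep the remainder as the CRC.
    v = int.from_bytes(data, "big") << 16
    return mod2_remainder(v, len(data) * 8 + 16)

# Build a PPP frame: flag 7E, address FF, control 03, protocol 0021, data, CRC.
payload = b"hello, PPP"                  # stand-in for an IP datagram
body = bytes([0xFF, 0x03]) + (0x0021).to_bytes(2, "big") + payload
frame = bytes([0x7E]) + body + crc16(body).to_bytes(2, "big")

# Receiver: the body plus its CRC must divide exactly (zero remainder).
received = frame[1:]
assert mod2_remainder(int.from_bytes(received, "big"), len(received) * 8) == 0
```

Appending the remainder makes the whole unit divisible by the generator, so any nonzero remainder at the destination signals damage in transit.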

X.25

X.25, developed by the ITU-T in 1976, is a widely used packet-switching protocol for WANs. X.25 is an interface between data terminal equipment and data circuit-terminating equipment for terminal operation in packet mode on a public data network.

X.25 defines how a packet mode terminal can be connected to a packet network for the exchange of data. It describes the procedures necessary for establishing connection, data exchange, acknowledgment, flow control, and data control.

Frame Relay

Frame relay is a WAN protocol designed in response to X.25 deficiencies. X.25 provides extensive error-checking and flow control. Packets are checked for accuracy at each station to which they are routed. Each station keeps a copy of the original frame until it receives confirmation from the next station that the frame has arrived intact. Such station-to-station checking is implemented at the data link layer of the OSI model; X.25 also checks for errors from source to receiver at the network layer. The source keeps a copy of the original packet until it receives confirmation from the final destination. Much of the traffic on an X.25 network is devoted to error-checking to ensure reliability of service. Frame relay does not provide error-checking or require acknowledgment in the data link layer. Instead, all error-checking is left to the protocols at the network and transport layers, which use the frame relay service. Frame relay operates only at the physical and data link layers.

Asynchronous Transfer Mode (ATM)

Asynchronous transfer mode (ATM) is a revolutionary idea for restructuring the infrastructure of data communication. It is designed to support the transmission of data, voice, and video through a high-data-rate transmission medium such as fiber-optic cable. ATM is a protocol for transferring cells. A cell is a small data unit, 53 bytes long, made of a 5-byte header and a 48-byte payload. The header contains a virtual path identifier (VPI) and a virtual channel identifier (VCI). These two identifiers are used to route the cell through the network to the final destination.

An ATM network is a connection-oriented cell switching network. This means that the unit of data is not a packet as in a packet-switching network, or a frame as in frame relay, but a cell. However, ATM, like X.25 and frame relay, is a connection-oriented network, which means that before two systems can communicate, they must make a connection. To start up a connection, a system uses a 20-byte address. After the connection is established, the combination of VPI/VCI leads a cell from its source to its final destination.
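Packing a cell and recovering its routing identifiers can be sketched as follows. The bit layout assumed here is the ATM UNI header (GFC, VPI, VCI, PT/CLP, HEC); the HEC byte is left uncomputed for brevity.

```python
def make_cell(vpi: int, vci: int, payload: bytes) -> bytes:
    """Pack a 53-byte ATM cell (UNI header layout; HEC left as zero here)."""
    assert len(payload) == 48, "ATM payload is always 48 bytes"
    header = (
        (0 << 28) | (vpi << 20) | (vci << 4) | 0    # GFC, VPI, VCI, PT/CLP
    ).to_bytes(4, "big") + b"\x00"                  # HEC byte (not computed)
    return header + payload

cell = make_cell(vpi=5, vci=100, payload=b"\x00" * 48)
assert len(cell) == 53

# A switch recovers VPI/VCI from the header to route the cell onward.
hdr = int.from_bytes(cell[:4], "big")
assert (hdr >> 20) & 0xFF == 5 and (hdr >> 4) & 0xFFFF == 100
```

The fixed 53-byte size is what lets ATM switches route cells in hardware at high speed: only the 5-byte header needs to be examined.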

1.2 Connecting Devices

Connecting devices are used to connect the segments of a network together or to connect networks to create an internetwork. These devices are classified into five categories: switches, repeaters, bridges, routers, and gateways. Each of these devices except switches interacts with protocols at different layers of the OSI model.

Repeaters forward all electrical signals and are active only at the physical layer. Bridges store and forward complete packets and affect the flow control of a single LAN. Bridges are active at the physical and data link layers. Routers provide links between two separate LANs and are active in the physical, data link, and network layers. Finally, gateways provide translation services between incompatible LANs or applications, and are active in all layers.

Connecting devices that interact with protocols at different layers of the OSI model are shown in Figure 1.1.

Figure 1.1 Connecting devices.

1.2.1 Switches

A switched network consists of a series of interlinked switches. Switches are hardware/software devices capable of creating temporary connections between two or more devices linked to the switch but not to each other. Switching mechanisms are generally classified into three methods: circuit switching, packet switching, and message switching.

Circuit switching creates a direct physical connection between two devices such as telephones or computers. Once a connection is made between two systems, circuit switching creates a dedicated path between two end users. The end users can use the path for as long as they want.

Packet switching is one way to provide a reasonable solution for data transmission. In a packet-switched network, data are transmitted in discrete units of variable-length blocks called packets. Each packet contains not only data, but also a header with control information. The packets are sent over the network node to node. At each node, the packet is stored briefly before being routed according to the information in its header.

In the datagram approach to packet switching, each packet is treated independently of all others as though it exists alone. In the virtual circuit approach to packet switching, if a single route is chosen between sender and receiver at the beginning of the session, all packets travel one after another along that route. Although these two approaches seem the same, there is a fundamental difference between them. In circuit switching, the path between the two end users consists of only one channel. In the virtual circuit approach, the line is not dedicated to two users; it is divided into channels, and each virtual circuit uses one of the channels in a link.

Message switching is known as the store-and-forward method. In this approach, a computer (or a node) receives a message, stores it until the appropriate route is free, then sends it out. This method has now been phased out.

1.2.2 Repeaters

A repeater is an electronic device that operates on the physical layer only of the OSI model. A repeater boosts the transmission signal from one segment and continues the signal to another segment. Thus, a repeater allows us to extend the physical length of a network. Signals that carry information can travel a limited distance within a network before degradation of the data integrity due to noise. A repeater receives the signal before attenuation, regenerates the original bit pattern, and puts the restored copy back on to the link.

1.2.3 Bridges

Bridges operate in both the physical and the data link layers of the OSI model. A single bridge connects different types of networks together and promotes interconnectivity between networks. Bridges divide a large network into smaller segments. Unlike repeaters, bridges contain logic that allows them to keep separate the traffic for each segment. Bridges are smart enough to relay a frame toward the intended recipient so that traffic can be filtered. In fact, this filtering operation makes bridges useful for controlling congestion, isolating problem links, and promoting security through this partitioning of traffic.

A bridge can access the physical addresses of all stations connected to it. When a frame enters a bridge, the bridge not only regenerates the signal but also checks the address of the destination and forwards the new copy to the segment to which the address belongs. When a bridge encounters a frame, it reads the address contained in the frame and compares that address with a table of all the stations on both segments. When it finds a match, it discovers to which segment the station belongs and relays the frame to that segment only.
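The table lookup and filtering decision can be sketched as a minimal learning bridge. The learning and flooding behavior shown (recording each frame's source segment, flooding when the destination is unknown) is the standard transparent-bridge approach, which goes slightly beyond the static table the text describes.

```python
class LearningBridge:
    """Minimal transparent-bridge sketch: learn which segment each source
    address lives on, then relay frames only toward the destination's segment."""

    def __init__(self):
        self.table = {}                             # station address -> segment

    def receive(self, src, dst, in_segment):
        self.table[src] = in_segment                # learn where src lives
        out = self.table.get(dst)
        if out is None:
            return "flood"                          # unknown: send to all other segments
        if out == in_segment:
            return "filter"                         # same segment: do not relay
        return out                                  # relay to the destination's segment

bridge = LearningBridge()
assert bridge.receive("A", "B", 1) == "flood"       # B not yet known
bridge.receive("B", "A", 2)                         # B replies; learned on segment 2
assert bridge.receive("A", "B", 1) == 2             # now relayed to segment 2 only
```

The "filter" case is what keeps local traffic off the other segments, controlling congestion and isolating problem links as described above.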

1.2.4 Routers

Routers operate in the physical, data link, and network layers of the OSI model. The Internet is a combination of networks connected by routers. When a datagram goes from a source to a destination, it will probably pass through many routers until it reaches the router attached to the destination network. Routers determine the path a packet should take. Routers relay packets among multiple interconnected networks. In particular, an IP router forwards IP datagrams among the networks to which it connects. A router uses the destination address on a datagram to choose a next-hop to which it forwards the datagram. A packet sent from a station on one network to a station on a neighboring network goes first to a jointly held router, which switches it over to the destination network. In fact, the easiest way to build the Internet is to connect two or more networks with a router. Routers provide connections to many different types of physical networks: Ethernet, token ring, point-to-point links, FDDI, and so on.

The routing module receives an IP packet from the processing module. If the packet is to be forwarded, the routing module finds the IP address of the next station, along with the interface number from which the packet should be sent. It then sends the packet with this information to the fragmentation module. The fragmentation module consults the MTU table to find the maximum transfer unit (MTU) for the specific interface number.

The routing table is used by the routing module to determine the next-hop address of the packet. Every router keeps a routing table that has one entry for each destination network. The entry consists of the destination network IP address, the shortest distance to reach the destination in hop count, and the next router (next-hop) to which the packet should be delivered to reach its final destination. The hop count is the number of networks a packet enters to reach its final destination. A router should have a routing table to consult when a packet is ready to be forwarded. The routing table should specify the optimum path for the packet. The table can be either static or dynamic. A static table is one that is not changed frequently, but a dynamic table is one that is updated automatically when there is a change somewhere in the Internet. Today, the Internet needs dynamic routing tables.
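A static routing table and its next-hop lookup can be sketched with the standard library's ipaddress module. The destination networks, hop counts, and next-hop addresses below are invented for illustration.

```python
import ipaddress

# Each entry: (destination network, hop count, next-hop router).
table = [
    (ipaddress.ip_network("10.0.0.0/8"), 3, "193.14.5.10"),
    (ipaddress.ip_network("172.16.0.0/16"), 2, "193.14.5.11"),
]

def route(dest: str):
    """Return the next-hop for a destination address, or None if no entry matches."""
    addr = ipaddress.ip_address(dest)
    for net, hops, via in table:
        if addr in net:                 # does the address fall in this network?
            return via
    return None                         # no entry: drop, or use a default route

assert route("10.1.2.3") == "193.14.5.10"
assert route("8.8.8.8") is None
```

A dynamic table would have the same lookup but with entries updated by a routing protocol rather than fixed in advance.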

A metric is a cost assigned for passing through a network. The total metric of a particular route is equal to the sum of the metrics of the networks that comprise the route. A router chooses the route with the shortest (smallest value) metric. The metric assigned to each network depends on the type of protocol. The Routing Information Protocol (RIP) treats each network as 1 hop count. So if a packet passes through 10 networks to reach the destination, the total cost is 10 hop counts. The Open Shortest Path First (OSPF) protocol allows the administrator to assign a cost for passing through a network based on the type of service required. A route through a network can have different metrics (costs). OSPF allows each router to have several routing tables based on the required type of service. The Border Gateway Protocol (BGP) defines the metric totally differently. The policy criterion in BGP is set by the administrator. The policy defines the paths that should be chosen.
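For RIP and OSPF, route selection reduces to summing per-network costs and picking the minimum; a small sketch with illustrative cost lists:

```python
def best_route(routes):
    """Pick the route with the smallest total metric (sum of per-network costs)."""
    return min(routes, key=lambda costs: sum(costs))

# RIP treats every network as one hop: crossing 10 networks costs 10.
assert sum([1] * 10) == 10

# OSPF lets the administrator assign per-network costs by type of service,
# so a route through fewer but cheaper networks can win.
assert best_route([[1] * 10, [1, 3, 4]]) == [1, 3, 4]
```

BGP does not fit this arithmetic model: its path choice is governed by administrator-set policy rather than a summed metric.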

1.2.5 Gateways

Gateways operate over the entire range in all seven layers of the OSI model. Internet routing devices have traditionally been called gateways. A gateway is a protocol converter which connects two or more heterogeneous systems and translates among them. The gateway thus refers to a device that performs protocol translation between devices. A gateway can accept a packet formatted for one protocol and convert it to a packet formatted for another protocol before forwarding it. The gateway understands the protocol used by each network linked into the router and is therefore able to translate from one to another.

1.3 The OSI Model

The Ethernet, originally called the Alto Aloha network, was designed by the Xerox Palo Alto Research Center in 1973 to provide communication for research and development CP/M computers. When in 1976 Xerox started to develop the Ethernet as a 20 Mbps product, the network prototype was called the Xerox Wire. In 1980, when the Digital, Intel, and Xerox standard was published to make it a LAN standard at 10 Mbps, Xerox Wire changed its name back to Ethernet. Ethernet became a commercial product in 1980 at 10 Mbps. The IEEE called its Ethernet 802.3 standard CSMA/CD. As the 802.3 standard evolved, it has acquired such names as Thicknet (IEEE 10Base-5), Thinnet or Cheapernet (10Base-2), Twisted Ethernet (10Base-T), and Fast Ethernet (100Base-T).

The design of Ethernet preceded the development of the seven-layer OSI model. The OSI model was developed and published in 1982 by the International Organization for Standardization (ISO) as a generic model for data communication. The OSI model is useful because it is a broadly based document, widely available and often referenced. Since modularity of communication functions is a key design criterion in the OSI model, vendors who adhere to the standards and guidelines of this model can supply Ethernet-compatible devices, alternative Ethernet channels, higher-performance Ethernet networks, and bridging protocols that easily and reliably connect other types of data network to Ethernet.

Since the OSI model was developed after Ethernet and Signaling System #7 (SS7), there are obviously some discrepancies between these three protocols. Yet the functions and processes outlined in the OSI model were already in practice when Ethernet or SS7 was developed. In fact, SS7 networks use point-to-point configurations between signaling points. Due to the point-to-point configurations and the nature of the transmissions, the simple data link layer does not require much complexity.

The OSI reference model specifies the seven layers of functionality, as shown in Figure 1.2. It defines the seven layers from the physical layer (which includes the network adapters), up to the application layer, where application programs can access network services. However, the OSI model does not define the protocols that implement the functions at each layer. The OSI model is still important for compatibility, protocol independence, and the future growth of network technology. Implementations of the OSI model stipulate communication between layers on two processors and an interface for interlayer communication on one processor. Physical communication occurs only at layer 1. All other layers communicate downward (or upward) to lower (or higher) levels in steps through protocol stacks.

Figure 1.2 ISO/OSI model.

The following briefly describes the seven layers of the OSI model:

1. Physical layer. The physical layer provides the interface with physical media. The interface itself is a mechanical connection from the device to the physical medium used to transmit the digital bit stream. The mechanical specifications do not specify the electrical characteristics of the interface, which will depend on the medium being used and the type of interface. This layer is responsible for converting the digital data into a bit stream for transmission over the network. The physical layer includes the method of connection used between the network cable and the network adapter, as well as the basic communication stream of data bits over the network cable. The physical layer is responsible for the conversion of the digital data into a bit stream for transmission when using a device such as a modem, and even light, as in fiber optics. For example, when using a modem, digital signals are converted into analog-audible tones which are then transmitted at varying frequencies over the telephone line. The OSI model does not specify the medium, only the operative functionality for a standardized communication protocol. The transmission media layer specifies the physical medium used in constructing the network, including size, thickness, and other characteristics.
2. Data link layer. The data link layer represents the basic communication link that exists between computers and is responsible for sending frames or packets of data without errors. The software in this layer manages transmissions, error acknowledgment, and recovery. Data units are mapped onto frames to provide physical error detection and notification, as well as activation/deactivation of a logical communication connection. Error control refers to mechanisms to detect and correct errors that occur in the transmission of data frames. Therefore, this layer includes error correction, so when a packet of data is received incorrectly, the data link layer makes the system send the data again. The data link layer is also defined in the IEEE 802.2 logical link control specifications.
Data link control protocols are designed to satisfy a wide variety of data link requirements: High-Level Data Link Control (HDLC), developed by the ISO (ISO 3309, ISO 4335); Advanced Data Communication Control Procedures (ADCCP), developed by the ANSI (ANSI X3.66); Link Access Procedure, Balanced (LAP-B), adopted by the CCITT as part of its X.25 packet-switched network standard; and Synchronous Data Link Control (SDLC), which is not a standard but is in widespread use. There is practically no difference between HDLC and ADCCP. Both LAP-B and SDLC are subsets of HDLC, but they include several additional features.
3. Network layer. The network layer is responsible for data transmission across networks. This layer handles the routing of data between computers. Routing requires some complex and crucial techniques for a packet-switched network design. To accomplish the routing of packets sent from a source and delivered to a destination, a path or route through the network must be selected. This layer translates logical network addressing into physical addresses and manages issues such as frame fragmentation and traffic control. The network layer examines the destination address and determines the link to be used to reach that destination. It is the borderline between hardware and software. At this layer, protocol mechanisms activate data routing by providing network address resolution, flow control in terms of segmentation, and blocking and collision control (Ethernet). The network layer also provides service selection, connection resets, and expedited data transfers. The IP runs at this layer.
The IP was originally designed simply to interconnect as many sites as possible without undue burdens on the type of hardware and software at different sites. To address the shortcomings of the IP and to provide a more reliable service, the TCP is stacked on top of the IP to provide end-to-end service. This combination is known as TCP/IP and is used by most Internet sites today to provide a reliable service.
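The stacking of TCP on top of IP amounts to successive encapsulation: application data is wrapped in a transport header, and the resulting segment is wrapped in a network header. The sketch below uses heavily simplified, invented header formats (real TCP and IP headers carry many more fields) purely to show the layering idea.

```python
import struct

def tcp_ip_encapsulate(app_data: bytes, src_port: int, dst_port: int) -> bytes:
    # Simplified transport-layer segment: source port, destination port, data.
    segment = struct.pack("!HH", src_port, dst_port) + app_data
    # Simplified network-layer packet: a total-length field, then the segment.
    return struct.pack("!H", len(segment) + 2) + segment

def tcp_ip_decapsulate(packet: bytes):
    # Peel the layers off in reverse order on the receiving side.
    total_len = struct.unpack("!H", packet[:2])[0]
    src_port, dst_port = struct.unpack("!HH", packet[2:6])
    return src_port, dst_port, packet[6:total_len]
```

Each layer thus treats everything handed down from the layer above as opaque payload, which is what allows TCP's end-to-end reliability to ride on IP's best-effort delivery.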
4. Transport layer. The transport layer is responsible for ensuring that messages are delivered error-free and in the correct sequence. This layer splits messages into smaller segments if necessary, and provides network traffic control of messages. Traffic control is a technique for ensuring that a source does not overwhelm a destination with data. When data is received, a certain amount of processing must take place before the buffer is clear and ready to receive more data. In the absence of flow control, the receiver's buffer may overflow while it is processing old data. The transport layer, therefore, controls data transfer and transmission. This software is TCP, common on most Ethernet networks, or Sequenced Packet Exchange (SPX), the corresponding Novell specification for data exchange. Today, most Internet sites use the TCP/IP protocol along with the Internet Control Message Protocol (ICMP) to provide a reliable service.
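The segmentation and in-order delivery described above can be sketched as follows: the sender numbers each segment, and the receiver uses those sequence numbers to restore the original order even if segments arrive shuffled. This is a minimal illustration with hypothetical function names, not the actual TCP mechanism (which also involves acknowledgments, retransmission timers, and a sliding window).

```python
def segment_message(message: bytes, max_size: int):
    # Sender side: split a message into numbered segments so the receiver
    # can reorder and reassemble them.
    return [(seq, message[i:i + max_size])
            for seq, i in enumerate(range(0, len(message), max_size))]

def reassemble(segments):
    # Receiver side: sort by sequence number to restore the original order.
    return b"".join(data for _, data in sorted(segments))

segments = segment_message(b"transport layer demo", 6)
```

Flow control then limits how many such segments may be outstanding at once, so the sender never overruns the receiver's buffer.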
5. Session layer. The session layer controls the network connections between the computers in the network. The session layer recognizes nodes on the LAN and sets up tables of source and destination addresses. It establishes a handshake for each session between different nodes. Technically, this layer is responsible for session connection (i.e., for creating, terminating, and maintaining network sessions), exception reporting, coordination of send/receive modes, and data exchange.
6. Presentation layer. The presentation layer is responsible for the data format, which includes the task of compressing the data to reduce the number of bits that will be transferred. This layer passes information between the application software and the network session layer of the operating system. The interface at this layer performs data transformations, data compression, data encryption, data formatting, syntax selection (i.e., ASCII, EBCDIC, or other numeric or graphic formats), and device selection and control. It actually translates data from the application layer into the format used when transmitting across the network. On the receiving end, this layer translates the data back into a format that the application layer can understand.
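Two of the transformations named above, character-set conversion and compression, can be sketched in a few lines. The example below converts text to EBCDIC (code page 500, one of the EBCDIC variants available in Python's standard codecs) and compresses it with zlib; the function names are hypothetical, and a real presentation layer would negotiate which syntax and compression scheme to use.

```python
import zlib

def present_for_transmission(text: str) -> bytes:
    # Syntax selection: convert the text to EBCDIC (code page 500),
    # then compress it to reduce the number of bits on the wire.
    ebcdic = text.encode("cp500")
    return zlib.compress(ebcdic)

def present_for_application(payload: bytes) -> str:
    # Receiving end: decompress, then decode back into text the
    # application layer can understand.
    return zlib.decompress(payload).decode("cp500")
```

Encryption would slot into the same pipeline, applied after compression on the sending side and removed first on the receiving side.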
7. Application layer. The application layer is the highest layer defined in the OSI model and is responsible for providing user-layer applications and network management functions. This layer supports identification of communicating partners, establishes authority to communicate, transfers information, and applies privacy mechanisms and cost allocations. It is usually a complex layer with a client/server, a distributed database, data replication, and synchronization. The application layer supports file services, print services, remote login, and e-mail. The application layer is the network system software that supports user-layer applications, such as word or data processing, CAD/CAM, document storage and retrieval, and image scanning.

1.4 TCP/IP Model

A protocol is a set of rules governing the way data will be transmitted and received over data communication networks. Protocols are thus the rules that determine everything about the way a network operates. Protocols must provide reliable, error-free communication of user data as well as a network management function. Therefore, protocols govern how applications access the network, the way that data from an application is divided into packets for transmission through the cable, and which electrical signals represent data on a network cable.

The OSI model, defined by a seven-layer architecture, is partitioned into a vertical set of layers, as illustrated in Figure 1.2.