CyberschuulNews.com

 
Talking IP
by
Segun Sorunke

A column that discusses Internet Protocol for all to appreciate

 

Introduction to Cloud Computing (Part 2)

Cloud architecture is the systems architecture of the software systems involved in the delivery of cloud computing (e.g. hardware and software) as designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services. This is not unlike the UNIX philosophy of having multiple programs doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.

We have focused on cloud service providers whose data centres are external to the users of the service (businesses or individuals). These clouds are known as public clouds—both the infrastructure and the control of these clouds are with the service provider. A variation on this scenario is the private cloud. Here, the cloud provider is responsible only for the infrastructure and not for the control. This setup is equivalent to a section of a shared data centre being partitioned for use by a specific customer. Note that a private cloud can offer SaaS, PaaS, or IaaS services, though IaaS might appear to be a more natural fit. An internal cloud is a relatively new term applied to cloud services provided by the IT department of an enterprise from the company’s own data centres. This setup might seem counterintuitive at first—why would a company run cloud services for its internal users when public clouds are available? Doesn’t this setup negate the advantages of elasticity and scalability by moving the service inside the enterprise?

 

Cloud Computing Logical Diagram

It turns out that the internal cloud model is very useful for enterprises. The biggest concerns for enterprises moving to an external cloud provider are security and control. CIOs are naturally cautious about moving their entire application infrastructure and data to an external cloud provider, especially when they have several person-years of investment in their applications and infrastructure as well as elaborate security safeguards around their data. However, the advantages of the cloud—resiliency, scalability, and workload migration—are useful to have in the company’s own data centres. IT can use per-usage billing to monitor individual business unit or department usage of IT resources and charge them back. Controlling server sprawl through virtualization and moving workloads to geographies and locations in the world with lower power and infrastructure costs are also of value in a cloud-computing environment. Internal clouds can provide all these benefits.

Architecture: Cloud architecture extends to the client, where web browsers and/or software applications are used to access cloud applications. Cloud storage architecture is loosely coupled, with metadata operations centralised so that the data nodes can scale independently.

 

Looking at the layers, we can see that the cloud client is the layer that interacts directly with the user, not unlike the application layer of the OSI model. Basically, the cloud client consists of computer hardware and/or software that relies on cloud computing for application delivery, or that is specifically designed for the delivery of cloud services, and that, in either case, is essentially useless without it.

 

The application layer is sub-divided into the user interface and the machine interface sub-layers. Here we find one of the basic benefits of cloud computing: negligible software maintenance costs. A cloud application leverages cloud computing in its software architecture, most often eliminating the need to install and run the application on the customer’s own computer.

 

In a Platform as a Service (PaaS) scenario, a cloud platform delivers a computing platform and/or solution stack as a service, generally consuming cloud infrastructure and supporting cloud applications. With PaaS, you transfer more control to your cloud service provider. Since the platform used to build the service you require can scale transparently without any involvement on your part beyond initial configuration, you save the cost and complexity of buying and managing the underlying hardware and software necessary for your deployment. You do not need to understand the tier connectivity, the bandwidth requirements, or how it all functions under the hood.

 

An Infrastructure as a Service (IaaS) provider offers you “raw” computing, storage, and network infrastructure onto which you load your own software, including operating systems and applications. This operation generally happens at the infrastructure level, where we see the sub-layer components of compute, network infrastructure, and storage services. IaaS generally gives the customer the greatest control of the three models (PaaS, IaaS, and SaaS), but you need to know the resource parameters for your system, and you also inherit responsibility for the scaling and elasticity of your network! Conversely, SaaS vendors retain the highest level of control over the service amongst the three models. The realisation of the network topology can be similar to existing data centres and can scale up or down according to the number of users that are added.

 

Finally, we have the server layer, which consists of computer hardware and/or software that are specifically designed for the delivery of cloud services.

 

Security: Perhaps the greatest deterrent keeping IT managers from venturing into cloud computing is the problem of security and loss of control. One concern is that cloud providers themselves may have access to customers’ unencrypted data, whether it is on disk, in memory, or transmitted over the network. Therefore, for the peace of mind of the IT manager, the cloud service provider’s security systems and procedures will need to be as good as, or better than, those the enterprise uses itself.

 

In addition, infrastructure and data isolation must be assured between the multiple tenants of the cloud service provider. Feeding into the fears of IT managers is the question of the authentication mechanism (“you are who you say you are”), which is required at both ends of the connection, at both the cloud user and the cloud service provider levels. Usually, the cloud service provider and the user must agree on schemes such as authentication with digital certificates and certificate authorities.

 

Cloud computing is at an early stage, with a motley crew of providers, large and small, delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Today, for the most part, IT managers must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.  

 

 

Introduction to Cloud Computing (Part One)

Once in a while, an idea or term comes along that everyone feels they know and are familiar with, but in actual fact have little knowledge of; such is the case with cloud computing. It has been an idea within the computing community for a while, and has various usages and methods; however, we shall attempt a definition of the term here that captures the essence of the technology.
 

Cloud computing, as defined by the US National Institute of Standards and Technology, “is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”. According to a paper published by the IEEE in 2008, “cloud computing is a paradigm in which information is permanently stored on servers on the Internet and cached temporarily on clients that include computers, laptops, handhelds, sensors, etc.”
 

Cloud computing is sometimes confused with grid computing (a form of distributed computing whereby a “super and virtual computer” is composed of a cluster of networked, loosely coupled computers working together to perform very large tasks), utility computing (the packaging of computing resources, such as computation and storage, as a metered service paid for in a manner similar to a traditional public utility such as electricity), and autonomic computing (computer systems capable of self-management). Many cloud computing systems today are powered by grids, have autonomic characteristics, and are billed like utilities, but cloud computing can be seen as a natural next step from the grid-utility model.
 

 

Cloud computing is a systems architecture model for Internet-based computing; the cloud is a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams. Cloud computing enables users to utilise services without knowledge of, expertise in, or control over the technology infrastructure that supports them. It is, almost literally, operating the service in a “cloud”! At this point, we need to distinguish cloud computing from Software as a Service (SaaS). SaaS is simply a software-enabled service that is offered on the web, on a month-to-month subscription or on a pay-per-use basis, rather than having to purchase or license the software. Technically, SaaS does not have to be offered in a cloud, but given the nature of the SaaS business model, it is hard to conceive that running it in an environment other than a functional utility or cloud makes much business sense in most instances. Most IT professionals see cloud computing as a natural evolution from SaaS. Cloud computing is a general concept that incorporates SaaS, Web 2.0, and other technology trends, all of which depend on the Internet to satisfy users' needs.
 

Unlike the fixed functions offered by SaaS, Platform as a Service (PaaS) provides a software platform on which users can build their own applications and host them on the PaaS provider’s infrastructure. The software platform is used as a development framework to build, debug, and deploy applications. It often provides middleware-style services such as database and component services for use by applications. PaaS is a true cloud model in that applications do not need to worry about the scalability of the underlying platform (hardware and software). When enterprises write their application to run over the PaaS provider’s software platform, the elasticity and scalability are guaranteed transparently by the PaaS platform.
 

One of the oldest iterations of cloud computing is the Managed Service Provider (MSP), which is basically an application exposed to IT rather than to end users, such as a virus-scanning service for email or an application-monitoring service. Managed security services fall into the category of MSP, as do cloud-based anti-spam services and desktop management services.

 

From an infrastructure perspective, cloud computing is very similar to hosted services—a model established several years ago. In hosted services, servers, storage, and networking infrastructure are shared across multiple tenants and over a remote connection with the ability to scale (although scaling is done manually by calling or e-mailing the hosting provider). Cloud computing is different in that it offers a pay-per-use model and rapid (and automatic) scaling up or down of resources along with workload migration. Interestingly, some analysts group all hosted services under cloud computing for their market numbers.

 

Cloud computing comes into focus when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software! Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT’s existing capabilities. The following is a list of characteristics of a cloud-computing environment. Not all characteristics may be present in a specific cloud solution.
 

·         Elasticity and scalability: Cloud computing gives you the ability to expand and reduce resources according to your specific service requirement. For example, you may need a large number of server resources for the duration of a specific task. You can then release these server resources after you complete your task.

·         Pay-per-use: You pay for cloud services only when you use them, either for the short term (for example, for CPU time) or for a longer duration (for example, for cloud-based storage or vault services).

·         On demand: Because you invoke cloud services only when you need them, they are not permanent parts of your IT infrastructure—a significant advantage for cloud use as opposed to internal IT services. With cloud services there is no need to have dedicated resources waiting to be used, as is the case with internal services.

·         Resiliency: The resiliency of a cloud service offering can completely isolate the failure of server and storage resources from cloud users. Work is migrated to a different physical resource in the cloud with or without user awareness and intervention.

·         Multitenancy:  Public cloud services providers often can host the cloud services for multiple users within the same infrastructure. Server and storage isolation may be physical or virtual - depending upon the specific user requirements.

·         Workload movement: This characteristic is related to resiliency and cost considerations. Here, cloud-computing providers can migrate workloads across servers - both inside the data center and across data centers (even in a different geographic area). This migration might be necessitated by cost (less expensive to run a workload in a data center in another country based on time of day or power requirements) or efficiency considerations (for example, network bandwidth). A third reason could be regulatory considerations for certain types of workloads.

 

A major attraction of cloud computing is the avoidance of capital expenditure (CapEx) on hardware, software, training, and services, as users pay a provider only for what they need and what they use. In actual fact, capital expenditure may easily be converted to operating expenditure (OpEx) in a thoughtfully implemented cloud computing scenario. Because multi-tenancy enables sharing of resources and costs across a large pool of users, there is the additional advantage of centralisation of infrastructure in locations with lower costs; additionally, there is device and location independence, as resources can be accessed from anywhere with an Internet connection. Reliability is also improved through the use of multiple redundant sites, which makes cloud computing suitable for business continuity and disaster recovery.
 

 

Of course, a major attraction of cloud computing is scalability; this is available through dynamic (on-demand) provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. This comes with security that is centralised and optimised for the needs of the client.
 

 

Supercomputers today are used mainly by the military, government intelligence services, universities and research labs, and large companies to tackle enormously complex calculations for such tasks as simulating nuclear explosions, predicting climate change, designing airplanes, and analysing which proteins in the body are likely to bind with potential new drugs. Cloud computing aims to provide that kind of power – measured in trillions of computations per second – to problems like analysing risks in financial portfolios, delivering personalised medical information, and mapping the human genome, in a way that users can tap into through the web. It does this by networking large groups of servers that often use low-cost consumer PC technology, with specialised connections to spread data-processing chores across them.
 

 

In our next paper, we will look at the architecture of cloud computing.
 

 

 

End-to-End (E2E) Security

 

In the last several years, the Internet has grown rapidly beyond servers, desktops and laptops to include handheld devices like PDAs and smart phones. There is now a growing realization that this trend will continue as increasing numbers of even simpler, more constrained devices (sensors, home appliances, personal medical devices) get connected to the Internet. With more users, more applications and more revenue depending on Web resources, it is more important than ever before to provide remote user access while protecting the enterprise's resources. Mission-critical Web resources often need centralized user administration and control delegation to be effective in today's enterprises with multiple administrative domains and quick response to market changes. However, hackers, crackers, internal attacks and business evolution will always be a fact of life and, as a result, security threats, leaks and lack of scale will constantly plague user access control solutions based on password lists, access control databases, and shared secrets.

 

The business problem of IT security is, however, worse than all the technical problems. Because current user access control solutions involve different components for authentication, authorization and administration (AAA), a solution can fail for any of these reasons. For example, a required upgrade of one component may no longer interoperate with another, which may alienate users and lead to lost business in addition to security breaches.

The continuous, onerous cycles of development and maintenance not only make IT security more expensive than it would look at first sight, but they also make IT security solutions less secure by increasing the number and extent of security gaps that may exist at any time. Security gaps represent the weak areas that could be attacked in an IT system.

In a broad generalization, two types of attacks can exploit security gaps: network and data attacks. A network attack tries to interfere with client and/or server systems that participate in a transaction, in terms of their communication processes. A network attack, for example, may try to gain or deny access, read files, or insert information or code that affects communication. On the other hand, a data attack tries to tamper with and/or read data in the files or messages, as they are stored or exchanged in a system, for example by inserting false data, by deleting or changing data or by reading the data. 

 

For IT professionals, security of the network is of prime importance, especially in these days of 24-hour Internet access over a plethora of connections with no global administration of any kind. In the field of computer security, end-to-end security starts on the client and ends on the server; rather than relying on transport-level security (such as Secure Sockets Layer (SSL)), end-to-end security puts the power of strong security in the hands of the end user. Secure Sockets Layer (SSL) is the most commonly used security protocol on the Internet today. It is built into many popular applications, including all well-known web browsers, and is widely trusted to secure sensitive transactions including online banking, stock trading and e-commerce. SSL combines public-key cryptography for key distribution and authentication with symmetric-key cryptography for data encryption and integrity. Public-key cryptography is widely believed to be beyond the capabilities of embedded devices. This perception is primarily driven by earlier experiments involving C-language implementations of RSA, today’s dominant public-key cryptosystem.

 

SSL offers encryption, source authentication and integrity protection for data and is flexible enough to accommodate different cryptographic algorithms for key agreement, encryption and hashing. The two main components of SSL are the Handshake protocol and the Record Layer protocol. The Handshake protocol allows an SSL client and server to negotiate a common cipher suite, authenticate each other, and establish a shared master secret using public-key algorithms. The Record Layer derives symmetric keys from the master secret and uses them with faster symmetric-key algorithms for bulk encryption and authentication of application data.
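The division of labour between the Handshake protocol and the Record Layer can be observed from any TLS-capable client. Below is a minimal sketch using Python's standard ssl module; the host name example.com is only an illustration, and any HTTPS-capable server would do.

```python
# A minimal TLS client: the handshake negotiates the cipher suite and
# authenticates the server, after which the record layer protects the
# application data with the agreed symmetric keys.
import socket
import ssl

HOST, PORT = "example.com", 443          # illustrative target

context = ssl.create_default_context()   # certificate-based server authentication

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # The handshake has completed by the time wrap_socket() returns.
        print("Protocol :", tls_sock.version())          # e.g. TLSv1.3
        print("Cipher   :", tls_sock.cipher())           # (name, protocol, key bits)
        print("Server   :", tls_sock.getpeercert()["subject"])
        # From here on, send() and recv() pass through the record layer,
        # which encrypts and integrity-protects the application data.
        tls_sock.send(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200))
```

Printing the negotiated cipher makes the two components visible: the handshake chooses it, and the record layer then uses it for bulk encryption and authentication of the data.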

End-to-end security relies on protocols and mechanisms that are implemented exclusively on the endpoints of a connection, such as an HTTPS connection to a web server (based, for example, on Transport Layer Security (TLS)) or IP Security (IPSec). End-to-End (E2E) IT security is defined as "safeguarding information in a secure telecommunication system by cryptographic or protected distribution system means from point of origin to point of destination." The vision is that security needs to possess an end-to-end property; otherwise, security breaches are possible at the interfaces, which create gaps.

The traditional definition of an endpoint is a client or server. In this definition, end-to-end security starts on the client and ends on the server. Given the multitude of applications running in parallel on an operating system, and given increasing virtualization, this definition is usually no longer precise enough. The operating system can establish a security association on either the session or the application level. It can also be terminated on a front end, on behalf of numerous servers, as is the case in many TLS deployments.

For secure communication, end-to-end security is absolutely essential, and the system is composed of a number of constituent parts, including identity, protocols, algorithms, secure implementation, and secure operations. Looking at each component of the system, we find the following:

·         Identity : This component encompasses known and verifiable entity identities on both ends; note that an identity can be temporary for a connection. For example, a user often is identified by username and password, whereas a server may be identified through a server certificate.

·         Protocols (for example, TLS and IPsec) : Protocols are used to dynamically negotiate session keys, and to provide the required security functions (for example, encryption and integrity verification) for a connection. Protocols use algorithms to implement these functions.

·         Algorithms (for example, Advanced Encryption Standard [AES], Triple Data Encryption Standard [3DES], and Secure Hash Algorithm [SHA-1]): These algorithms use session keys to protect data in transit, for example through encryption or integrity checks.

·         Secure implementation: The endpoint (client or server) that runs one of the protocols mentioned previously must be free of bugs that could compromise security. Web browser security is relevant here. In addition, malware can compromise security, for example by logging keystrokes on a PC.

·         Secure operation:  Users and operators have to understand the security mechanisms, and how to deal with exceptions. For example, web browsers warn about invalid server certificates, but users can override the warning and still make the connection. This issue is nontechnical, but it is of critical concern today.

For full end-to-end security, all of these components must be secure. In networks with end-to-end security, both ends can typically (depending on the protocols and algorithms used) rely on the fact that their communication is not visible to anyone else, and that no one else can modify the data in transit. End-to-end security is used successfully today, for example, in online banking applications. Correct and complete end-to-end security is required; without it, many applications such as online banking would not be possible. However, a single security problem in any of the components can compromise the overall security for a connection. Today, the most critical problems are implementation problems on endpoints, as well as human errors, specifically in handling exception cases.

Solutions that rely exclusively on end-to-end security have many potential problems, which fall into two broad categories: (a) those that affect the end user and (b) those that affect the network operator (the service provider, or the enterprise network operator, for example).

 

The End User: Although protocols and algorithms in use tend to be secure and reliable, the main problems lie in the two main areas of endpoint security (the secure implementation component) and lack of user education (the secure operation component).

 

Endpoint security concerns include the presence of malware, as well as bugs in software. Even security professionals have difficulty determining whether a PC contains malware. Such malware can control the connection before it is secured, thereby achieving the ability to see the data, as well as potentially change it in real time. Although endpoint security software, such as antivirus and zero-day prevention solutions, provides good protection, it is not always installed, and antivirus software is often not up to date. Users also can temporarily disable these solutions. Therefore, the presence of malware remains a security concern. Bugs in software are also relevant, for example in the web browser or the operating system.

 

The lack of user education is the other important concern on the endpoint: Users must know how to identify a secured connection, for example by the little padlock in a web browser (although not even this security mechanism is completely secure). They must also know how to deal with exceptions such as expired or invalid certificates. Most average users do not entirely understand all these details, leading to breaches of security.

 

The Network Operator: In this regard, the network operator may be a service provider or an enterprise network administrator, and their nature is always to be distrustful of anything outside their autonomous system. Today, most enterprise network operators as well as service providers are sceptical about the ubiquitous use of end-to-end security solutions. The fundamental concern is that the endpoints generally cannot be trusted. The network operator, whether enterprise, university, or service provider, has an obligation to enforce certain policies on the endpoint, for example, to ensure that it does not spread worms, send spam mail, or attack servers. If, however, network operators cannot “see” the traffic of an endpoint because it is end-to-end secured, then they cannot comply with their obligations to control the endpoints. From a network operator’s perspective, it is therefore not generally desirable to use end-to-end security.

End-to-end security protocols and solutions are an essential cornerstone in network security. We cannot live without them. However, it is unrealistic in today’s networks to assume that end-to-end security solutions alone will suffice. The fundamental underlying problem is that typically the network operator, where a PC is attached, has a need and often an obligation to monitor the behaviour of the endpoint, and to control malicious activities emerging from that PC. All solutions to control endpoints, however, are by definition network-based. Therefore, network-based security mechanisms are also an essential component of overall network security: Overall security requires both endpoint security and network-based security.

 

 

 

Introduction to Media Independent Handover (IEEE 802.21)

Some papers back, we looked at different aspects of the Wireless LAN, and over three papers we were able to establish the broad outlines of how its parts work seamlessly with one another. The Holy Grail for wireless network engineers is being able to move around with any device and, rather than establishing a new connection each time, have the connection follow them wherever they go! This happens in some limited way in most wireless networks today, but the seamless “follow-me” connection is still not available, even in the most technologically advanced and Internet-enabled countries!

In traditional cellular networks, handoff of a terminal from one base station to another is a critical function to support mobile devices. Since such handoff is handled primarily at network Layers 3 and 4, it is not directly supported by the IEEE 802 standards, which specify only Layers 1 and 2. As handoff becomes increasingly important for 802 wireless standards, there is a need for a standard through which 802 technologies can interface with higher-level mechanisms and thereby support handoff specified at higher levels. Since the focus is on direct support of higher-layer functionality, the general approach is that a single 802 handoff interface might be implemented for all 802 devices, allowing handoff within a mixed 802 network as well as to non-802 systems. This is the focus of the emerging 802.21 standard, being developed through its working group. Recently the 802.21 working group finalised the first standard for dealing with handover in heterogeneous networks, technically referred to as Media Independent Handover (MIH).

Most popular devices now ship with as many wireless facilities as the manufacturers can cram into them, limited by space, cost, and weight; personal digital assistants (PDAs) and smartphones, for example, increasingly support communications through both cellular and wireless LAN technologies. Today, most laptops come with built-in Ethernet, Wi-Fi, and Bluetooth as standard, and as multi-access devices proliferate, we continue to move closer to that network that allows us to move from one technology to another, wirelessly and seamlessly. This technology of the future is referred to as “beyond 3G”, or B3G for short.

Today’s 3G communications are made possible by better cell capacities, increased data rates, transparent mobility within larger geographical areas, and global reachability; but all of this is done within the same access technology, be it cellular or WLAN. In B3G, multiple-radio operation will be the norm, using multiple wireless technologies and more capable devices, with evolving usage models. MIH provides a framework for efficiently discovering networks in range and executing intelligent heterogeneous handovers, based on their respective capabilities and current link conditions. The standard provides information to allow multi-radio-technology handover, covering GSM, GPRS, Wi-Fi, Bluetooth, 802.11 and 802.16.

The Case for MIH: With the increasing availability of 3G technologies, evidenced by the rapid increase in the amount of data traffic over cellular networks, as well as the availability of campus-wide networks in most countries (including city-wide WLANs in some advanced countries), the expectation is that the current growth in data communication requirements over most networks will continue. This is expected to be strengthened by the introduction of High-Speed Packet Access (HSPA) in the coming years. MIH was developed to resolve many problems, but the major ones include:

·          Incorrect network selection: Most devices were hopeless in their ability to choose the right connection. You could often connect at Layer 2, but not at the network layer. A PC, for example, would often connect to the wrong one of many available APs, based on signal-strength criteria alone.

·          Increasing number of interfaces on devices: most devices today sport a large number of interfaces to get the best connection whenever possible, but they do not necessarily talk to each other across technologies and radio configurations.

The fact is that as the technology stands today, if you have a device with multiple interfaces (GSM, GPRS, Wi-Fi, Bluetooth, 802.11 and 802.16) that is capable of connecting to more than one network at a time, the moment you disconnect from one network and switch to another, everything stops! In working on an acceptable handover technology, it is important that we understand the particular type of handover that is of interest for 802.21 networks. Handover (or handoff, as used interchangeably in this paper) is mainly of two types, namely:

1.       Homogeneous handover – this is also referred to as horizontal handover, and happens within a single network, providing localized mobility. It provides limited opportunities for mobility.

2.       Heterogeneous handover – this is referred to as vertical handover, and works across different networks, providing global mobility.

From the foregoing, it will be obvious that 802.21 is primarily concerned with vertical handover, although it can also be used for homogeneous handovers.

Reference Model: IEEE 802.21 facilitates a variety of handover methods, including both hard handover and soft handover. A hard handover, also called a “break-before-make” handover, is one in which the channel in the source cell is released and only then is the destination channel engaged. Typically, hard handover implies an abrupt switch between two access points, base stations, or, generally speaking, points of attachment (PoAs). A soft handover is one in which the channel in the source cell is retained and used for a while in parallel with the channel in the destination cell. This soft (also called “make-before-break”) handover ensures that the connection to the target is established before the connection to the source is broken.

It is important to note that hard handovers are expected to be instantaneous and, depending on service requirements and application traffic patterns, hard handovers often go unnoticed. As an example, web browsing and audio/video streaming with prebuffering can be accommodated when handing over between different PoAs in the range of one network, by employing mechanisms that allow the node's connection context to be transferred from one PoA to another quickly.

The main design elements of IEEE 802.21 can be classified into three categories:

a.       a framework for enabling transparent service continuity while handing over between heterogeneous technologies;

b.       a set of handover-enabling functions; and

c.       a set of Service Access Points (SAP’s)

In (a) above, you will find handover initiation services, which search for new links and work through network discovery and network selection up to handover negotiation. For the services in (b) above, 802.21 provides handover preparation services, setting up new links and providing Layer 2 as well as IP connectivity. In general, 802.21 helps with handover initiation, network selection, and interface activation. It also provides components to other handover standards.

 

IEEE 802.21 specifies a framework that enables transparent service continuity while a mobile node switches between heterogeneous access technologies. In the process of providing transparent access continuity, 802.21 specifies mechanisms to gather all the information required for an affiliation with a new access point before breaking the currently used connection. This requires that a new link is searched for, available networks are discovered, an appropriate network is selected, and the handover is negotiated between the two (source and destination) networks.

 

IEEE 802.21 introduces a new logical entity called the Media Independent Handover Function (MIHF) that resides between the link layer and the network layer and provides, among other things, abstracted services to entities residing at the network layer and above, called MIH Users (MIHUs). MIHUs are expected to make handover and link-selection decisions based on their internal policies, context, and information provided by the MIHF. To this end, the primary role of the MIHF is to assist in handovers and handover decision-making by providing all necessary information to the network selection or mobility management entities. The mobility management entities are responsible for handover decisions regardless of their position in the network; the MIHF is not meant to make any decisions with respect to network selection. In summary, the MIHF provides smart triggers (which minimize connectivity disruption during link switching), handover messages, and information services.

 

Service Access Points: The MIHF basically provides three services, namely the Media Independent Event Service (MIES), the Media Independent Command Service (MICS), and the Media Independent Information Service (MIIS). SAPs with associated primitives between the MIHF and MIHUs thus give MIHUs access to the services provided by the MIHF.

 

MIES provides event reporting about things like link status and link quality.

 

MICS enables MIHUs to manage and control the parameters related to link behaviours and handovers.

 

MIIS allows MIHUs to receive static information about the characteristics and services of the serving network and other available networks in range.

 

 

While this introductory paper may sound rather esoteric to some of us, it will not be too far down the road before we see consumers demanding the facility to roam not just from one 802-based network to another (horizontal roaming), but also between an 802-based network and a 3G cellular network (vertical roaming).

 

In the near future, we expect that 802.21, when fully standardized and complete, will, for example, allow a user to unplug from an 802.3 network and be handed off to an 802.11 network; or a cellular phone user can, in the midst of a call, enter an 802.11 hotspot, be seamlessly handed off from the GSM network to the 802.11 network, and be handed back again when leaving the hotspot.

 

 

Introduction to IPv6 Packet Format and Addressing


One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a numeric identifier assigned to each machine on an IP network. It designates the location of the device it is assigned to on the network. This type of address is a software address, not a hardware address, which is hard-coded in the machine or network interface card.


A packet is a generic term for a bundle of data, usually in binary form, organized in a specific way for transmission. A packet consists of the data to be transmitted and certain control information, with the three principal elements of a packet being (a) the header – control information such as synchronizing bits, the address of the originating device, the length of the packet, etc.; (b) the payload – the data to be transmitted, which may be of fixed length (as in X.25 and ATM) or variable length (as in Ethernet or Frame Relay); and (c) the trailer – the end-of-packet marker and error-detection bits.


IPv6 not only provides more IP addresses than its predecessor, IPv4, but also has much larger addresses. The relatively large size of the IPv6 address is designed to be subdivided into hierarchical routing domains that reflect the topology of the modern-day Internet. The use of 128 bits provides multiple levels of hierarchy and flexibility in designing hierarchical addressing and routing. The IPv4-based Internet currently lacks this flexibility. The architecture of IPv6 addressing is described in RFC 2373.


An IPv6 packet is a block of data that contains a header and a payload: the header is the information necessary to deliver the packet to a destination address, and the payload is the data that you want to deliver. IPv6 packets can use a standard or an extended format. The header occupies the first 40 octets (320 bits) of the packet and contains:
 Version - version 6 (4-bit IP version).
 Traffic class - packet priority (8 bits). Priority values subdivide into two ranges: traffic where the source provides congestion control, and non-congestion-controlled traffic.
 Flow label - QoS management (20 bits). Originally created for giving real-time applications special service, but currently little used.
 Payload length - length of the payload in bytes (16 bits). When set to zero, the actual length is carried in a hop-by-hop "Jumbo Payload" option.
 Next header - specifies the next encapsulated protocol. The values are compatible with those specified for the IPv4 protocol field (8 bits).
 Hop limit - replaces the time-to-live field of IPv4 (8 bits).
 Source and destination addresses - 128 bits each.
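To make the fixed header concrete, here is a small sketch in Python (standard library only) that packs and unpacks the 40-octet header described above; the addresses and field values are purely illustrative.

```python
# Pack and parse the fixed 40-octet IPv6 header: version, traffic class,
# flow label, payload length, next header, hop limit, and two 128-bit addresses.
import struct
import ipaddress

def build_ipv6_header(src, dst, payload_len, next_header=6, hop_limit=64,
                      traffic_class=0, flow_label=0):
    """Return the 40-byte IPv6 fixed header (next_header=6 means TCP)."""
    ver_tc_flow = (6 << 28) | (traffic_class << 20) | flow_label      # first 32-bit word
    return (struct.pack("!IHBB", ver_tc_flow, payload_len, next_header, hop_limit)
            + ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed)

def parse_ipv6_header(data):
    ver_tc_flow, payload_len, next_header, hop_limit = struct.unpack("!IHBB", data[:8])
    return {
        "version":        ver_tc_flow >> 28,
        "traffic_class":  (ver_tc_flow >> 20) & 0xFF,
        "flow_label":     ver_tc_flow & 0xFFFFF,
        "payload_length": payload_len,
        "next_header":    next_header,
        "hop_limit":      hop_limit,
        "source":         str(ipaddress.IPv6Address(data[8:24])),
        "destination":    str(ipaddress.IPv6Address(data[24:40])),
    }

hdr = build_ipv6_header("2001:db8::1", "2001:db8::2", payload_len=20)
print(len(hdr))                # 40 octets
print(parse_ipv6_header(hdr))
```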


The payload can have a size of up to 64 KiB in standard mode, or larger with a "Jumbo Payload" option.
Fragmentation is handled only in the sending host in IPv6: routers never fragment a packet, and hosts are expected to use Path MTU (PMTU) discovery. The protocol field of IPv4 is replaced with a Next Header field. This field usually specifies the transport-layer protocol used by a packet's payload.


In the presence of options, however, the Next Header field specifies the presence of an extra options header, which then follows the IPv6 header; the payload's protocol itself is specified in a field of the options header. This insertion of an extra header to carry options is analogous to the handling of AH and ESP in IPSec for both IPv4 and IPv6.


In IPv6, extension headers are used to encode optional Internet-layer information. Extension headers are placed between the IPv6 header and the upper-layer header in a packet. IPv6 allows you to chain extension headers together by using the next header field. The next header field, located in the IPv6 header, indicates to the router which extension header to expect next. If there are no more extension headers, the next header field indicates the upper-layer header (TCP header, UDP header, ICMPv6 header, an encapsulated IP packet, or other items).
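The chain of Next Header values can be followed mechanically. The sketch below walks a simplified chain using the generic extension-header layout (first octet = next header, second octet = header length in 8-octet units, not counting the first 8 octets); a real parser needs per-header rules, for instance for the fixed-length Fragment header.

```python
# Walk an IPv6 extension-header chain by following the Next Header field.
EXTENSION_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment", 60: "Destination Options"}
UPPER_LAYER = {6: "TCP", 17: "UDP", 58: "ICMPv6"}

def walk_chain(first_next_header, after_fixed_header):
    """Yield header names until an upper-layer protocol is reached."""
    nh, offset = first_next_header, 0
    while nh in EXTENSION_HEADERS:
        yield EXTENSION_HEADERS[nh]
        nh = after_fixed_header[offset]                      # Next Header of this extension header
        offset += (after_fixed_header[offset + 1] + 1) * 8   # skip its length in octets
    yield UPPER_LAYER.get(nh, f"protocol {nh}")

# An 8-octet Hop-by-Hop header whose Next Header is TCP (6), then a dummy TCP segment.
payload = bytes([6, 0, 0, 0, 0, 0, 0, 0]) + b"\x00" * 20
print(list(walk_chain(0, payload)))    # ['Hop-by-Hop', 'TCP']
```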


Addressing - 128-bit length
IPv6 addresses are 128 bits long and are identifiers for individual interfaces and sets of interfaces. IPv6 addresses of all types are assigned to interfaces, not nodes. Since each interface belongs to a single node, any of that node's interfaces' unicast addresses may be used as an identifier for the node. A single interface may be assigned multiple IPv6 addresses of any type.
There are three types of IPv6 addresses: unicast, anycast, and multicast. Unicast addresses identify a single interface. Anycast addresses identify a set of interfaces such that a packet sent to an anycast address will be delivered to one member of the set. Multicast addresses identify a group of interfaces, such that a packet sent to a multicast address is delivered to all of the interfaces in the group. There are no broadcast addresses in IPv6; their function is superseded by multicast addresses.


The IPv6 128-bit address is divided along 16-bit boundaries. Each 16-bit block is then converted to a 4-digit hexadecimal number, separated by colons. The resulting representation is called colon-hexadecimal. This is in contrast to the 32-bit IPv4 address represented in dotted-decimal format, divided along 8-bit boundaries, and then converted to its decimal equivalent, separated by periods.


IPv6 addresses have four times the number of bits of IPv4 addresses (128 vs. 32). The address space is therefore 4 billion times 4 billion times 4 billion (2^96) times the size of the IPv4 address space (2^32), which works out to 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.
This is an extremely large address space. In a theoretical sense, this is approximately 665,570,793,348,866,943,898,599 addresses per square metre of the surface of the planet Earth (assuming the Earth's surface is 511,263,971,197,990 square metres).
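These figures are easy to verify; the short calculation below reproduces them, assuming the same surface-area figure quoted above.

```python
# Reproduce the address-space arithmetic quoted above.
total_ipv6 = 2 ** 128
total_ipv4 = 2 ** 32

print(total_ipv6)                            # 340282366920938463463374607431768211456
print(total_ipv6 // total_ipv4 == 2 ** 96)   # True: 2^128 is 2^96 times 2^32
print(total_ipv6 // 511_263_971_197_990)     # ~665,570,793,348,866,943,898,599 per square metre
```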


The leading bits in the address indicate the specific type of IPv6 address. The variable-length field comprising these leading bits is called the Format Prefix (FP). The length of network addresses emphasizes a most important change when moving from IPv4 to IPv6. IPv6 addresses are 128 bits long (as defined by RFC 4291), whereas IPv4 addresses are 32 bits; where the IPv4 address space contains roughly 4 billion addresses, IPv6 has enough room for 3.4×10^38 unique addresses.


IPv6 addresses are typically composed of two logical parts: a 64-bit (sub-)network prefix and a 64-bit host part, which is either automatically generated from the interface's MAC address or assigned sequentially. Because globally unique MAC addresses offer an opportunity to track user equipment, and so users, across time and IPv6 address changes, RFC 3041 was developed to reduce the prospect of a user's identity being permanently tied to an IPv6 address, thus restoring some of the possibilities of anonymity that exist in IPv4. RFC 3041 specifies a mechanism by which time-varying random bit strings can be used as interface identifiers, replacing unchanging and traceable MAC addresses.
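For illustration, here is a sketch of the modified EUI-64 derivation that embeds the MAC address in the interface identifier (the very property RFC 3041 privacy addresses were created to avoid); the prefix and MAC below are made-up examples.

```python
# Derive a stateless-autoconfiguration (EUI-64) address from a /64 prefix and a MAC.
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Build the modified EUI-64 interface identifier from a 48-bit MAC address."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                                            # invert the universal/local bit
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])   # insert FF:FE in the middle

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 network prefix with the EUI-64 interface identifier."""
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
# 2001:db8:1:2:21a:2bff:fe3c:4d5e -- the MAC is visible (and trackable) in the address
```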


Notation


IPv6 networks are denoted using Classless Inter-Domain Routing (CIDR) notation. A network or subnet using the IPv6 protocol is denoted as a contiguous group of IPv6 addresses whose size must be a power of two. IPv6 addresses are normally written as eight groups of four hexadecimal digits, where each group is separated by a colon (:). For example, 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is a valid IPv6 address. To shorten the writing and presentation of addresses, several simplifications to the notation are permitted. (Note: IPv4 implementations commonly use a dotted-decimal representation of the network prefix known as the subnet mask. A subnet mask is not used in IPv6; only prefix-length notation is used.)

Any leading zeros in a group may be omitted; thus, the given example becomes 2001:db8:85a3:0:0:8a2e:370:7334. One or any number of consecutive groups of zero value may be replaced with two colons (::), giving 2001:db8:85a3::8a2e:370:7334. This substitution with a double colon may be performed only once in an address, because multiple occurrences would lead to ambiguity.
The last 4 bytes of the IPv6 address may optionally be written in dotted-decimal notation, in the style of IPv4 addresses. This notation is convenient when working in a mixed (dual-stack) environment of IPv4 and IPv6 addresses, where IPv6 addresses are derived from IPv4 ones. The general form of the notation is x:x:x:x:x:x:d.d.d.d, where the x's are the six high-order groups of hexadecimal digits and the d's represent the decimal digit groups of the four low-order octets of the address. RFC 4291 (IP Version 6 Addressing Architecture) provides additional information.
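Python's ipaddress module applies these notation rules and is a convenient way to experiment with them; the addresses below are the documentation examples used above.

```python
# Full, compressed, prefix-length, and IPv4-mapped notations.
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.exploded)      # 2001:0db8:85a3:0000:0000:8a2e:0370:7334  (all leading zeros kept)
print(addr.compressed)    # 2001:db8:85a3::8a2e:370:7334  (zeros dropped, '::' used once)

# Prefix-length notation replaces the IPv4-style subnet mask.
net = ipaddress.IPv6Network("2001:db8:85a3::/48")
print(net.prefixlen, net.num_addresses)      # 48, and 2**80 addresses in the subnet

# The mixed x:x:x:x:x:x:d.d.d.d form is typically seen with IPv4-mapped addresses.
mixed = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mixed.ipv4_mapped)                     # 192.0.2.1
```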


IPv6 address types
IPv6 addresses are classified into three types:
 Unicast addresses
A unicast address identifies a single network interface. The protocol delivers packets sent to a unicast address to that specific interface. Unicast IPv6 addresses can have a scope that is reflected in more specific address names: global unicast address, link-local address, and unique local unicast address.
 Anycast addresses
An anycast address is assigned to a group of interfaces, usually belonging to different nodes. A packet sent to an anycast address is delivered to just one of the member interfaces, typically the “nearest” according to the routing protocol’s choice of distance. Anycast addresses cannot be identified easily: they have the structure of normal unicast addresses, and differ only by being injected into the routing protocol at multiple points in the network.


 Multicast addresses
A multicast address is also assigned to a set of interfaces that typically belong to different nodes. A packet that is sent to a multicast address is delivered to all interfaces identified by that address. Multicast addresses begin with an octet of all-one bits, i.e., they have the prefix FF00::/8. The four least-significant bits of the second address octet identify the address scope, i.e. the span over which the multicast address is propagated.

Commonly implemented scopes are node-local (0x1), link-local (0x2), site-local (0x5), organization-local (0x8), and global (0xE). The least-significant 112 bits of a multicast address form the multicast group identifier. Only the low-order 32 bits of the group ID are commonly used, because of traditional methods of forming 32-bit identifiers from Ethernet addresses. Defined group IDs are 0x1 for all-nodes multicast addressing and 0x2 for all-routers multicast addressing.

Another group of multicast addresses are solicited-node multicast addresses which are formed with the prefix FF02::1:FF00:0/104, and where the rest of the group ID (least significant 24 bits) is filled from the interface's unicast or anycast address. These addresses allow link-layer address resolution via Neighbor Discovery Protocol (NDP) on the link without disturbing all nodes on the local network.
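The construction of a solicited-node multicast address is simple enough to show directly: take the low-order 24 bits of the unicast (or anycast) address and append them to ff02::1:ff00:0/104. The unicast address below is just an example.

```python
# Form the solicited-node multicast address for a given unicast address.
import ipaddress

SOLICITED_NODE_BASE = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF      # last 24 bits
    return ipaddress.IPv6Address(SOLICITED_NODE_BASE | low24)

print(solicited_node("2001:db8:1:2:21a:2bff:fe3c:4d5e"))        # ff02::1:ff3c:4d5e
```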

Link-local addresses and zone indices
All interfaces have an associated link-local address that is only guaranteed to be unique on the attached link. Link-local addresses are created in the fe80::/10 address space. Because link-local addresses have a common prefix, normal routing procedures cannot be used to choose the outgoing interface when sending packets to a link-local destination. A special identifier, known as a zone index, is needed to provide the additional information; in the case of link-local addresses, zone indices correspond to interface identifiers.

When an address is written textually, the zone index is appended to the address, separated by a percent sign ("%"); the actual syntax of zone indices depends on the operating system. Zone index notation causes syntax conflicts when used in Uniform Resource Identifiers (URIs), as the '%' character also designates percent-encoding.

Relatively few IPv6-capable applications understand address scope syntax at the user level, thus rendering link-local addressing inappropriate for many user applications. However, link-local addresses are not intended for most of such application usage and their primary benefit is in low-level network management functions, for example for logging into a router that for some reason has become unreachable.

I have attempted to tone down the technical details of this paper, but hopefully we have been able to touch on enough technical introductions to stimulate the interest of people who would want to dig a bit deeper, while still giving the average reader a general introduction to this interesting topic!
 

 

Quality of Service in VoIP

 

IPv6 brings the quality of service that is required for several new services and applications such as Internet telephony and video/audio streaming. Whereas IPv4 is a best-effort service, IPv6 ensures QoS. This is one of the major advantages that IPv6 has over IPv4, and for VoIP implementations it is a very interesting subject.

 

Presently, Voice over Internet Protocol (VoIP) is all the buzz - but it is more than just talk. VoIP technology has matured; it is not merely the latest hot Internet application. Today, VoIP has emerged as a reliable technology that is commercially viable, competing (and winning) against traditional phone services in business and consumer-class markets.

The most common problems on a VoIP network include:

1.        Latency

2.        Jitter

3.        Bandwidth

4.        Packet loss

5.        Reliability

6.        Security

Latency (or delay) is the time that it takes a packet to make its way through a network end to end. In telephony terms, latency is the measure of time it takes the talker's voice to reach the listener's ear. Large latency values do not necessarily degrade the sound quality of a phone call, but the result can be a lack of synchronization between the speakers, such that there are hesitations in the speakers' interactions.

Jitter is the measure of time between when a packet is expected to arrive and when it actually arrives. In other words, with a constant transmission rate of one packet every 20 ms, every packet would be expected to arrive at the destination exactly 20 ms apart. This is not always the case. The greatest culprit of jitter is queuing variation caused by dynamic changes in network traffic loads. Another cause is packets that might sometimes take a different equal-cost link that is not physically (or electrically) the same length as the other links.
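Receivers typically track jitter with a running estimator rather than raw differences. The sketch below follows the interarrival-jitter calculation used by RTP (RFC 3550); the timestamps are invented for illustration.

```python
# Smoothed interarrival jitter: average how much each packet's transit time
# deviates from the previous packet's transit time (RFC 3550 uses a 1/16 gain).
def jitter_estimate(send_times_ms, recv_times_ms):
    jitter, prev_transit = 0.0, None
    for sent, received in zip(send_times_ms, recv_times_ms):
        transit = received - sent
        if prev_transit is not None:
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; the network adds a variable 30-45 ms of delay.
sent = [0, 20, 40, 60, 80, 100]
recv = [30, 52, 75, 95, 112, 145]
print(f"{jitter_estimate(sent, recv):.2f} ms of jitter")
```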

Packet loss occurs for many reasons, and in some cases, is unavoidable. Often the amount of traffic a network is going to transport is underestimated. During network congestion, routers and switches can overflow their queue buffers and be forced to discard packets. Packet loss for non-real-time applications, such as Web browsers and file transfers, is undesirable, but not critical. The protocols used by non-real-time applications, usually TCP, are tolerant to some amount of packet loss because of their retransmission capabilities.

 

As a real-time application, VoIP- also known as packet voice, packet telephony, or IP telephony - places increased demands on the evolving Internet. VoIP users expect the Internet to deliver toll-quality voice with the same clarity as the traditional Public Switched Telephone Network (PSTN). To meet those expectations, the Internet connection must be more than merely reliable, it must be time-sensitive. Each and every voice packet must be delivered without significant delay and with consistent time intervals between packets.

Advanced Quality of Service (QoS) technology is the key to achieving voice quality that measures up to today’s high standards.

 

Generally, VoIP has three major layers:

·         Media Layer – In this layer, on the bearer platform, media is processed, including the media transport, QoS, and items such as tones and announcements. Two bearer platforms communicate with one another over media transport, and this is where TDM, Frame Relay, ATM, and MPLS apply.

·         Signalling Layer – Signal processing, signal conversion, resource management, and bearer control occur at this layer, which is the signalling platform. Signalling platforms talk with one another by using a signalling technique. This is where H.323, SIP, and other call control protocols apply.

·         Application Layer – This layer, referred to as the application platform, is home to call intelligence, and is where service creation and execution as well as provisioning management occur. Application platforms talk to one another with application-specific inter-application protocols.

 

From a technical perspective, good voice quality involves minimizing delays and interruptions (“blips”) in the communication stream. For voice communications over an IP network, packet delays and losses in the VoIP network must be reduced to a level at which most users find the perceived voice quality acceptable; end-to-end packet delay must be reduced to a target of 120 ms or less. In most IP networks some degree of packet loss may be inevitable. However, for a successful VoIP deployment, we must reduce the packet loss to well below 1%. For Fax-over-IP connections, the packet-loss target is especially critical.

Packet loss and packet delay may accumulate at multiple locations within an IP network; every router, switch, and transmission line is a potential culprit for harbouring these enemies of voice quality. Ideally, technology standards such as QoS would be supported in each and every node, thus enforcing QoS mechanisms throughout the network from end to end (these standards are discussed later in this paper). The reality today, however, is an Internet that does not differentiate between real-time (RT) and non-real-time (NRT) packets. As a result, VoIP networks require alternative methods for ensuring voice quality.

 

In VoIP systems that traverse the Internet, the bottleneck typically occurs at the access link - the low-bandwidth connection between the high-speed Internet backbone (WAN) and the user network (LAN). Both of those networks typically run at 100 Mbps or above, while a typical access link may easily run about 200 times slower than the LAN residing in the home or office (say, 512 kbps). Congestion, queuing delay, and queue overflows (resulting in dropped packets) are most likely to occur on this link. Depending on the access-link bandwidth and packet sizes (and the number of packets arriving at once), queuing delay can be especially significant.

 

In a typical installation, a single access link serves both voice and data traffic, so special measures must be employed to ensure good voice quality. Consider the case of a 256 kbps access link from the Internet’s edge router to the user’s LAN. Suppose the edge router’s transmit queue contains five 1500-byte data packets; it takes roughly 270 ms to send those data packets over the 256 kbps link. When a voice packet follows, it arrives with a delay longer than 120 ms (our target), resulting in degraded voice quality.
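The arithmetic behind that example is worth seeing explicitly. The payload of the five data packets alone takes roughly 234 ms to serialise at 256 kbps; link-layer framing overhead pushes the real figure higher still, and either way the 120 ms budget is blown.

```python
# Serialization (queuing) delay for a voice packet stuck behind data packets
# on a slow access link. Link-layer framing overhead is ignored here.
LINK_RATE_BPS   = 256_000      # access-link speed
DATA_PACKETS    = 5
DATA_PACKET_B   = 1500         # bytes per queued data packet
DELAY_BUDGET_MS = 120          # end-to-end delay target quoted above

queue_delay_ms = DATA_PACKETS * DATA_PACKET_B * 8 / LINK_RATE_BPS * 1000
print(f"Delay behind the data packets: {queue_delay_ms:.0f} ms")   # ~234 ms
print("Voice budget exceeded?", queue_delay_ms > DELAY_BUDGET_MS)  # True
```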

 

Priority queuing addresses queuing delay by ensuring that voice packets receive priority treatment - in much the same way that separate queues at airport check-in counters ensure priority service for first-class customers. By creating separate queues for Real-Time and Non-Real-Time traffic, we can assign higher priority to the RT queue and serve that traffic first.
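The airport analogy can be sketched in a few lines of Python; this is an illustrative strict-priority queue, not any particular router’s scheduler:

from collections import deque

rt_queue, nrt_queue = deque(), deque()   # separate queues for RT and NRT traffic

def enqueue(packet, is_voice):
    (rt_queue if is_voice else nrt_queue).append(packet)

def dequeue():
    # Strict priority: drain the real-time queue first, then best-effort data.
    if rt_queue:
        return rt_queue.popleft()
    if nrt_queue:
        return nrt_queue.popleft()
    return None

enqueue("data-1", is_voice=False)
enqueue("voice-1", is_voice=True)
print(dequeue())   # "voice-1" is served first even though "data-1" arrived earlier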

 

Upstream and downstream traffic present different problems and are best addressed by different Quality of Service (QoS) mechanisms. Upstream traffic flows from the (home or office) user to the Internet, while downstream traffic flows from the Internet to the user.

Implementing QoS in the upstream direction is relatively easy. The router ensures that outbound voice packets get served before other traffic, preventing the access link (the bottleneck) from becoming overloaded. The router also provides tuning mechanisms for additional parameters such as packet segmentation and overhead optimization, so network administrators can further fine-tune the upstream transmission for optimum voice quality.

Implementing QoS in the downstream direction is more complex. Typically, customer-premises equipment (CPE) at user locations exerts no control over incoming traffic. For traffic flowing downstream from the Internet, the access router cannot control the volume of traffic or the sending users. In addition, local users sharing the LAN with you can initiate file transfers or download their email at their convenience, and the servers handling these requests may be located anywhere. Since the downstream rate normally cannot be controlled, the ISP’s edge router commonly responds to overloading by discarding VoIP packets with the same probability as any other packet type. These factors, alone or in combination, may degrade voice quality to a degree that users find objectionable. Data traffic, on the other hand, can be retransmitted, so the impact on the user experience is simply slower service response.

 

When implementing QoS to ensure good voice quality, a VoIP network may employ a selection of mechanisms from a variety of standard communication protocols. Such mechanisms may include:

·         Tag or label within the packet or frame

·         Traffic (QoS) classes

·         Traffic (QoS) conditioning or packet treatment


A key point to remember about any QoS-related standard is the following: a standard is only as effective as its specific implementation in a specific network in the real world. For example, almost every router in the world claims to support TOS-labelled IP packets. “So why,” we may ask, “is the Internet still a best-effort network? Why has the Internet not implemented QoS?” The answer lies in the additional complexity of administering a network that delivers QoS.

 

STANDARD | LAYER | QoS LABEL | DESCRIPTION
IEEE 802.1p/Q | Layer 2 (Ethernet) | 3 bits in the Ethernet header | Extended Ethernet frame for VLAN and QoS; defines eight priority classes
MPLS | Layer 2 (ATM, FR, or shim label on IP) | not applicable | Label-switching protocol for core routers; can be used for traffic engineering, VPNs, and QoS differentiation
TOS/Precedence | Layer 3 (IP) | 1 octet in the IP header | Defines three traffic characteristics: low delay, high throughput, high reliability
DiffServ | Layer 3 (IP) | Same octet in the IP header as TOS | Defines a 6-bit field for service classes and a number of Per-Hop Behaviours (PHBs) on how to treat packets
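As an illustration of the DiffServ row above, an application can ask the operating system to mark its outbound packets with a DiffServ code point. The sketch below is a minimal Python example (the destination address and port are hypothetical), assuming the platform exposes the IP_TOS socket option:

import socket

DSCP_EF = 46   # Expedited Forwarding, conventionally used for voice
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP occupies the upper six bits of the old TOS octet, so EF goes on
# the wire as 46 << 2 = 0xB8.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"voice payload", ("192.0.2.10", 16384))   # hypothetical endpoint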

 

Administering a QoS network is much more complex than ensuring reliable connectivity in a best-effort network. This complexity is why networks supporting different service classes have appeared only quite recently, only on a limited scale, and only from single operators. Considering the content of our discussion, one can begin to grasp the complexity of implementing QoS end to end across the boundaries of multiple networks.

 

 

Session Initiation Protocol

 

One of the touted advantages of IPv6 over IPv4 is its support for Quality of Service (QoS) in applications, as well as its use of jumbograms - both advantages that bode well for Voice over Internet Protocol (VoIP) because of its inherent need for differentiated service over packet networks like the Internet, and its abiding need for efficient signal processing.

 

Three major VoIP protocols are used in call signalling. The first standard applied to support interoperability between VoIP systems was ITU-T’s H.323. While H.323 has the advantage of being a mature standard, there are some negatives associated with it, including its complexity, its tuning for PC conferencing rather than telephony, its poor scaling properties, and its difficulty working through firewalls. A second protocol that came into play is the Media Gateway Control Protocol (MGCP), which advocates a centralised control architecture. MGCP is fairly simple: it models the current PSTN call control architecture but requires high-reliability, high-availability call servers. The third and final protocol is the Session Initiation Protocol (SIP), which advocates a decentralised control architecture.

 

SIP is an application-layer signalling protocol for creating, modifying, and terminating sessions with one or more participants. SIP can establish sessions for features such as audio- or video-conferencing, interactive gaming, and call-forwarding to be deployed over IP networks, thus enabling service providers to integrate basic IP telephony services with Web, email, and chat services. The SIP protocol is a TCP/IP-based Application Layer protocol. SIP is designed to be independent of the underlying transport layer; it can run on Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Stream Control Transmission Protocol (SCTP). It is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP), allowing for direct inspection by administrators.
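Because SIP is text-based, a request can be written out and read like an HTTP message. The following Python sketch assembles an illustrative INVITE; all hosts, names, tags, and identifiers are hypothetical, and the body is omitted for brevity:

# All names, hosts, and tags below are illustrative only.
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@example.org>;tag=1928301774",
    "To: Bob <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@client.example.org",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@client.example.org>",
    "Content-Length: 0",
    "",
    "",   # blank line ends the headers; an SDP body would follow here
])
print(invite)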

 

SIP endpoints negotiate the media parameters using the Session Description Protocol (SDP). SDP allows the SIP terminals or applications to negotiate the type of media (audio, video, or data), the transport protocol (the Real-time Transport Protocol), and the media encoding method. SIP is transaction based, which makes it a good match for what is called the “stupid network” paradigm, where it is relatively easy to build in scalability and reliability and there are built-in security mechanisms. In SIP, state is stored only in the end devices. There is no single point of failure (SPOF) in SIP, and networks designed this way scale well. The trade-off for this distributed scalability is the higher message overhead that results from messages being sent end-to-end.
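For illustration, an SDP offer such as a SIP endpoint might carry in an INVITE body can be sketched as follows (the originator, IP address, and port are hypothetical):

# A hypothetical SDP offer: one audio stream over RTP, two codecs on offer.
sdp_offer = "\r\n".join([
    "v=0",
    "o=alice 2890844526 2890844526 IN IP4 192.0.2.1",
    "s=VoIP call",
    "c=IN IP4 192.0.2.1",
    "t=0 0",
    "m=audio 49170 RTP/AVP 0 8",
    "a=rtpmap:0 PCMU/8000",   # G.711 mu-law
    "a=rtpmap:8 PCMA/8000",   # G.711 A-law
])
print(sdp_offer)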

 

The aim of SIP is to provide the same functionality as the traditional PSTN, but with an end-to-end design that makes SIP networks much more powerful and open to the implementation of new services. SIP was standardised under RFC 2543 and has been refined in RFC 3261; it is a peer-to-peer protocol in which end devices, known as user agents, initiate sessions. SIP is much faster, more scalable, and easier to implement than H.323. SIP is designed in conformity with the Internet model, and its end-to-end signalling method means that all the logic is stored in end devices, except the routing of SIP messages. SIP servers operate in two modes: stateful and stateless. In stateful mode, the incoming requests received, the responses sent, and the outgoing requests made by the server are stored in memory; stateful-mode servers are the local devices close to the user agents. In stateless mode, none of this information is retained; stateless servers are the backbone of the SIP infrastructure.

 

SIP supports five facets of establishing and terminating multimedia communications, namely:

 

1.       User location – the determination of the end system to be used for communication;

2.       User availability – the determination of the willingness of the called party to engage in communication;

3.       User capabilities – the determination of the media and media parameters to be used;

4.       Session setup – the establishment of session parameters at both the called and the calling parties;

5.       Session management – the transfer and termination of sessions, modification of session parameters, and invocation of services.

 

SIP can be used with other IETF protocols to build a complete multimedia architecture; for example, it can be used with the Real-time Transport Protocol (RTP) for transporting real-time data and providing QoS feedback, with the Real-Time Streaming Protocol (RTSP) for controlling delivery of streaming media, with MGCP for controlling gateways to the PSTN, or with the Session Description Protocol (SDP) for describing multimedia sessions. Although SIP should be used in conjunction with other protocols to provide complete service to users, the basic functionality and operations of SIP do not depend on any other protocol.

 

SIP works with both IPv4 and IPv6, and operates as follows for Internet telephony, one of its most common applications : callers and called parties are identified by SIP addresses (e.g. me@anyplace.com), which makes it easy to guess that SIP addresses are based on email addresses! When making a SIP call, a caller locates the appropriate server and sends a SIP request; the most common SIP operation is the invitation. Instead of directly reaching the intended called party, a SIP request may be redirected or may trigger a chain of new SIP requests by proxies. Users can register their locations with SIP servers; SIP addresses can be embedded in web pages and, therefore, can be integrated as part of powerful applications such as click-to-talk.

 

The purpose of SIP is to make communication possible, but that communication itself must be achieved by other means and possibly other protocols; the two protocols most often used along with SIP are RTP and SDP. RTP is used to carry the real-time multimedia data, including audio, video, and text. RTP makes it possible to encode and split the data into packets and transport the packets over the Internet. SDP is used to describe and encode the capabilities of session participants. This description is then used to negotiate the characteristics of the session so that all the devices can participate. For example, the description is used in negotiating the codecs used to encode media, so that all the participants will be able to decode it, and in negotiating the transport protocol to be used.

 

SIP follows the client/server model. Components interacting in a SIP environment are called User Agents (UAs). There are two types of User Agents:

·         User Agent Client (UAC): The User Agent Client generates requests and sends them to servers. The user agent client is the end-system component for the call.

·         User Agent Server (UAS): The User Agent Server receives requests, processes them, and generates responses.

 

SIP works in concert with several other protocols and is involved only in the signalling portion of a communication session. SIP servers are network devices that handle the signalling associated with multiple calls; they apply a predefined set of rules to handle the requests sent by clients. SIP clients typically use TCP or UDP on port 5060 and/or 5061 to connect to SIP servers and other SIP endpoints. Port 5060 is commonly used for non-encrypted signalling traffic, whereas port 5061 is typically used for traffic encrypted with Transport Layer Security (TLS). SIP is primarily used in setting up and tearing down voice or video calls, but it has also found applications in messaging, such as instant messaging, and in event subscription and notification.

 

Let us examine the servers used by SIP (a minimal registrar sketch follows the list):

·         Proxy Server: A proxy server forwards client requests on their behalf. When a request is generated, the exact address of the recipient may not be known; in that case the client sends its request to a proxy server, which forwards it to another proxy server or to the recipient.

·         Redirect Server: A redirect server redirects the request back to the client when the client needs to try a different route to reach the recipient. It is mostly used when a recipient has moved, temporarily or permanently, to a different location.

·         Registrar: The registrar is used to register users and their locations.

·         Location Server: The location server stores the addresses registered with a registrar, and provides information about a caller's possible locations to redirect and proxy servers.
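A toy sketch may help fix the registrar and location-server roles; this is not a real SIP stack, merely a dictionary mapping a user's address-of-record to a current contact:

location_db = {}   # address-of-record -> current contact

def register(address_of_record, contact):
    # Registrar role: record where the user can currently be reached.
    location_db[address_of_record] = contact

def locate(address_of_record):
    # Location-server role: answer lookups from proxy and redirect servers.
    return location_db.get(address_of_record)

register("sip:me@anyplace.com", "sip:me@198.51.100.7:5060")   # hypothetical contact
print(locate("sip:me@anyplace.com"))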

 

While SIP has the potential to realise the ultimate IP telephony dream of low-cost, feature-rich handsets from third-party phone vendors, it nevertheless has its strengths and weaknesses. SIP’s strengths include tremendous industry support and widespread development. SIP softphones, Ethernet phones, and application servers are available to integrate with IP clients. SIP facilitates application development, and minimal call state is maintained in the network. SIP is central to all the latest solutions for telephony, ranging from hosted services to proprietary platforms, from handsets to softphones to open-source software.

 

While the IETF still has substantial work to do to replicate in SIP the hundreds of TDM PBX calling features, SIP includes many features, including presence, that enable capabilities that are not possible in the TDM world. SIP-enabled telephony networks can also implement many of the more advanced call-processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer protocol; it therefore requires only a simple (and thus scalable) core network, with intelligence distributed to the network edge, embedded in endpoints (terminating devices built in either hardware or software). SIP features are implemented in the communicating endpoints (i.e. at the edge of the network), in contrast to traditional SS7 features, which are implemented in the network.

 

SIP works not only in unicast (one-to-one) mode but is also capable of multicast (one-to-many) communication. [Tech Note: Presence is defined as the ability, willingness, desire, and capability of a user to communicate across media, end devices, and even time and space. Presence is the ability to see in real time where someone is, how that person prefers to be reached, and even what the person is doing!]

 

 

 

 

The Challenges of Multihoming in IPv6

 

As dependence on the Internet becomes more critical, prudent management suggests there be no single point of failure that can break all Internet connectivity. The term "multihoming" has come into vogue to describe various means of enterprise-to-service-provider connectivity that avoid a single point of failure. Multihoming can also describe connectivity between Internet Service Providers (ISPs) and "upstream" Network Service Providers.

 

Several terms have become overloaded to the point of confusion, including multihoming, virtual private networks, and load balancing. This document attempts to bring some order to the definition of multihoming. It partially overlaps definitions of virtual private networks. If we take the word "multihoming" in the broadest context, it implies there are multiple ways to reach a "home" destination. This "home" may be identified by a name, an IP address, or a combination of IP address and TCP/UDP port.

 

Many discussions of multihoming focus on the details of implementation, using such techniques as the Border Gateway Protocol (BGP). [BGP is a routing protocol, just like RIP, EIGRP, OSPF, or IS-IS. But unlike these interior gateway protocols (IGPs), BGP is an exterior gateway protocol (EGP), designed to exchange routing information between different organizations - or Autonomous Systems (ASes), in BGP-speak. Each AS needs its own AS number, along with its own range of IP addresses.] One implementation technique is not appropriate for all requirements. There are special issues in implementing solutions in the general Internet, because poor implementations can jeopardize the proper function of global routing or DNS. An incorrect BGP route advertisement injected into the global routing system is a problem whether it originates in an ISP or in an enterprise. [Tech note: An Autonomous System (AS) is just that - autonomous! It is usually an ISP, and within the ISP, routers exchange information freely - all systems are trusted as they are under a single administration in a single domain.]

 

So what is “multihoming”? Multihoming describes a single computer host that makes use of several IP addresses associated with multiple connected networks. Within this scenario, the multihomed host is physically linked to a variety of data connections or ports. These connections or ports may all be associated with the same network or with a variety of different networks. Depending on the exact configuration, multihoming may allow a host to function as an IP router. When a network is connected to more than one Internet service provider (ISP) - who may be a connectivity provider, transit provider, or upstream provider - the technique is likewise referred to as multihoming.

 

The chief objective is to increase the quality and robustness of the Internet connection for the IP network. It is also possible to extend this concept to devices, especially when each of them has more than one interface and each of the interfaces is attached to a different network. A key pitfall in multihoming is that two apparently independent links from completely different ISPs may actually share a common transmission line and/or edge router. This forms a single point of failure and considerably reduces the reliability benefits of multihoming. Thus it might help to choose your ISPs more carefully by working to know the sources of their connections as far into the external networks as possible!

 

Multihoming is the practice of linking to multiple ISPs over physically discrete lines to the Internet. Multihoming includes a wide range of networking techniques that allow organizations to connect to internal or external destinations through more than one path. There are many reasons for using multiple paths, and perhaps more than one instance of a type of networking equipment, but the most common reasons are fault tolerance and traffic engineering. When organizations connect to the Internet, or directly to other organizations using the Internet protocol model, the ever-increasing criticality of their applications means that they cannot tolerate a single point of failure (SPOF) that could isolate them from the outside world. Indeed, the same concerns are present in complex internal networks, especially between different locations of the same enterprise. Often, organizations, including Internet service providers and telephone companies, have multiple external connections, not just for fault tolerance, but for reasonable load distribution.

 

Basically, there are three major ways to multihome :

 

1.       Get two independent connections and configure your equipment with two IP addresses. When the IP address associated with one connection does not work anymore, the router or server has to switch over to the other. This approach has several limitations. Obviously, all ongoing communication sessions are disrupted by the change in IP address. In addition, this functionality is not always available in routers, servers, or other equipment. Finally, most applications (such as Web browsers) are not designed to try multiple IP addresses just in case one is down, so this approach is generally only workable if the multihomed network only has clients and not servers.

2.       Get two connections to the same ISP. This ISP will then make sure you can use the same IP addresses over both connections. How exactly this is configured on your side depends on your ISP and on what your requirements are. A setup where you use two routers, one for each connection, is of course more complex, but this way you can survive a router going down. In many cases, the actual configuration will be very similar to the next option (BGP), but because you do not need a "real" AS (Autonomous System) number or an independent IP address range, it is much easier and cheaper to get this off the ground, and your ISP protects you against most configuration mistakes. Having two connections to the same ISP has one major downside: you still have to depend on a single ISP. Fortunately, ISP-wide network failures are very rare, but all ISPs have points of presence or connections to certain external networks that go down from time to time. Then there is the risk of your ISP going belly-up.

3.       Get two connections to two ISPs, but use the same IP address range over both. This way, you are not only safe from physical problems with telco connections, but also from ISP-wide failures and nearly all types of configuration problems that affect just a single ISP: if one fails, you just use the other. This is done on a per-destination basis: your router can figure out which destinations are reachable over which ISP, and route packets accordingly. This works extremely well, but there is still a downside: all major routers throughout the Internet must know over which ISPs your IP addresses are reachable at all times. This means you have to participate in inter-domain routing using the Border Gateway Protocol (BGP). (A per-destination route-selection sketch follows this list.)
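To illustrate the per-destination routing in option 3, here is a minimal Python sketch; the prefixes and provider names are hypothetical, and a real router would learn them via BGP rather than from a hard-coded list:

import ipaddress

# Hypothetical routes learned from two providers, plus a default route.
routes = [
    (ipaddress.ip_network("203.0.113.0/24"), "ISP-A"),
    (ipaddress.ip_network("198.51.100.0/24"), "ISP-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "ISP-A"),
]

def next_hop(destination):
    # Longest-prefix match: the most specific matching route wins.
    matches = [(net, isp) for net, isp in routes
               if ipaddress.ip_address(destination) in net]
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

print(next_hop("198.51.100.25"))   # ISP-B
print(next_hop("192.0.2.99"))      # falls through to the default via ISP-A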

 

As noted earlier, many discussions of multihoming focus on the details of implementation - BGP, multiple DNS entries for a server, and so on - but it is wise to look systematically at the requirements before selecting a means of resilient connectivity, because one implementation technique is not appropriate for all requirements, and poor implementations can jeopardize the proper function of global routing or DNS.

 

Multihoming in IPv4 is mostly a settled matter, but that is not the case with IPv6. Multihoming in the next-generation IPv6 protocol is not yet standardized, as discussions about the various possible approaches are still unresolved, but there are signposts for the technical warrior who cares to tread this path! Most organisations, out of fear of disrupting their networks and the attendant costs, have stuck to IPv4 multihoming for now, preferring to build a native IPv6 network from the ground up and then multihome it on a site-by-site basis before going enterprise-wide. Be that as it may, our objective here is to look at the developments in multihoming in IPv6, and we shall now proceed to study the technical challenges attached to this enterprise.

 

IPv6 Multihoming : Some of the generic forms of architectural approaches towards smooth transition to IPv6 multi-homing include:

·         Routing: The IPv4 multi-homing approach may be extended to IPv6 as well, with transit ISPs specifying the local site's address prefix as a distinct routing entry. Provider Independent (PI) Address Space is offered in IPv6. However some people feel that the resultant increased routing table size is likely to be too high for current router hardware to handle efficiently. One possibility is that new hardware with higher memory can be produced at less cost and will be able to handle this.

·         Mobility: An IPv6-specific mobility approach to be devised.

·         New Protocol Element: A new element to be inserted in the protocol stack that manages a determined identity for the session.

·         Modifying a Protocol Element: The transport or IP protocol stack element in the host may be suitably modified to cope with dynamic changes to the forwarding locator.

·         Modified Site-Exit Router: The site-exit router and local forwarding system can be suitably modified to allow various behaviours, including source-based forwarding, site-exit hand-offs, and address rewriting by site-exit routers. (Source: RFC 4177, ftp://ftp.rfc-editor.org/in-notes/rfc4177.txt)

 

Multihoming in the IPv6 protocol is still in its infancy, the various approaches are still under consideration, and it will be some time before a completely standardized solution emerges once all the issues are resolved. However, there are some solutions that a number of vendors are looking at, and possibly the most common method of IPv6 multihoming at present is Provider Independent (PI) address space. (Tech note: Provider Independent Address Space (PI addresses) consists of Internet Protocol addresses assigned by Regional Internet Registries directly to an end-user organization, without going through an Internet Service Provider. It offers the end-user the opportunity to change service providers without changing addresses, and in particular to use multiple service providers at once in a multihomed configuration, but it creates problems for address aggregation as described in Classless Inter-Domain Routing (CIDR). Contrast this with Provider Aggregatable Address Space (PA addresses), which are Internet Protocol addresses assigned by Regional Internet Registries to an Internet Service Provider and which can be aggregated into a single route advertisement for improved Internet routing efficiency. Unlike Provider Independent addresses, the end-user cannot take the numbers with them if they change ISPs.)

This technique has the advantage of working like IPv4, supporting traffic balancing across multiple providers, and maintaining existing TCP and UDP sessions through cutovers. Critics say that the increased size of routing tables needed to handle multi-homing in this way will overwhelm current router hardware. Proponents say that new hardware will be able to handle the increase due to cheaper memory, which drops in price with Moore's law. Proponents also say this is the only viable solution right now, and the worse is better philosophy supports the idea that it is better to deploy an imperfect solution now than a perfect solution after it is too late.

 

Because the various approaches are still under consideration, a completely standardized IPv6 multihoming solution remains some way off. Due to the growth of the BGP routing tables in the Internet, whatever way is chosen to multihome in IPv6 must allow route aggregation in order to preserve the scalability of the interdomain routing system. Things are still developing and a number of solutions, including shim6, are still being refined; hopefully, before we have a full-blown implementation of IPv6, the question of multihoming in IPv6 will be a settled matter.

 

 

The Challenge of NAT in IPv6

 

The Internet has grown larger than anyone ever imagined it could be! Although the exact size is unknown, the current estimate is that there are about 150 million hosts and over 500 million users. When IP addressing first came out, everyone thought there were plenty of addresses to cover any need; theoretically, you could have 4,294,967,296 unique addresses. However, the actual number of available addresses is smaller because of the way the addresses are separated into classes and the need to set aside some of them for multicasting, testing, or other specific uses. One of the most often-stated ‘justifications’ for IPv6 is the issue of IPv4 address exhaustion; with the unprecedented expansion of Internet usage in recent years - especially in population-dense countries like India and China - the impending shortage of address space was recognized by 1992 as a serious limiting factor to the continued growth of an Internet run on IPv4.

With the explosion of the Internet and the increase in home and business networks, the number of available IP addresses is simply not enough, and this is one of the main reasons for the development of IPv6, which will give greater address space. IPv6 addresses have four times as many bits as IPv4 addresses (128 vs. 32), which works out to 340,282,366,920,938,463,463,374,607,431,768,211,456 unique addresses.
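The address-space arithmetic is easy to check (assuming, for the moment, that no addresses are reserved):

print(2 ** 32)    # 4,294,967,296 possible IPv4 addresses
print(2 ** 128)   # 340,282,366,920,938,463,463,374,607,431,768,211,456 for IPv6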

Network Address Translation (NAT) was not the only option designed to address the IP address shortage; another popular scheme is Classless Inter-Domain Routing (CIDR). A CIDR address is still a 32-bit address, but it is hierarchical rather than class-based. NAT was developed specifically to address the IP address shortage in particular instances where the cost of extra IP addresses is an issue. NAT is therefore of particular interest in countries other than the United States, where historically there have been fewer addresses allocated per capita, and also in small businesses and home offices.

While NAT came into being in the 1990s as a popular tool for alleviating the IPv4 address exhaustion problem, it eventually became an indispensable tool for most home and small-business networks. Port Address Translation (PAT) is the Cisco term for the common many-to-one variant of NAT, which enables a LAN to use one set of IP addresses for internal traffic and a second set of addresses for external traffic. This allows a company to shield internal addresses from the public Internet. NAT is used by a device (firewall, router, or computer) that sits between an internal network and the rest of the world. There are two main types of NAT: static and dynamic. In static NAT, the public IP address is always the same, allowing an internal host, such as a Web server, to have an unregistered private IP address and still be reached over the Internet. In dynamic NAT, a private IP address is mapped to a public IP address drawn from a pool of registered public IP addresses. By keeping the internal configuration of the private network hidden, dynamic NAT helps conceal the network from outside users.
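A toy sketch of dynamic NAT follows; the address pool and private hosts are hypothetical, and a production NAT would also track ports and connection state:

public_pool = ["203.0.113.10", "203.0.113.11"]   # registered public addresses
nat_table = {}                                   # private address -> public address

def translate(private_ip):
    # Dynamic NAT: map a private address to a public one drawn from the pool.
    if private_ip not in nat_table:
        if not public_pool:
            raise RuntimeError("public address pool exhausted")
        nat_table[private_ip] = public_pool.pop(0)
    return nat_table[private_ip]

print(translate("192.168.1.20"))   # first internal host gets 203.0.113.10
print(translate("192.168.1.21"))   # second internal host gets 203.0.113.11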

 NAT is sometimes confused with proxy servers but there are definite differences. NAT is transparent to the source and destination computers. Neither one realises that it is dealing with a third device. But a proxy server is not transparent. The source computer knows that it is making a request to the proxy server and must be configured to do so. The destination computer thinks that the proxy server IS the source computer and deals with it directly. Also, proxy servers usually work at Layer 4 (transport) of the OSI Reference Model, while NAT operates at Layer 3 (network). Working at a higher layer makes proxy servers slower than NAT devices in most cases.

 The usage of NAT also carries certain drawbacks: 

1.             Network Address Translation does not allow the true end-to-end connectivity that is required by some real-time applications. A number of real-time applications require the creation of a logical tunnel to exchange data packets quickly in real time. This demands fast and seamless connectivity devoid of any intermediaries, such as a proxy server, that tend to complicate and slow down the communications process.

2.             NAT creates complications in the functioning of tunnelling protocols. Any communication that is routed through a proxy server tends to be comparatively slow and prone to disruptions. Certain critical applications offer no room for such inadequacies; examples include telemedicine and teleconferencing. Such applications find the process of network address translation a bottleneck in the communication network, creating avoidable distortions in end-to-end connectivity.

3.             NAT acts as a redundant channel in online communication over the Internet. The twin reasons for the widespread popularity and subsequent adoption of network address translation were the shortage of IPv4 address space and security concerns. Both these issues have been fully addressed in the IPv6 protocol. As IPv6 slowly replaces the IPv4 protocol, the network address translation process will become redundant, consuming scarce network resources to provide services that will no longer be required over IPv6 networks.

Because a NAT stores dynamic address-translation state within the unit, NATs introduce a single point of failure for their set of associated clients. More fundamentally, NAT breaks the end-to-end transparency architecture of the Internet, and this can cause NATs to provide a very restricted view of the Internet. NAT maps a single public address onto a very large number of private addresses, so a large number of computers can share that single public address. The immediate benefit of NAT is that it allows a single Internet connection with a single IP address to be shared. However, there is a hidden cost: NAT breaks protocols that require incoming connections and protocols that carry IP addresses in them.

 An example of this is VoIP: a VoIP application on a computer (a "softphone") or VoIP phone registers with a SIP server, and then the SIP server tells the application or phone when there's an incoming call. The packets that carry the actual conversation are then exchanged directly between the calling parties with no involvement from the server. But in order to connect, the server must be able to tell each end where to send the VoIP packets. This must be a real, public address, and not the private address the VoIP application thinks it has. And each end must be able to receive those incoming packets, which don't match a prior outgoing session in the NAT.

There are of course ways to make this work, but they require the NAT to be aware of the applications and/or the applications to be aware of the NAT. NAT devices usually have "application layer gateways" (ALGs) for popular protocols that don't normally work through NAT. For instance, a SIP ALG will monitor the traffic between the VoIP application and the SIP server, rewrite the private addresses that it sees there into the NAT’s public address, and make sure the incoming packets from the remote VoIP application are delivered correctly. Alternatively, the application can use protocols such as the UPnP Internet Gateway Device protocol or the NAT Port Mapping Protocol (NAT-PMP) to contact the NAT device, obtain the public address, and ask the NAT to forward certain incoming packets.

One of the promises of IPv6 is that the almost infinite number of addresses and the better (but not perfect) renumbering make NAT unnecessary, so it will once again be possible to deploy new applications without the cumbersome workarounds or random failures that the widespread use of NAT imposes in today's IPv4 world. The Internet Engineering Task Force (IETF) has traditionally been highly critical of NAT but, despite that, it developed a technique called Network Address Translation - Protocol Translation (NAT-PT, RFC 2766) as a means for hosts that run IPv6 to communicate with hosts that run IPv4. So far, the usual way to deploy IPv6 has been to run IPv4 and IPv6 side by side.

In addition to the list of practical issues, there is also the more fundamental question: do we want the IPv6 Internet to inherit the same restrictions that are present in today's IPv4 Internet? IPv6 was developed before NAT was in general use, and so far the assumption has always been that NAT in IPv6 is unnecessary and undesirable. But the use of NAT-PT would pretty much import the IPv4 NAT issues into the IPv6 world. On the other hand, some people argue that the lack of NAT makes it harder to transition to IPv6 because NAT is an integral part of the way that networks are deployed; taking away this tool would make network operators less willing to deploy the new protocol. However, this could just be "IPv4 thinking". For better or worse, IPv6 is different from IPv4, both as a natural result of the longer addresses and because the IETF used the opportunity to redesign IP and make some improvements unrelated to the address length. Unless ISPs decide to give IPv6 users only a single address, as with IPv4, there won't be any need to use NAT for the majority of consumers. This implies that it's not a given that the ALGs and other workarounds that make NAT tolerable will be available in IPv6, even if some enterprise users want to stick to NAT when moving to IPv6.

 As businesses rely more and more on the Internet, having multiple points of connection to the Internet is becoming an integral part of business network strategy. Multiple connections, known as multi-homing, will be the topic of our next discussion.

 Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

 

WiMax, Wi-Fi : Technology and Implementation (Part Three)

To conclude our study of WiMAX and Wi-Fi and related technologies, we shall take a peek into the Bluetooth portion of the technologies. In the last two papers, we explored Wi-Fi in part one and WiMAX in part two. In this third and final part of the paper, we shall look at Personal Area Networks (PANs), as exemplified by the Bluetooth technology. In its most basic form, Bluetooth is a short-haul wireless protocol that is used to communicate from one device to another in a small area, usually less than 30 feet across.

Bluetooth started as a “wire-replacement” protocol for operation at short distances. A typical example is the connection of a phone to a PC, which, in turn, uses the phone as a modem. The technology operates in the unlicensed 2.4-GHz ISM band. The standard uses Frequency Hopping Spread Spectrum (FHSS) technology. There are 79 hops in BT, displaced by 1 MHz, starting at 2.402 GHz and ending at 2.480 GHz. Bluetooth provides a one-million-bit-per-second connection between two devices, with a 768-kbps data channel.

Bluetooth is a standard communications protocol for wireless personal area networks (PANs). It acts as a medium for electronic devices such as mobile phones, laptops, PCs, printers, digital cameras, and video game consoles to connect and exchange information. It allows these devices to communicate with each other over a secure connection through an unlicensed short-range radio frequency when they are in range. It simplifies the process of communication through easy discovery and setup of services between devices: a device automatically advertises all of its services so that it is easy for another device to select the required service.

The name Bluetooth was coined after Harald Blaatand (Bluetooth), the king of Denmark and Norway in the 10th century. He was responsible for the unification of warring tribes, and today's Bluetooth is responsible for the unification of different technologies. Ericsson developed the Bluetooth technology, which was later formalized by the Bluetooth Special Interest Group (SIG). The SIG was formed by a group of electronics manufacturers - Ericsson, IBM, Intel, Nokia, and Toshiba - in 1998, and today it has more than 7,500 member companies worldwide. It is responsible for research and development in the field of Bluetooth technology, and the Bluetooth specifications are defined and licensed by the SIG. The operating range of Bluetooth depends on the device class.

BT ranges can vary from a low-power range of 1 meter (1 mW) for Class 3 devices, 10 meters (2.5 mW) for Class 2 devices, to 100 meters (100 mW) for Class 1 devices. BT Version 1.2 offers a data rate of 1 Mbps, and BT Version 2.0 with Enhanced Data Rate (EDR) supports a data rate of 3 Mbps. BT Version 1.1 was ratified as the IEEE Standard 802.15.1 in 2002.


Bluetooth belongs to a category of Short-Range Wireless (SRW) technologies originally intended to replace the cables connecting portable and fixed electronic devices. It is typically used in mobile phones, cordless handsets, and hands-free headsets (though it is not limited to these applications). The specifications detail operation in three different power classes—for distances of 100 meters (long range), 10 meters (ordinary range), and 10 cm (short range).
Bluetooth operates in the unlicensed ISM band at 2.4 GHz (similar to 802.11 b/g wireless), but it is most efficient at short distances and in noisy frequency environments. It uses FHSS technology—that is, it avoids interference from other signals by hopping to a new frequency after transmitting and receiving a packet. Specifically, 79 hops are displaced by 1 MHz, starting at 2.402 GHz and finishing at 2.480 GHz.


Bluetooth can operate in both point-to-point and logical point-to-multipoint modes. Devices using the same BT channel are part of a piconet that includes one master and one or more slaves. The master BT address determines the frequency hopping sequence of the slaves. The channel is also divided into time slots, each 625 microseconds in duration. The master starts its transmission in even-numbered time slots, whereas the slave starts its transmission in odd-numbered slots.

BT specifies two types of links, a Synchronous Connection-Oriented (SCO) link and an Asynchronous Connectionless Link (ACL). The SCO link is a symmetric point-to-point link between a master and a single slave in the piconet, whereas the ACL link is a point-to-multipoint link between the master and all the slaves participating in the piconet. Only a single ACL link can exist in the piconet, as compared to several individual SCO links.

A piconet is an ad-hoc network that uses a single master Bluetooth device to communicate with up to seven active devices. It can further connect up to 255 inactive devices that can be made active whenever required. In this setup, the devices will switch roles and the slave device can become the master device at any time. The switching from one device to another occurs in round-robin fashion.

There are many flavours of the Bluetooth technology, and it has grown both in technology and adoption since its version 1.

Bluetooth 1.0 and 1.0B
• Mandatory Bluetooth device address (BD_ADDR) for transmission
• Product interoperability problems
• Anonymity rendering was not possible at protocol level
Bluetooth 1.1
• Defined as IEEE Standard 802.15.1-2002
• Fixed 1.0B specification errors
• Added support for the Received Signal Strength Indicator (RSSI)
• Supported non-encrypted channels
Bluetooth 1.2
• Defined as IEEE Standard 802.15.1-2005
• Included backward-compatible feature
• Provided better Connection and Discovery
• Offered higher transmission speeds
• Extended Synchronous Connections (eSCO) to improve voice quality
• Supported Host Controller Interface (HCI) for three-wire UART
• Implemented adaptive frequency-hopping spread spectrum (AFH) to improve resistance to radio frequency interference
Bluetooth 2.0
• Included backward-compatible feature
• Introduced an Enhanced Data Rate (EDR) of 3.0 Mbit/s
• Provided 3-10 times faster transmission speed
• Reduced duty cycle ensured low power consumption
• Simplified multi-link scenarios
Bluetooth 2.1
• Included fully backward-compatible feature
• Extended inquiry response
• Implemented Sniff sub-rating to reduce the power consumption for devices in sniff low-power mode
• Increased battery life of mouse and keyboards by a factor of 3 to 10
• Stronger encryption through Encryption Pause Resume feature
• Better secured Simple Pairing
• Enabled automatic secure Bluetooth connections for NFC radio interfaces
Bluetooth 3.0
• Code-named Seattle, with the version number TBD (To Be Determined)
• Best suitable to adopt ultra-wideband (UWB) radio technology
• Offered high-speed/high-data-rate options
• Enabled high-quality video and audio applications for portable devices

Bluetooth “versus” Wi-Fi

A few years ago, some marketing literature tried to emphasize BT and Wi-Fi as competing technologies. Though both operate in the ISM spectrum, they were invented for different reasons. Whereas Wi-Fi was often seen as a “wireless Ethernet,” BT was initially seen purely as a cable- or wire-replacement technology. Uses such as dialup networking and wireless headsets fit right into this usage model. Recently, the discussion has focused more on coexistence instead of competition because they serve primarily different purposes. There are still some concerns related to their coexistence because they operate over the same 2.4-GHz ISM band.

To recapitulate, the Bluetooth physical layer uses FHSS with a 1-MHz-wide channel at 1600 hops/second (that is, 625 microseconds in every frequency channel). Bluetooth uses 79 different channels. Standard 802.11b/g uses Direct sequence Spread Spectrum (DSSS) with 20-MHz-wide channels - it can use any of the 11 20-MHz-wide channels across the allocated 83.5 MHz of the 2.4-GHz frequency band. Interference can occur either when the Wi-Fi receiver senses a BT signal at the same time that a Wi-Fi signal is being sent to it (this happens when the BT signal is within the 22-MHz-wide Wi-Fi channel) or when the BT receiver senses a Wi-Fi signal.
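A quick back-of-the-envelope check of that overlap, assuming a Wi-Fi channel centred on the hypothetical choice of channel 6 (2437 MHz) and the 22-MHz width mentioned above:

bt_hops = [2402 + n for n in range(79)]   # Bluetooth hop channels in MHz
wifi_centre = 2437                        # Wi-Fi channel 6 (hypothetical choice)
overlapping = [f for f in bt_hops if abs(f - wifi_centre) <= 11]
print(len(overlapping), "of 79 Bluetooth hop channels fall inside that Wi-Fi channel")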

BT 1.2 has made some enhancements to enable coexistence, including Adaptive Frequency Hopping (AFH) and optimizations such as Extended SCO channels for voice transmission within BT. With AFH, a BT device can indicate to the other devices in its piconet about the noisy channels to avoid. Wi-Fi optimization includes techniques such as dynamic channel selection to skip those channels that BT transmitters are using. Access points skip these channels by determining which channels to operate over based on the signal strength of the interferers in the band. Adaptive fragmentation is another technique that is often used to aid optimization. Here, the level of fragmentation of the data packets is increased or reduced in the presence of interference. For example, in a noisy environment, the size of the fragment can be reduced to reduce the probability of interference. Another way to implement coexistence is through intelligent transmit power control. If the two communicating (802.11 or Wi-Fi) devices are close to each other, they can reduce the transmit power, thus lowering the probability of interference with other transmitters.

Sometimes, something that can seem like a good idea at the time can have unforeseen consequences; Network Address Translation (NAT) is a perfect example. With the expected adoption of IPv6 and the wide implementation of NAT, especially in Third World countries like Nigeria, there are major challenges, and this will be the subject of our focus in our next discussion.

Till then, whatever you are routing, be it voice or data or video, may it be successful!
 

 

Wi-Fi : Technology and Implementation (Part II)

 

In the last paper, we took an introductory look at the 802 standards and the basic implementation of those standards, especially the all-important MAC and LLC aspects of the implementation, and we went further to look at WLAN in detail, including aspects of its implementation in IPv6. This paper will take a closer look at WiMAX and its implementation, with a special focus on those aspects of the technology that are particularly emphasised by IPv6 nodes.

 

WiMAX: WiMAX stands for Worldwide Interoperability for Microwave Access, and it is a Broadband Wireless Access (BWA) solution that is based on standards recommendations from both the IEEE 802.16 group and the European Telecommunications Standards Institute (ETSI). It is an open, worldwide broadband telecommunications standard for both fixed and mobile deployments. Its purpose is to ensure the delivery of wireless data at multi-megabit rates over long distances in multiple ways. WiMAX allows connecting to the Internet without physical elements such as routers, hubs, or switches. It operates at higher speeds, over greater distances, and for a greater number of people compared with Wi-Fi.

 

Two standards exist for WiMAX: 802.16d-2004 for fixed access, and 802.16e-2005 for mobile stations. The WiMAX Forum certifies systems for compatibility under these two standards and also defines the network architecture for implementing WiMAX-based networks. WiMAX is based on interoperable implementations of the IEEE 802.16 wireless networking standard. The latest Mobile WiMAX is based on IEEE 802.16e-2005, an amendment of IEEE 802.16-2004. IEEE Std 802.16-2004, which replaced IEEE Standards 802.16-2001, 802.16-2002, and 802.16-2003, addressed only fixed systems, but each of these updates added various functionalities and expanded the reach of the standard.

·         IEEE 802.16 (First Version) addressed the line of sight (LOS) access in spectrum ranges between 10 GHz and 66 GHz.

·         IEEE 802.16a specification covered bands in the ranges between 2GHz and 10 GHz.

·         IEEE 802.16c added support for spectrum ranges both licensed and unlicensed from 2GHz to 10 GHz. Improved Quality of Service (QOS) and support for HiperMAN European standard are the highlights of this specification.

·         IEEE 802.16d supported OFDM version with 256 sub-carriers.

·         IEEE 802.16e-2005 used Scalable Orthogonal Frequency-Division Multiple Access (SOFDMA). It also used Multiple Input Multiple Output Communications (MIMO) to support multiple antennas.

 

 

WiMAX can be classified as a last-mile access technology similar to DSL, with a typical range of 3 to 10 kilometres and speeds of up to 5 Mbps per user with non-line-of-sight coverage. WiMAX access networks can operate over licensed or unlicensed spectrum in various regions or countries, though licensed-spectrum implementations are more common. WiMAX operation is defined over frequencies between 2 and 66 GHz, parts of which may be unlicensed spectrum in some countries. The lower frequencies can operate over longer ranges and penetrate obstacles, so initial network rollouts are in this part of the spectrum, with the 2.3-, 2.5-, and 3.5-GHz frequency bands being common. Channel sizes are 3.5, 5, 7, and 10 MHz for 802.16d-2004 and 5, 8.75, and 10 MHz for 802.16e-2005. WiMAX networks are often used to backhaul data from Wi-Fi access points. In fact, they are often envisaged as replacements for the current implementation of metro Wi-Fi networks that use 802.11b/g for client access and 802.11a for backhaul to connect to the other parts of the network. WiMAX specifications address both Line of Sight (LOS) and Non Line of Sight (NLOS) topologies. In situations where LOS is possible, WiMAX cell coverage can be up to 50 km; where NLOS is the operative topology, it may trade down to 8 km.

 

The 802.16d-2004 standard uses Orthogonal Frequency Division Multiplexing (OFDM) similar to 802.16a and 802.16g, whereas 802.16e-2005 uses a technology called Scalable Orthogonal Frequency Division Multiple Access (S-OFDMA). This technology is more suited to mobile systems because it uses subcarriers that enable the mobile nodes to concentrate power on the subcarriers with the best propagation characteristics (a mobile environment has more dynamic variables); the 802.16e radio and signal processing is correspondingly more complex. Unlike 802.11, which supports only Time-Division Duplexing (TDD), where transmit and receive functions occur on the same channel but at different times, 802.16 offers both TDD and Frequency-Division Duplexing (FDD), where transmit and receive occur on different frequencies and can also occur at different times. Another innovation in WiMAX is similar to the scheme in Code Division Multiple Access (CDMA): subscriber stations are able to adjust their power based on the distance from the base station, unlike client stations in an 802.11 network.

 

WiMAX base stations use a scheduling algorithm for medium access by the subscriber stations. Access is through an access slot, assigned to each subscriber station, that can be enlarged or contracted (to more or fewer slots). Quality-of-Service (QoS) parameters can be controlled by balancing the time-slot assignments. The base-station scheduling types are unsolicited grant service, real-time polling service, non-real-time polling service, and best effort; depending upon the type of traffic and service requested, one of these scheduling types is used.
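As a toy illustration (not the 802.16 scheduler itself), the four scheduling types can be pictured as queues served in order of their service guarantees:

SERVICE_ORDER = ["UGS", "rtPS", "nrtPS", "BE"]   # unsolicited grant .. best effort
queues = {name: [] for name in SERVICE_ORDER}

queues["BE"].append("web download")
queues["rtPS"].append("streaming video frame")
queues["UGS"].append("voice sample")

def grant_next_slot():
    # Hand the next slot to the highest scheduling class with traffic waiting.
    for name in SERVICE_ORDER:
        if queues[name]:
            return name, queues[name].pop(0)
    return None

print(grant_next_slot())   # ('UGS', 'voice sample') is served first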

 

The WiMAX network architecture is specified through functional entities (see the figure below), so more than one functional entity can be combined to reside on a single network element. The Mobile Station (MS) connects to the Access Service Network (ASN) through the R1 interface, which is based on 802.16d/e. The ASN is composed of one or more base stations (BSs) with one or more ASN gateways to connect to other ASNs and to the Connectivity Service Network (CSN). The CSN provides IP connectivity for WiMAX subscribers and performs functions such as Authentication, Authorization, and Accounting (AAA), ASN-CSN tunnelling, inter-CSN tunnelling for roaming stations, and so on. A critical tenet of the WiMAX Forum network architecture is that the CSN must be independent of the radio protocols of 802.16.

 

 

Courtesy : WiMAX Forum

 

So, how does WiMAX scale in IPv6? Basically, IPv6 has been built with a focus on the needs of the next-generation Internet. With its mobility support features and security, IPv6 has better compatibility with WiMAX. However, deploying the new-generation Internet Protocol (IPv6) over 802.16-based wireless networks faces an important challenge, as the IEEE 802.16 standard does not support IPv6 functionality directly. In fact, unlike the other 802 standards, IEEE 802.16 is based on a point-to-multipoint (PMP) communication model, where no direct connection (at the MAC layer) is possible between two stations; all communications pass through the Base Station. As a result, the 802.16 standard is unable to handle any form of IP multicast (group) communication, and hence it cannot sustain new IPv6 functionality, namely the auto-configuration mechanism, which relies particularly on IP multicast communication.

 

With this in mind, many organizations have initiated work to build a system that focuses on linking the Layer 3 technology of IPv6 with the Layer 2 technology of IEEE 802.16.

·         The IETF has initiated Working Group on "IPv6 over IEEE 802.16(e) Networks" to maintain IP connectivity over Mobile WiMAX networks.

·         The WiMAX Forum has formed an IPv6 Sub team to work on mobile support such as Cellular Mobile IPv6 (CMIPv6).

·         The IPv6 Forum, together with the WiMAX Forum, published a paper, Vision 2010, focusing on IPv6 over WiMAX.

 

Presently, the 16ng Group is still proffering solutions to deploy IPv6 over 802.16 by looking at different aspects of the technology.

 

Next week, we will continue our study of Wi-Fi, Bluetooth and WiMAX  - Technology and Implementation by looking at the Bluetooth aspect of the technology.

 

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

 

 

WiMax, Wi-Fi : Technology and Implementation (Part 1)

 

While the enabling technologies behind wireless networking are fairly well understood and appreciated, there is still a great amount of confusion as regards where each component of the technology fits in the wireless realm, and the limits of the application of such technologies in practical terms. Be that as it may, it is generally recognised and acknowledged that these wireless technologies are of immense benefit in many areas of work and play. Some basic, non-technical definitions will help us introduce the technologies and locate them properly.

While conventional local area networks use copper wire, coaxial, and fibre-optic cable as common carrier media, WLANs make use of radio frequencies. The area covered by a WLAN is usually restricted, due to the low allowable power radiation. A wireless LAN (or WLAN) is a wireless local area network that uses radio waves as its carrier: the last link with the users is wireless, to give a network connection to all users in the surrounding area. Areas may range from a single room to an entire campus. The backbone network usually uses cables, with one or more wireless access points connecting the wireless users to the wired network. All WLAN standards are based on the IEEE specifications, and they are designated according to their operating frequency, range, speed, and performance. All these factors need to be understood in making an informed choice when buying WLAN products and services.

Some Background: The 802 standards are a set of standards for LAN and MAN data communications developed through the IEEE’s 802 Project. The standards also include an overview of recommended networking architectures. The 802 standards follow a unique numbering convention; a number followed by a capital letter denotes a standalone standard, while a number followed by a lower-case letter signifies either a supplement to a standard or a part of a multiple-number standard (e.g. 802.1 & 802.3).

The 802 standards segment the data link layer into two sub-layers :

·         A Media Access Control (MAC) layer that includes specific methods for gaining access to the LAN. These methods – such as Ethernet’s random access method and Token Ring’s token passing procedure – are in the 802.3, 802.5, and 802.6 standards.

·         A Logical Link Control (LLC) Layer, described in the 802 standard, which provides for connection establishment, data transfer, and connection termination services. LLC specifies three types of communication links :

o       An Unacknowledged Connectionless Link, where the sending and receiving devices do not set up a connection before sending. Instead, messages are sent on a “best effort” basis, with no provision for error detection, error recovery, or message sequencing. This type of link is best suited for applications where the higher layer protocols can provide the error correction and recovery functions, or where the loss of a broadcast message is not critical.

o       A Connection-Mode Link, where a connection between message source and destination is established prior to transmission. This type of link works best in applications such as file transfer, where large amounts of data are being transmitted at one time.

o       An Acknowledged Connectionless Link that, as the name denotes, provides for acknowledgement of messages without burdening the receiving devices with maintaining a connection. For this reason, it is most often used for applications where a central processor communicates with a large number of devices with limited processing capabilities.

 

 In general terms,

·         802.11 defines Wireless Local Area Networks (WLAN) and is often known as Wi-Fi®

·         802.15 defines Wireless Personal Area Networks (WPAN), as depicted by the Bluetooth technology

·         802.16 defines WiMAX (Worldwide Interoperability for Microwave Access) for Metropolitan Area Networks

·         802.20 defines Wireless Wide Area Networks (WWAN).

 

WPANs operate in the range of a few feet, whereas WLANs operate in the range of a few hundred feet and WWANs beyond that, and in many cases in the range of a few miles. Let us take a closer look at the technologies. 

WLAN : There are currently three major WLAN standards, and all of them operate using radio frequency (RF). Colloquially, the standards are referred to as 802.11b, 802.11a, and 802.11g; together they are collectively called Wi-Fi® (Wireless Fidelity). 802.11b and 802.11g operate in the 2.4GHz frequency band, and 802.11a operates in the 5GHz frequency range; all these bands lie within the license-free spectrum range reserved for Industrial, Scientific, and Medical (ISM) applications. More recently, a high-speed 802.11 WLAN has been proposed – the 802.11n WLAN, which operates in both the 2.4- and 5-GHz bands. 

The 2.4-GHz frequency band used for 802.11 is the band between 2.4 and 2.485 GHz for a total bandwidth of 85 MHz, with 3 separate non-overlapping 20-MHz channels. In the 5-GHz band, there are a total of 12 channels in 3 separate subbands—5.15 to 5.25 GHz (100 MHz), 5.25 to 5.35 GHz (100 MHz), and 5.725 to 5.825 GHz (100 MHz). 
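
The channel arithmetic behind those figures can be checked with a few lines of code. The sketch below is a simplified illustration in Python: it assumes the common 802.11b/g layout in which channel n (for channels 1 to 13) is centred at 2407 + 5n MHz and each transmission occupies roughly 20 MHz, which is why only channels such as 1, 6, and 11 can be used together without overlap.

    # Sketch: 2.4-GHz channel centre frequencies and overlap (channels 1-13).
    CHANNEL_WIDTH_MHZ = 20          # approximate occupied bandwidth

    def centre_mhz(channel: int) -> int:
        return 2407 + 5 * channel   # valid for channels 1..13

    def overlaps(a: int, b: int) -> bool:
        return abs(centre_mhz(a) - centre_mhz(b)) < CHANNEL_WIDTH_MHZ

    for ch in (1, 6, 11):
        print("channel", ch, "->", centre_mhz(ch), "MHz")

    print("1 and 6 overlap?", overlaps(1, 6))   # False: usable together
    print("1 and 3 overlap?", overlaps(1, 3))   # True: they interfere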

The more common mode of operation in 802.11 is the infrastructure mode, where the stations communicate with other wireless stations and wired networks (typically Ethernet) through an access point. The other mode is the ad-hoc mode, where the stations can communicate directly with each other without the need for an access point; we will not discuss this mode in this article. The access point bridges traffic between wireless stations through a lookup of the destination address in the 802.11 frame. 

IEEE 802.1X is a standard protocol for port-based Network Access Control, and it provides authentication to devices attached to a LAN port. It establishes a point-to-point connection, or prevents access from that port if authentication fails. 


It is not only valuable for authenticating and controlling user traffic to a protected network, but also effective for dynamically varying encryption keys. It carries the Extensible Authentication Protocol (EAP) over both wired and wireless LANs, allowing multiple authentication methods such as token cards, one-time passwords, certificates, and public key authentication.
 

Looking at 802.11 working with IPv6, we observe that IEEE 802.1X addresses IEEE 802.11 security issues such as:

1.       User Identification & Strong authentication

2.       Dynamic key derivation

3.       Mutual authentication

4.       Per-packet authentication

5.       Dictionary attack precautions

 

IEEE 802.1X Authentication in IEEE 802.11 (WLAN with IPv6 Nodes) : IEEE 802.1X authentication occurs after 802.11 association. The client and access point have an Ethernet connection after association. All non-EAPOL traffic from the client is filtered prior to authentication. If authentication is successful, the access point removes the filter. 802.1X messages are sent to the destination MAC address. 
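
The controlled-port behaviour described above can be pictured with a very small sketch. This is plain Python with a hypothetical port object, not any vendor's implementation: before authentication only EAPOL frames (EtherType 0x888E) are admitted, and once the EAP exchange succeeds the filter is removed.

    # Sketch of the 802.1X port filter on an access point.
    EAPOL_ETHERTYPE = 0x888E        # EAP over LAN
    IPV6_ETHERTYPE = 0x86DD

    class PortState:
        def __init__(self):
            self.authenticated = False

        def admit(self, ethertype: int) -> bool:
            if self.authenticated:
                return True                      # port open: forward everything
            return ethertype == EAPOL_ETHERTYPE  # port closed: EAPOL only

    port = PortState()
    print(port.admit(IPV6_ETHERTYPE))   # False: IPv6 traffic dropped before auth
    print(port.admit(EAPOL_ETHERTYPE))  # True: EAP messages pass to the authenticator
    port.authenticated = True           # EAP/RADIUS success reported
    print(port.admit(IPV6_ETHERTYPE))   # True: filter removed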

It is possible to create an 802.1X authentication environment in an IPv6 environment, based on RADIUS. Many vendors, such as Cisco, HP, and Funk, have implemented RADIUS-based Authentication, Authorization and Accounting (AAA) systems in which the authentication server authenticates the mobile station. The mobile station, access point, and RADIUS server are IPv6 nodes and use EAP as the authentication method. The RADIUS server is used to process the IEEE 802.1X access requests of mobile stations to the IEEE 802.11 network. 

The RADIUS server identifies the mobile station by its Network Access Identifier (NAI) and authenticates its credentials. After authentication, it dynamically generates the encryption key for that mobile station and access point and distributes it to both. Using this Wired Equivalent Privacy (WEP) key, encryption and decryption of messages take place. Integrity and confidentiality between the mobile station and the access point are accomplished through the WEP encryption and decryption between them. 

Benefits of IEEE 802.1X :

·         Leverages existing standards EAP and RADIUS

·         Enables interoperable user identification

·         Authentication based on Network Access Identifier and credentials

·         Centralized authentication, authorization, and accounting

·         Scalable through EAP types

·         Dynamic derivation of WEP unicast session keys

·         Renewal of WEP unicast session keys

·         Encryption of all data, using dynamic keys

·         Supports password authentication and One-Time Passwords (OTP)

 Next week, we will continue our study of Wi-Fi, Bluetooth and WiMAX - Technology and Implementation by looking at the WiMAX aspect of the technology.

 

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

 

IPv6 Transition Technologies

As part of the early IPv6 development work, the Internet Engineering Task Force (IETF) recognised that the IPv4 to IPv6 transition would require transition technologies, and not just transition strategies. The key elements of these transition technologies are described in “Basic Transition Mechanisms for IPv6 Hosts and Routers”, published as RFC 4213. RFC 4213 specifies the core elements of the transition technologies, these being dual stack and configured tunnelling, while also defining a number of node types based on their protocol support, including legacy systems that only support IPv4, future systems that only support IPv6, and the dual, or IPv4/IPv6, node, which implements both IPv4 and IPv6.

The designers of IPv6 in the original “The Recommendation for the IP Next Generation Protocol” specification (RFC 1752) defined the following transition criteria:
• Existing IPv4 hosts can be upgraded at any time, independent of the upgrade of other hosts or routers.
• New hosts, using only IPv6, can be added at any time, without dependencies on other hosts or routing infrastructure.
• Existing IPv4 hosts, with IPv6 installed, can continue to use their IPv4 addresses and do not need additional addresses.
• Little preparation is required to either upgrade existing IPv4 nodes to IPv6 or deploy new IPv6 nodes.
The inherent lack of dependencies between IPv4 and IPv6 hosts, IPv4 routing infrastructure, and IPv6 routing infrastructure requires a number of mechanisms that allow seamless coexistence.

The ultimate goal of every transition strategy and technology is to move from IPv4 to IPv6 fully; however, a number of “halfway house” measures are recognised and provided for, including
• Using both IPv4 and IPv6
• IPv6 over IPv4 tunneling
• DNS infrastructure
While there are many transition technologies, we will look at the two most common ones in this paper, those being Dual Stack and Configured Tunnelling.

The Dual Stack (also known as dual IP layer) approach is considered the most straightforward approach to transition. A dual IP layer architecture contains both IPv4 and IPv6 Internet layers with a single implementation of Transport layer protocols such as TCP and UDP.

This method assumes that the host or router provides support for both IPv4 and IPv6 within its architecture, and thus has the capability to send and receive both IPv4 and IPv6 packets. Such a host or router can operate in any of three ways: with only the IPv4 stack enabled; with only the IPv6 stack enabled; or with both the IPv4 and IPv6 stacks enabled. A dual-stack node can therefore be configured with both 32-bit IPv4 and 128-bit IPv6 addresses, using DHCP to acquire its IPv4 addresses and mechanisms like stateless auto-configuration or DHCPv6 to obtain its IPv6 addresses. Today, IPv6 implementations are most likely dual stack, as IPv6-only products will have limited communication capabilities due to the present low level of IPv6 deployment.
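
As a concrete, if simplified, picture of a dual-stack node, the sketch below opens one IPv4 and one IPv6 listening socket for the same service using Python's standard socket module. The port number is hypothetical, and operating systems differ in whether a single IPv6 socket also accepts IPv4-mapped connections, so the explicit two-socket approach here is an illustrative assumption rather than the only way to do it.

    # Sketch: a dual-stack listener -- one socket per IP version.
    import socket

    PORT = 8080   # hypothetical service port

    s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)     # IPv4
    s4.bind(("0.0.0.0", PORT))
    s4.listen()

    s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)    # IPv6
    s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)  # keep it IPv6-only
    s6.bind(("::", PORT))
    s6.listen()

    print("listening for both IPv4 and IPv6 clients on port", PORT)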

In networking, tunnelling protocols enable new networking functions while still preserving the underlying network as it is. There may be several reasons why a network needs tunnelling: for example, to carry a payload over an incompatible delivery network, or to provide a secure path through an untrusted network.
IPv6 over IPv4 tunnelling is the encapsulation of IPv6 packets with an IPv4 header so that IPv6 packets can be sent over an IPv4 infrastructure. Within the IPv4 header:
• The IPv4 Protocol field is set to 41 to indicate an encapsulated IPv6 packet.
• The Source and Destination fields are set to IPv4 addresses of the tunnel endpoints. The tunnel endpoints are either manually configured as part of the tunnel interface or are automatically derived from the next-hop address of the matching route for the destination and the tunneling interface.
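
To make "protocol 41" concrete, here is a minimal sketch of the outer IPv4 header that a tunnel endpoint prepends to an IPv6 packet. It is pure Python struct packing with hypothetical tunnel endpoint addresses; the header checksum is left at zero and no options are included, so treat it as an illustration of the field layout rather than production code.

    # Sketch: outer IPv4 header for IPv6-over-IPv4 tunnelling (protocol = 41).
    import socket
    import struct

    def ipv4_tunnel_header(src: str, dst: str, ipv6_payload: bytes) -> bytes:
        version_ihl = (4 << 4) | 5                 # IPv4, 20-byte header
        total_length = 20 + len(ipv6_payload)
        return struct.pack(
            "!BBHHHBBH4s4s",
            version_ihl, 0, total_length,
            0, 0,                                  # identification, flags/fragment
            64, 41, 0,                             # TTL, protocol 41, checksum (0)
            socket.inet_aton(src), socket.inet_aton(dst),
        )

    # Hypothetical tunnel endpoints wrapping an already-built IPv6 packet
    dummy_ipv6 = b"\x60" + b"\x00" * 39            # 40-byte IPv6 header, version 6
    print(ipv4_tunnel_header("192.0.2.1", "198.51.100.7", dummy_ipv6).hex())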

IPv6 tunnelling enables IPv6 hosts and routers to connect with other IPv6 hosts and routers over existing IPv4 networks. The main purpose of IPv6 tunnelling is to deploy IPv6 while maintaining compatibility with the large existing base of IPv4 hosts and routers. IPv6 tunnelling encapsulates IPv6 datagrams within IPv4 packets; the encapsulated packets travel over the IPv4 Internet until they reach their destination host or router, where the IPv6-aware router or host de-encapsulates the IPv6 datagrams, forwarding them as needed.

An automatic tunnel is a tunnel that does not require manual configuration. Tunnel endpoints for automatic tunnels are determined by the use of routes, next-hop addresses based on destination IPv6 addresses, and logical tunnel interfaces. There are a number of automatic tunnelling technologies that deserve mention here, including :

• ISATAP
Used for unicast communication across an IPv4 intranet and is enabled by default. ISATAP is an address assignment and host-to-host, host-to-router, and router-to-host automatic tunnelling technology that is used to provide unicast IPv6 connectivity between IPv6/IPv4 hosts across an IPv4 intranet. ISATAP is described in RFC 4214. ISATAP hosts do not require any manual configuration and can create ISATAP addresses using standard address auto configuration mechanisms.
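
As an illustration of what ISATAP addresses look like, the sketch below (Python ipaddress module; the /64 prefix and private IPv4 address are hypothetical) builds one by appending the well-known ::0:5efe:w.x.y.z interface identifier, derived from the node's IPv4 address, to the advertised prefix.

    # Sketch: forming an ISATAP address from a /64 prefix and an IPv4 address.
    import ipaddress

    def isatap_address(prefix: str, ipv4: str) -> ipaddress.IPv6Address:
        net = ipaddress.IPv6Network(prefix)                    # expects a /64
        iid = (0x00005EFE << 32) | int(ipaddress.IPv4Address(ipv4))
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    # Hypothetical intranet prefix and private IPv4 address
    print(isatap_address("2001:db8:1:1::/64", "10.0.0.42"))
    # -> 2001:db8:1:1:0:5efe:a00:2a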

• 6to4
RFC 3056 details this IPv6 transition technology, which allows IPv6 sites to communicate with each other over IPv4 networks without explicit tunnel setup. It is used for unicast communication across the IPv4 Internet and is enabled by default. 6to4 is an address assignment and router-to-router, host-to-router, and router-to-host automatic tunnelling technology that is used to provide unicast IPv6 connectivity between IPv6 sites and hosts across the IPv4 Internet. 6to4 treats the entire IPv4 Internet as a single link. The main advantage of 6to4 (not to be confused with 6over4!) is that it requires no end-node reconfiguration and minimal router configuration.
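
The 6to4 addressing rule is easy to show in a few lines. In this sketch (Python ipaddress module; the public IPv4 address is hypothetical), the site's /48 prefix is formed by placing the 32-bit public IPv4 address directly after the well-known 2002::/16 prefix, as RFC 3056 describes.

    # Sketch: deriving a site's 6to4 prefix 2002:WWXX:YYZZ::/48 from its
    # public IPv4 address W.X.Y.Z.
    import ipaddress

    def sixto4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
        v4 = int(ipaddress.IPv4Address(public_ipv4))
        site = (0x2002 << 112) | (v4 << 80)        # 2002: + 32-bit IPv4 + zeros
        return ipaddress.IPv6Network((site, 48))

    # Hypothetical public address of the site's 6to4 router
    print(sixto4_prefix("192.0.2.33"))
    # -> 2002:c000:221::/48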

• Teredo
Used for unicast communication across the IPv4 Internet over network address translators (NATs). Teredo support is included and is disabled by default. Teredo, also known as IPv4 network address translator (NAT) traversal (NAT-T) for IPv6, provides address assignment and host-to-host automatic tunneling for unicast IPv6 connectivity across the IPv4 Internet, even when the IPv6/IPv4 hosts are located behind one or multiple IPv4 NATs. To traverse IPv4 NATs, IPv6 packets are sent as IPv4-based User Datagram Protocol (UDP) messages. The main benefit of Teredo is that it is a NAT traversal technology for IPv6 traffic. If the NAT supports UDP port translation, then the NAT supports Teredo.
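
Teredo squeezes its tunnel parameters into the 128-bit address itself. The sketch below (plain Python; the sample address is hypothetical, with the field layout taken from RFC 4380) unpacks the Teredo server address, the flags, and the obfuscated external UDP port and client IPv4 address.

    # Sketch: decoding a Teredo address --
    # 2001:0000 : server IPv4 : flags : (port ^ 0xFFFF) : (client IPv4 ^ 0xFFFFFFFF)
    import ipaddress

    def decode_teredo(addr: str):
        a = int(ipaddress.IPv6Address(addr))
        server = ipaddress.IPv4Address((a >> 64) & 0xFFFFFFFF)
        flags = (a >> 48) & 0xFFFF
        port = ((a >> 32) & 0xFFFF) ^ 0xFFFF
        client = ipaddress.IPv4Address((a & 0xFFFFFFFF) ^ 0xFFFFFFFF)
        return server, flags, port, client

    # Hypothetical Teredo address
    print(decode_teredo("2001:0:c000:22a:0:fbff:3fff:fdd2"))
    # -> (192.0.2.42, 0, 1024, 192.0.2.45)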

Next week, we will look at Wi-Fi, Bluetooth and WiMAX - Technology and Implementation.

Till then, whatever you are routing, be it voice or data or video, may it be successful!




Routing in IPv6

Routing is the process of forwarding packets between connected network segments. For IPv6-based networks, routing is the part of IPv6 that provides forwarding capabilities between hosts that are located on separate segments within a larger IPv6-based network.

IPv6 is the mailroom in which IPv6 data sorting and delivery occur. Each incoming or outgoing packet is called an IPv6 packet. An IPv6 packet contains both the source address of the sending host and the destination address of the receiving host. Unlike link-layer addresses, IPv6 addresses in the IPv6 header typically remain the same as the packet travels across an IPv6 network. Routing is the primary function of IPv6. IPv6 packets are exchanged and processed on each host by using IPv6 at the Internet layer.
Above the IPv6 layer, transport services on the source host pass data in the form of TCP segments or UDP messages down to the IPv6 layer. The IPv6 layer creates IPv6 packets with source and destination address information that is used to route the data through the network. The IPv6 layer then passes packets down to the link layer, where IPv6 packets are converted into frames for transmission over network-specific media on a physical network. This process occurs in reverse order on the destination host.
IPv6 layer services on each sending host examine the destination address of each packet, compare this address to a locally maintained routing table, and then determine what additional forwarding is required. IPv6 routers are attached to two or more IPv6 network segments that are enabled to forward packets between them.

IPv6 uses the same types of routing protocols used in IPv4 networks, but with a few modifications to account for specific IPv6 requirements. Routing in IPv6 is almost identical to IPv4 routing under Classless Inter-Domain Routing (CIDR), except that the addresses are 128-bit IPv6 addresses instead of 32-bit IPv4 addresses. With very straightforward extensions, all of IPv4's routing algorithms (OSPF, RIP, IDRP, IS-IS, etc.) can be used to route IPv6.
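
Because forwarding is still classless longest-prefix matching, the lookup itself looks just like IPv4 with CIDR, only with bigger addresses. The sketch below is a toy illustration in Python (the routing table entries and next hops are hypothetical) of the longest-prefix-match decision a router makes for an IPv6 destination.

    # Sketch: longest-prefix match over a tiny, hypothetical IPv6 routing table.
    import ipaddress

    ROUTES = {
        ipaddress.IPv6Network("::/0"):             "default via upstream ISP",
        ipaddress.IPv6Network("2001:db8::/32"):    "customer aggregate",
        ipaddress.IPv6Network("2001:db8:ab::/48"): "customer site A",
    }

    def lookup(destination: str) -> str:
        dest = ipaddress.IPv6Address(destination)
        matches = [net for net in ROUTES if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
        return ROUTES[best]

    print(lookup("2001:db8:ab::1"))   # -> customer site A
    print(lookup("2001:db8:ff::1"))   # -> customer aggregate
    print(lookup("2400:cb00::1"))     # -> default via upstream ISP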
IPv6 also includes simple routing extensions which support powerful new routing functionality. These capabilities include:
• Provider Selection (based on policy, performance, cost, etc.)
• Host Mobility (route to current location)
• Auto-Readdressing (route to new address)
The new routing functionality is obtained by creating sequences of IPv6 addresses using the IPv6 Routing option. The routing option is used by an IPv6 source to list one or more intermediate nodes (or topological groups) to be "visited" on the way to a packet's destination. This is very similar to IPv4's Loose Source and Record Route option.
In order to make address sequences a general function, IPv6 hosts are in most cases required to reverse the route in a packet they receive (if the packet was successfully authenticated using the IPv6 Authentication Header) containing address sequences, in order to return the packet to its originator. This approach is taken so that IPv6 host implementations support the handling and reversal of source routes from the start. This is key to allowing them to work with hosts that implement new features such as provider selection or extended addresses. The address sequence facility of IPv6 can be used for provider selection, mobility, and readdressing. It is a simple but powerful capability.
In summary:
• Routing IPv6 is similar to routing IPv4 with CIDR, but with the flexibility that 128-bit addresses allow;
• Only minimal modifications to the dynamic routing protocols (OSPF, IDRP, RIP, IS-IS, BGP) are needed for them to work with the IPv6 address format;
• There are improved source routing options (the routing header), which are useful for provider selection, mobility, etc. in IPv6.
Next week, we will examine IPSec as implemented in IPv6.
Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

IPv4 Address Exhaustion and IPv6  

One of the most often-stated ‘justifications’ for IPv6 is the issue of IPv4 address exhaustion; with the unprecedented expansion of Internet usage in recent years - especially by population dense countries like India and China - the impending shortage of address space (availability) was recognized by 1992 as a serious limiting factor to the continued usage of the Internet run on IPv4. 

Every host on an IP network, such as a computer or networked printer, is assigned an IP address that is used to communicate with other hosts on the same network or globally. These addresses are normally expressed in dotted decimal format (for example 66.230.200.110). Each octet, or part of the address, is a number from 0 to 255, and therefore there is a maximum of 4,294,967,296 addresses available for use. However, large blocks of addresses are reserved for special uses and are unavailable for public allocation.
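
The arithmetic above is easy to verify, and it is also easy to see how quickly reserved blocks eat into the total. The sketch below (Python ipaddress module) subtracts a handful of the well-known reserved ranges as examples; the list is deliberately incomplete and only illustrative.

    # Sketch: total IPv4 address space, minus a few well-known reserved blocks.
    import ipaddress

    total = 2 ** 32
    print(total)                                  # 4,294,967,296

    reserved_examples = [
        ipaddress.IPv4Network("10.0.0.0/8"),      # private
        ipaddress.IPv4Network("172.16.0.0/12"),   # private
        ipaddress.IPv4Network("192.168.0.0/16"),  # private
        ipaddress.IPv4Network("127.0.0.0/8"),     # loopback
        ipaddress.IPv4Network("224.0.0.0/4"),     # multicast
    ]
    print(total - sum(net.num_addresses for net in reserved_examples))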

 

The Internet Protocol (IP) knows each logical host interface by a number, the IP address. On any given network, this number must be unique among all the host interfaces that communicate through this network. Users of the Internet are sometimes given a host name in addition to their numerical IP address by their Internet service provider. The IP addresses of users browsing the World Wide Web are used to enable communications with the server of the Web site. Also, it is usually in the header of email messages one sends. In fact, for all programs that utilize the TCP/IP protocol, the sender IP address and destination IP address are required in order to establish communications and send data.

 

Depending on one's Internet connection, the IP address can be the same every time one connects (called a static IP address), or different every time one connects (called a dynamic IP address). In order to use a dynamic IP address, there must exist a server which can provide the address. IP addresses are usually given out through a server service called DHCP, the Dynamic Host Configuration Protocol. If a static address is used, it must be manually programmed into the parameters of the device's network interface.

 

Internet addresses are needed not only for the unique enumeration of host interfaces, but also for routing purposes; therefore a high fraction of them are always unused or reserved. The unique nature of IP addresses makes it possible in many situations to track which computer — and by extension, which person — has sent a message or engaged in some other activity on the Internet. This information has been used by law enforcement authorities to identify criminal suspects; however, dynamically assigned IP addresses can make this difficult.

 

There are insufficient publicly routable IPv4 addresses to provide a distinct address to every IPv4 device or service (which include desktop computers, mobile phones, embedded devices, and virtual hosts). This problem has been mitigated for some time using network address translation (NAT), whereby a single public Internet IP address can be shared by multiple internal local area network (LAN) hosts. Individual hosts behind NAT appear to be sending their data from the public IP address of the router used, and the router is able to keep track of which host originated the traffic inside the network and forwards replies from the Internet accordingly.

 

IPv4 is the current version of the Internet Protocol, the backbone of Transmission Control Protocol/Internet Protocol (TCP/IP) networking. The Internet and other TCP/IP networks today provide support for most distributed applications, such as file transfer, electronic mail, remote access using TELNET, and the constantly growing World Wide Web. The topic of IPv4 address exhaustion is one many organizations are watching closely, especially as the plans for transitioning to IPv6 are ramping up within the US, which is a laggard in adopting IPv6. The IPv4 address space is limited, and there is general consensus that the IPv4 address space, managed by the Internet Corporation for Assigned Names and Numbers (ICANN) and the Regional Internet Registries (RIRs), is headed towards exhaustion.

 

One of the main benefits of Internet Protocol version 6 (IPv6) over previously used Internet Protocol version 4 (IPv4) is the large address-space that contains (addressing) information to route packets for the next generation Internet. An escalating demand for IP addresses acted as the driving force behind the development of the large address space offered by the IPv6. According to industry estimates, in the wireless domain, more than a billion mobile phones, Personal Digital Assistants (PDA), and other wireless devices will require Internet access, and each will need its own unique IP address.

 

The Internet is increasingly becoming a multimedia, application-rich environment, led by the huge popularity of the World Wide Web. Networks have branched out from simple e-mail and file transfer applications to complex client/server environments with multimedia enhancements. IPv4 is unable to adjust to these changes, suffering from limited address space, a lack of needed functionality such as quality of service, and inadequate security features. The next generation of IP, called IPv6, has been standardized and will replace IPv4 in the near future. The new protocol will enable TCP/IP networks and applications to be compatible with the changing nature of the Internet. The extended address length offered by IPv6 eliminates the need to use techniques such as network address translation to avoid running out of the available address space. IPv6 contains addressing and control information to route packets for the next generation Internet.

 

 Many people within the Internet community have analyzed the question of IPv4 address exhaustion and published their reports. The confusing aspect is that the estimates vary greatly from report to report. Some predict IPv4 address exhaustion within the next 12-24 months, and others say it will not happen until 2013. The answer to the question of when the IPv4 address space will be exhausted revolves around the assumptions applied within the various models. If you believe the growth of the number of devices on the Internet will be stable or flat over the next decade, then the models showing the later date are the better estimates for your purpose. If you believe the number of devices connected to the Internet will continue to accelerate over the next decade, then the shorter timeframe, with IPv4 address exhaustion as soon as 2008/2009, is the likely scenario. It is also important to note that this assumes that the rules governing the allocation of IPv4 addresses remain the same and that no rush to acquire address space occurs.

 

According to a report by Cisco, “Network Address Translation (NAT) and CIDR did their jobs and bought the 10 years needed to get IPv6 standards and products developed. Now is the time to recognize the end to sustainable growth of the IPv4-based Internet has arrived and that it is time to move on. IPv6 is ready as the successor, so the gating issue is attitude. When CIOs make firm decisions to deploy IPv6, the process is fairly straightforward. Staff will need to be trained, management tools will need to be enhanced, routers and operating systems will need to be updated, and IPv6-enabled versions of applications will need to be deployed. All these steps will take time - in many cases multiple years.”

 

The consensus is that the recent consumption rates of IPv4 will not be sustainable from the central pool beyond this decade, so organizations would be wise to start the process of planning for an IPv6 deployment now. Those who delay may find that the IANA pool for IPv4 has run dry before they have completed their move to IPv6. Although that may not be a problem for most, organizations that need to acquire additional IPv4 space to continue growing during the transition could be out of luck. 

Next week, we will look at the transition technologies for moving from IPv4 to IPv6. 

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

NAT and IPv6

One of the most often-stated ‘justifications’ for IPv6 is the issue of IPv4 address exhaustion; with the unprecedented expansion of Internet usage in recent years - especially by population dense countries like India and China - the impending shortage of address space (availability) was recognized by 1992 as a serious limiting factor to the continued usage of the Internet run on IPv4.

IPv6 was originally called ‘Next Generation IP’ while the debate about the exhaustion of IPv4 addresses raged, and the most compelling reason for the design and adoption of IPv6 was the need to tackle this ‘address exhaustion issue’. During this interregnum, a solution was adopted to reduce the rate of depletion of the IPv4 address stock: the Network Address Translation (NAT) technique. In this lecture, we look at NAT in the era of IPv6.

NAT was not the only option that was designed to address the IP address shortage; another popular scheme is Classless Inter-Domain Routing (CIDR). A CIDR address is still a 32-bit address, but it is hierarchical rather than class-based. NAT was developed specifically to address the IP address shortage in particular instances when the cost of extra IP addresses is an issue. NAT is therefore of particular interest in countries other than the United States, where historically there have been fewer addresses allocated per capita, and also in small businesses and home offices.

So what really is NAT? NAT is an Internet standard that enables a Local Area Network (LAN) to use one set of IP addresses for internal traffic and a second set of addresses for external traffic. A NAT box located where the LAN meets the Internet makes all the necessary IP address translations. There are two main types of NAT: dynamic and static. In static NAT, the public IP address is always the same, allowing an internal host, such as a Web server, to have an unregistered private IP address and still be reached over the Internet. In dynamic NAT, a private IP address is mapped to a public IP address drawn from a pool of registered public IP addresses. By keeping the internal configuration of the private network hidden, dynamic NAT helps conceal the network from outside users.
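
Before looking at the different flavours of NAT, the following sketch shows the essential bookkeeping a port-translating NAT box performs for outbound traffic: it rewrites the source to its single public address, remembers the mapping, and uses the reverse mapping to deliver replies. All addresses and port numbers here are hypothetical, and the code is a conceptual illustration rather than a real NAT implementation.

    # Sketch: the translation table kept by a port-translating NAT box.
    PUBLIC_IP = "203.0.113.5"       # hypothetical public address of the NAT box

    table = {}                      # (private_ip, private_port) -> public_port
    reverse = {}                    # public_port -> (private_ip, private_port)
    next_port = 40000

    def translate_outbound(private_ip: str, private_port: int):
        global next_port
        key = (private_ip, private_port)
        if key not in table:
            table[key] = next_port
            reverse[next_port] = key
            next_port += 1
        return PUBLIC_IP, table[key]            # what the Internet sees

    def translate_inbound(public_port: int):
        return reverse[public_port]             # which internal host gets the reply

    print(translate_outbound("192.168.1.10", 51515))   # -> ('203.0.113.5', 40000)
    print(translate_inbound(40000))                    # -> ('192.168.1.10', 51515)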

There are different types of NAT, including:

Full Cone NAT - Full Cone NAT is also commonly known as one-to-one NAT. An internal address and port pair is mapped to an external address and port pair, and any external host can reach the internal host by sending packets to that external address and port; the mapping between the internal and external pairs stays fixed.

Restricted Cone NAT - The local IP address and port number are mapped to a particular external IP address and port number, and that mapping is preserved; however, an external host can send packets to the internal host only if the internal host has previously sent a packet to that external host's IP address.

Port Restricted Cone NAT - As the name suggests, the Port Restricted Cone NAT also restricts by port number: an external host can send packets to the internal host only from the specific IP address and port to which the internal host has previously sent a packet.

Symmetric NAT - Each outbound communication from a given internal address and port to a specific destination is mapped to its own unique external IP address and port; only the external host that received packets from that mapping can send packets back to the internal host. This scheme imparts a logical symmetry to the way LAN resources access the external network.

In actual practice a pure NAT implementation is rarely used. Rather, a combination of the above types is implemented to achieve the desired network configuration.


NAT offers the following advantages to the network users:

a. The Network Address Translation process offers a simple yet effective solution to the nagging problem of limited address space in contemporary network protocols such as IPv4. The NAT process allows plentiful IP addresses to be used locally, which are subsequently mapped to real (public) IP addresses for communication over the Internet.

b. The lack of complete bi-directional connectivity offered by NAT is desirable in certain situations, as it restricts direct access to LAN resources. Allocating a static public IP address makes a network resource a potential target for hackers, whereas the presence of an intermediate proxy server makes such direct access considerably harder.

The usage of NAT also carries certain drawbacks:

1. Network Address Translation does not allow the true end-to-end connectivity that is required by some real-time applications. A number of real-time applications require the creation of a logical tunnel to exchange data packets quickly in real time; this demands fast and seamless connectivity, devoid of intermediaries such as a proxy server that tend to complicate and slow down the communication process.

2. NAT creates complications in the functioning of tunnelling protocols. Any communication that is routed through a proxy server tends to be comparatively slow and prone to disruptions. Certain critical applications, such as telemedicine and teleconferencing, offer no room for such inadequacies. These applications find the process of network address translation a bottleneck in the communication network, creating avoidable distortions in end-to-end connectivity.

3. NAT acts as a redundant channel in online communication over the Internet. The twin reasons for the widespread popularity and subsequent adoption of the network address translation process were the shortage of IPv4 address space and security concerns. Both these issues are addressed in the IPv6 protocol. As IPv6 slowly replaces the IPv4 protocol, the network address translation process will become redundant and useless, while consuming scarce network resources to provide services that are no longer required over IPv6 networks.

That is why IPv6 is relevant! IPv6 provides a great solution to the address space crunch that was the underlying reason for the widespread adoption and usage of Network Address Translation. The lack of address space resulted in a proportionately higher demand for IP addresses in comparison to the available supply.

This led to a squeeze in the availability of IP addresses, with IP address prices shooting through the roof. It therefore made sense for organizations to adopt the Network Address Translation technique as a cost-cutting tool.

In this way, the address space constraint in IPv4 fuelled the popularity and widespread usage of the Network Address Translation process. If an organization could not obtain enough IP addresses, it could share them, or create them over the local network through the use of a proxy server, and then map the internal IP addresses to real IP addresses over the Internet, thereby streamlining the online communication process.

The Internet Protocol version 6 or IPv6 eliminates the need for Network Address Translation by offering a much larger address space that allows the network resources to have their own unique real IP address. In this way, IPv6 strikes at the very root of the problem for which Network Address Translation (NAT) provided a solution.

IPv6 offers a significantly larger address space that allows greater flexibility in assigning unique addresses over the Internet. IPv4 (the currently used standard protocol over the Internet, which carries the bulk of network traffic) provides 32 bits of address space, while IPv6 offers 128 bits of address space, able to support 2^128 (about 3.4 x 10^38, or roughly 340 billion billion billion billion) unique IP addresses. This allows a provision for permanent unique addresses for all the individuals and hardware connected to the Internet. Moreover, the extended address length eliminates the need to use techniques such as network address translation to avoid running out of the available addresses.
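
The figures quoted above are simple to check; the following three lines of Python do the arithmetic (the comparison in the last line is just one way of picturing the difference).

    print(2 ** 32)              # 4,294,967,296 IPv4 addresses
    print(2 ** 128)             # about 3.4 x 10**38 IPv6 addresses
    print(2 ** 128 // 2 ** 32)  # ~7.9 x 10**28 IPv6 addresses per IPv4 address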

An escalating demand for IP addresses acted as the driving force behind the development of IPv6. According to industry estimates, in the wireless domain, more than a billion mobile phones, Personal Digital Assistants (PDA), and other wireless devices will require Internet access, and each will need its own unique IP address.

Moreover, billions of new, always-on Internet appliances for the home - ranging from the TV to the refrigerator - will also come online through the different technologies. Each of these devices will also require their own unique IP address. With the exponentially increasing demand for IP addresses, the world is fast outgrowing IPv4 and waiting to embrace IPv6.

In this way, the IPv6 protocol does away with the need to use Network Address Translation technique to make up for the address space crunch by creating local IP addresses over the LAN and mapping them to the real IP addresses used over the network.

IPv6 also offers superior security features thereby allaying the fears of allocating static IP addresses to the various network resources and throwing them open to attacks in the virtual space. The security issue is often used in the defence of the Network Address Translation process. However, the core principle of Internet is to offer an end-to-end connectivity to the different network resources.

This principle is violated by the widespread use of network address translation. It is like missing the woods for the trees. In this context, IPv6 provides a long-term solution to meet the address space crunch as well as the security concerns of the Internet users. For all practical purposes, IPv6 offers an almost endless supply of IP addresses that can be allocated to the exponentially increasing network devices that are being added to the Internet with each passing day. This large pool of IP addresses will provide an abundant supply of usable IP addresses and easily match the demand for the same. This equilibrium will bring the Internet address prices back to normal levels.

So why is IPv6 so important to IT practitioners these days? Because IP is the lifeblood of the network, without which no network can survive. The question of IP address exhaustion is a very serious matter, and is the main reason why IPv6 came into being.

In our next paper, we will look at the reality and myth of IP address exhaustion and the implications for the Internet system.

Till then, whatever you are routing, be it voice or data or video, may it be successful!
 

 

 

Mobile IPv6  

Mobile computing is becoming increasingly important due to the rise in the number of portable computers and the desire to have continuous network connectivity to the Internet irrespective of the physical location of the node. The Internet infrastructure is built on top of a collection of protocols, called the TCP/IP protocol suite. Transmission Control Protocol (TCP) and Internet Protocol (IP) are the core protocols in this suite. IP requires the location of any host connected to the Internet to be uniquely identified by an assigned IP address. This raises one of the most important issues in mobility: when a host moves to another physical location, it has to change its IP address. However, the higher level protocols require the IP address of a host to be fixed for identifying connections.  

The Mobile Internet Protocol (Mobile IP) is an extension to the Internet Protocol proposed by the Internet Engineering Task Force (IETF) that addresses this issue. It enables mobile computers to stay connected to the Internet regardless of their location and without changing their IP address. More precisely, Mobile IP is a standard protocol that builds on the Internet Protocol by making mobility transparent to applications and higher level protocols like TCP. Mobile IP is thus the underlying technology for support of various mobile data and wireless networking applications. Mobile IP is intended for nomadic workers connecting to a wireline, rather than a wireless, network. 

The basic Mobile IP protocol has four distinct stages. These are:

1.       Agent Discovery: Agent Discovery consists of the following steps:

a.       Mobility agents (mobility agent is a node that provides some services to a mobile node) advertise their presence by periodically broadcasting Agent Advertisement messages. An Agent Advertisement message lists one or more care-of addresses and a flag indicating whether it is a home agent or a foreign agent.

b.       The mobile node (a mobile node is a node that changes its point of attachment to the Internet, i.e. a device that is capable of performing network roaming) receiving the Agent Advertisement message observes whether the message is from its own home agent and determines whether it is on the home network or a foreign network.

c.       If a mobile node does not wish to wait for the periodic advertisement, it can send out Agent Solicitation messages that will be responded to by a mobility agent.

2.       Registration: Registration consists of the following steps:

a.       If a mobile node discovers that it is on the home network, it operates without any mobility services.

b.       If the mobile node is on a new network, it registers with the foreign agent (a foreign agent is a router that functions as the mobile node’s point of attachment when it travels to the foreign network) by sending a Registration Request message which includes the permanent IP address of the mobile host and the IP address of its home agent (a home agent is a router on the home network which serves as the point for communication with the mobile node).

c.       The foreign agent in turn performs the registration process on behalf of the mobile host by sending a Registration Request containing the permanent IP address of the mobile node and the IP address of the foreign agent to the home agent.

d.       When the home agent receives the Registration Request, it updates the mobility binding by associating the care-of address (CoA) of the mobile node with its home address. The care-of address is the termination point of the tunnel toward the mobile node when it is not in the home network; it is basically the IP address of the mobile node's current point of attachment to the Internet.

e.       The home agent then sends an acknowledgement to the foreign agent.

f.        The foreign agent in turn updates its visitor list by inserting the entry for the mobile node and relays the reply to the mobile node.

3.       In Service: This stage can be subdivided into the following steps:

a.       When a correspondent node (a correspondent node is a node that communicates with the mobile node, and this node may be mobile or non-mobile) wants to communicate with the mobile node, it sends an IP packet addressed to the permanent IP address of the mobile node.

b.       The home agent intercepts this packet and consults the mobility binding table to find out if the mobile node is currently visiting any other network.

c.       The home agent finds out the mobile node's care-of address and constructs a new IP header that contains the mobile node's care-of address as the destination IP address. The original IP packet is put into the payload of this IP packet. It then sends the packet. This process of encapsulating one IP packet into the payload of another is known as IP-within-IP encapsulation, or tunnelling.

d.       When the encapsulated packet reaches the mobile node's current network, the foreign agent decapsulates the packet and finds out the mobile node's home address. It then consults the visitor list to see if it has an entry for that mobile node.

e.       If there is an entry for the mobile node on the visitor list, the foreign agent retrieves the corresponding media (MAC) address and relays the packet to the mobile node.

f.        When the mobile node wants to send a message to a correspondent node, it forwards the packet to the foreign agent, which in turn relays the packet to the correspondent node using normal IP routing.

g.       The foreign agent continues serving the mobile node until the granted lifetime expires. If the mobile node wants to continue the service, it has to reissue the Registration Request.

4.       Deregistration: If a mobile node wants to drop its care-of address, it has to deregister with its home agent. It achieves this by sending a Registration Request with the lifetime set to zero. There is no need to deregister with the foreign agent, as the registration automatically expires when the lifetime becomes zero. However, if the mobile node visits a new network, the old foreign network does not know the new care-of address of the mobile node. Thus, datagrams already forwarded by the home agent to the old foreign agent of the mobile node are lost.
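
The heart of the protocol is the mobility binding table kept by the home agent and the tunnelling decision made for each packet addressed to a mobile node's home address. The sketch below is a much-simplified conceptual model in Python: the addresses are hypothetical, the encapsulation is only represented symbolically, and a lifetime of zero behaves as the deregistration described above.

    # Sketch: home agent mobility bindings and the forwarding decision.
    import time

    bindings = {}   # home address -> (care-of address, expiry time)

    def register(home_addr: str, care_of: str, lifetime_s: int):
        if lifetime_s == 0:
            bindings.pop(home_addr, None)        # deregistration
        else:
            bindings[home_addr] = (care_of, time.time() + lifetime_s)

    def forward(dest_home_addr: str, packet: bytes) -> str:
        entry = bindings.get(dest_home_addr)
        if entry and entry[1] > time.time():
            care_of, _ = entry
            # IP-within-IP: the original packet becomes the payload of a new
            # packet addressed to the care-of address (tunnelling).
            return "encapsulate and tunnel to " + care_of
        return "deliver normally on the home link"

    register("198.51.100.25", "203.0.113.77", lifetime_s=300)
    print(forward("198.51.100.25", b"payload"))   # tunnelled to the care-of address
    register("198.51.100.25", "203.0.113.77", lifetime_s=0)
    print(forward("198.51.100.25", b"payload"))   # binding gone: normal delivery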

RFC 3775 gives the details of Mobile IPv6, which is designed to be an evolutionary step from Mobile IPv4, while RFC 3776 covers ‘Using IPsec to Protect Mobile IPv6 Signalling Between Mobile Nodes and Home Agents’. Some advantages of Mobile IPv6 over Mobile IPv4 include :

·         Route Optimization is built as a fundamental part of Mobile IPv6 unlike Mobile IPv4 where it is an optional set of extensions that may not be supported by all nodes.

·         Foreign Agents are not needed in Mobile IPv6. The enhanced features of IPv6 like Neighbour Discovery and Address Autoconfiguration enable mobile nodes to function in any location without the services of any special router in that location.

·         In Mobile IPv4, when a mobile node communicates with a correspondent node, it puts its home address as the source address of the packet. Thus “ingress filtering” routers used to filter out such packets, as the source address of the packet is different from the network from which the packet originated. This problem is tackled in Mobile IPv6 by putting the care-of address as the source address and having a Home Address destination option, allowing the use of the care-of address to be transparent above the IP layer. 

Mobile IPv6 is necessary because the mobile nodes in fixed IPv6 network cannot maintain the previously connected link (using the address assigned from the previously connected link) when changing location. To accomplish the need for mobility, connections to mobile IPv6 nodes are made (without user interaction) with a specific address that is always assigned to the mobile node, and through which the mobile node is always reachable. Mobile IPv6 is expected to be used in IP over WLAN, WiMAX or BWA.  

The goal of Mobile IPv6 is to provide seamless mobility for next generation mobile services and applications, across several access technologies such as WCDMA, WLAN, etc. Additionally, Mobile IPv6 provides route optimization techniques to reduce handoff latencies. In short, the goals of Mobile IPv6 are as follows:

·         Always on IP connectivity

·         Roaming between different L2 technologies (WLAN, WiMAX, GPRS, fixed)

·         Roaming between different (sub)networks. Huge WLAN deployments mostly use different L3 subnets

·         Application continuity (Session persistence)

·         Static IP Addresses for mobile nodes

·         Mobile devices may act as servers


Mobile IPv6 is a powerful enabler for the next generation of services such as peer-to-peer services, push services and Voice over IP (VoIP) which demand always-on global reachability and seamless mobility. Mobile IPv6, along with fast-handoffs and context transfer mechanisms will be essential for the large scale deployment of real-time services such as VoIP and broadcast services. 

Mobile IP provides location-independent routing of IP datagrams over the internet and this is of enormous benefit to corporate organisations for example, that have staff moving around different segments of the corporate network and need to be connected to the Internet/Intranet at their locations. This is made possible by the Mobile IP feature that identifies each node by its home IP address, regardless of its location on the Internet. In this way, Mobile IP provides an efficient, scalable mechanism for roaming on the internet. 

IPv6 was originally called ‘Next Generation IP’ while the debate about the exhaustion of IPv4 addresses raged, and the most compelling reason for the design and adoption of IPv6 was the need to tackle this ‘address exhaustion issue’. During this interregnum, there was a solution that was adopted to reduce the rate of depletion of the IPv4 address stock, and this was the Network Address Translation (NAT) technique, and in our next lecture, we will look at NAT in the era of IPv6. 

Till then, whatever you are routing, be it voice or data or video, may it be successful!

DHCPv6

In our last paper, we looked at the Stateless Autoconfiguration features of IPv6 and, in particular, the advantages it brings to network administration; however, not everyone in the IP community feels Stateless Autoconfiguration is the answer to all configuration challenges in IPv6. A growing number of IPv6 experts are apprehensive about the adoption of the auto-configuration feature offered by IPv6 in contrast to the services offered by the existing DHCPv6 protocol for configuring connected devices over an IP network. There are concerns over the potential disadvantages of auto-configuration in IPv6, such as its focus on configuring the IP address while overlooking the configuration of other parameters such as the DNS domain, DNS servers, time servers, legacy WINS servers, etc. While most of the changes that IPv6 brings affect technologies at the lower layers of the TCP/IP architectural model, the significance of the modifications means that many other TCP/IP protocols are also affected. This is particularly true of protocols that work with addresses or configuration information, including DHCP. For this reason, a new version of DHCP is required for IPv6.

An important feature of IPv6 is that it offers a plug-and-play option to network devices by allowing them to configure themselves independently. It is possible to plug a node into an IPv6 network without requiring any human intervention. This feature was critical to allow network connectivity to an increasing number of mobile devices.

The proliferation of network-enabled mobile devices has introduced the requirement for a mobile device to arbitrarily change locations on an IPv6 network while still maintaining its existing connections. To offer this functionality, a mobile device is assigned a home address at which it remains always reachable. When the mobile device is at home, it connects to the home link and makes use of its home address. When the mobile device is away from home, a home agent (router) acts as a conduit and relays messages between the mobile device and other devices on the network to maintain the connection.
Apart from IP addresses, the additional information supplied by DHCPv6 offers the audit, tracking and management capabilities required by business enterprises. Despite its present shortcomings, IPv6 offers the most comprehensive long-term solution for the future networking requirements of business enterprises. Every network administration policy maker across different business enterprises faces the dilemma of IPv6 auto-configuration versus DHCPv6.

Basically, Dynamic Host Configuration Protocol (DHCP) is a network application protocol used by devices (DHCP clients) to obtain configuration information for operation in an Internet Protocol network. When a DHCP-configured client (be it a computer or any other network-aware device) connects to a network, the DHCP client sends a broadcast query requesting necessary information from a DHCP server. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as the default gateway, the domain name, the DNS servers, other servers such as time servers, and so forth. Upon receipt of a valid request the server will assign the computer an IP address, a lease (the length of time for which the allocation is valid), and other IP configuration parameters, such as the subnet mask and the default gateway. The query is typically initiated immediately after booting and must be completed before the client can initiate IP-based communication with other hosts. This protocol reduces system administration workload, allowing devices to be added to the network with little or no manual intervention.
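
To picture the exchange just described, here is a toy sketch in Python of the four-message DHCPv4 conversation (DISCOVER, OFFER, REQUEST, ACK) and the lease the server hands back; the pool, options and addresses are invented for illustration, and a real client of course uses UDP broadcasts on ports 67/68 rather than a function call.

# Toy sketch of the DHCP exchange described above (DISCOVER -> OFFER ->
# REQUEST -> ACK); addresses, options and lease values are made up.
POOL = ["192.168.1.100", "192.168.1.101", "192.168.1.102"]
OPTIONS = {"subnet_mask": "255.255.255.0",
           "router": "192.168.1.1",
           "dns_servers": ["192.168.1.1"],
           "lease_seconds": 86400}

def dhcp_exchange(client_mac):
    # DISCOVER: client broadcasts; server picks a free address to OFFER.
    offered = POOL.pop(0)
    # REQUEST: client asks for the offered address; ACK: server confirms
    # the address, the lease time and the other configuration parameters.
    return {"your_ip": offered, "client": client_mac, **OPTIONS}

print(dhcp_exchange("00:11:22:33:44:55"))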

DHCPv6 is the Dynamic Host Configuration Protocol for IPv6 (RFC 3315). Although IPv6's stateless address autoconfiguration removes the primary motivation for DHCP in IPv4, DHCPv6 can still be used to statefully assign addresses if the network administrator desires more control over addressing.


According to RFC 3315, “the Dynamic Host Configuration Protocol for IPv6 (DHCP) enables DHCP servers to pass configuration parameters such as IPv6 network addresses to IPv6 nodes. It offers the capability of automatic allocation of reusable network addresses and additional configuration flexibility. This protocol is a stateful counterpart to 'IPv6 Stateless Address Autoconfiguration' (RFC 2462), and can be used separately or concurrently with the latter to obtain configuration parameters.”

So how is DHCPv6 different from DHCPv4? Firstly, DHCPv4 is based on an earlier protocol called BOOTP (Bootstrap Protocol, a network protocol used by a network client to obtain an IP address from a configuration server; BOOTP was originally defined in RFC 951), and the BOOTP-derived packet layout is wasteful in a lot of cases. Secondly, IPv6 itself greatly improves DHCPv6, especially in two major areas:
1. IPv6 hosts have “link-local addresses”; every network interface has a unique address that can be used to send and receive information and IPv6 hosts can use this to send requests for “real addresses”. (See our earlier discussions on Stateless Autoconfiguration). For IPv4, system-specific hacks have to come into play before they can have an address.
2. All IPv6 systems support multicasting.
Thirdly, one exchange configures all interfaces in IPv6; a single DHCPv6 request may include all interfaces on a client, and this allows the server to offer addresses to all interfaces on a client, while each interface may also have different options. Fourthly, DHCPv6 allows normal address allocation as well as temporary address allocation. While in a sense all addresses are "temporary", as they are leased for a time period (which may be infinity), in the automatic (also called DHCP Reservation) mode an IP address is chosen from the range defined by the network administrator and permanently assigned to the client.

The choice between IPv6 Stateless Autoconfiguration and DHCPv6 is a hotly debated contemporary issue in the networking domain, since both standards are often used in conjunction with each other. While DHCPv6 offers a dedicated configuration mechanism catering to all the information needs of network devices in the form of required parameters, IPv6 auto-configuration simplifies the configuration process in a streamlined manner. While DHCPv6 offers a more comprehensive solution to the configuration needs of a device over an IPv6 network, the auto-configuration feature makes the whole process much simpler, more streamlined and future-proof.

In IPv4 networks, there are two ways to assign addresses – static and DHCPv4. In IPv6, there are three ways – static, DHCPv6, and autoconfiguration. Just like for IPv4, some deployments lend themselves to one host provisioning mechanism and some another. IPv6 simply offers network designers more choices, where autoconfiguration is a valuable addition to the set. Autoconfiguration is not better or worse than DHCPv6 – just used in different use cases.

At present, the auto-configuration feature does not offer much beyond IP addressing, but the feature is hardwired into the IPv6 protocol and does away with the need for any other standard, leading to a streamlined configuration process and removing any scope for future compatibility issues among different protocols. DHCPv6 is an excellent short-term solution, while IPv6 auto-configuration, in an evolved form, is in for the long haul. While at present we see a majority of network administrators swearing by the benefits of DHCPv6, the auto-configuration feature ingrained in IPv6 may soon outweigh the advantages offered by DHCPv6 to become the de facto standard for the configuration of devices over an IPv6 network.

In many instances, the Stateless Autoconfiguration features will attract a small- to medium-sized company without any sophisticated network administration needs, while also allowing staff members to float in and out of the network as required without too much work on the part of the network administrator. However, in large companies, a case may be made for going "stateful", with all the tweaking advantages of DHCPv6! Note, too, that configuration parameters can be obtained with stateless DHCPv6, so the stateful variant is not always needed. Autoconfiguration is a great thing for home networks, simple office setups, as well as public networks because it requires almost no user intervention. That means that even grandma can configure her IPv6 wireless router at home, while on the other hand DHCPv6 requires an administrator who knows what he wants and what he offers.
I believe the big advantage a stateful DHCPv6 system is going to offer is simply an awareness of how many devices are active on the network. But it is early days yet and we will not know for a few years, as implementation of IPv6 progresses, however slowly. The thing to bear in mind with IPv4 and IPv6 is that there is a problem that IPv6 solves that IPv4 does not: it gives you more addresses. This is an issue for corporations, and it is an issue for countries with large populations and small IPv4 allocations.
It is important that I asseverate that the issue of Stateless Autoconfiguration and DHCPv6 is not a case of "either/or": they can both be, and generally are, deployed gainfully on the same network. In many situations stateless autoconfiguration is adequate. When additional configuration options are required, or when an organization prefers stateful configuration, DHCPv6 can be employed. These mechanisms are complementary and are in no way in conflict.


One of the thrills of IPv6 and Stateless Autoconfiguration is the ability of mobile equipment and other peripherals to get on the network with little or no human intervention, and there is a particular protocol that makes this possible, and that is Mobile IPv6, which will be the subject of our discussion next week!

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

Stateless Auto Configuration in IPv6

IPv6 includes an interesting feature called Stateless Address Autoconfiguration, which allows a host to actually determine its own IPv6 address from its layer two address by following a special procedure. 

Since 1993 the Dynamic Host Configuration Protocol (DHCP) has allowed systems to obtain an IPv4 address as well as other information such as the default router or Domain Name System (DNS) server. A similar protocol called DHCPv6 has been published for IPv6, the next version of the IP protocol. However, IPv6 also has a stateless autoconfiguration protocol, which has no equivalent in IPv4. 

IPv6 defines both a stateful and stateless address autoconfiguration mechanism. Stateless autoconfiguration requires no manual configuration of hosts, minimal (if any) configuration of routers, and no additional servers. The stateless mechanism allows a host to generate its own addresses using a combination of locally available information and information advertised by routers. Routers advertise prefixes that identify the subnet(s) associated with a link, while hosts generate an "interface identifier" that uniquely identifies an interface on a subnet. An address is formed by combining the two. In the absence of routers, a host can only generate link-local addresses. However, link-local addresses are sufficient for allowing communication among nodes attached to the same link.

In the stateful autoconfiguration model, hosts obtain interface addresses and/or configuration information and parameters from a server. Servers maintain a database that keeps track of which addresses have been assigned to which hosts. The stateful autoconfiguration protocol allows hosts to obtain addresses, other configuration information or both from a server. Stateless and stateful autoconfiguration complement each other. For example, a host can use stateless autoconfiguration to configure its own addresses, but use stateful autoconfiguration to obtain other information. Stateful autoconfiguration for IPv6 is the subject of future work [DHCPv6].

The stateless approach is used when a site is not particularly concerned with the exact addresses hosts use, so long as they are unique and properly routable. The stateful approach is used when a site requires tighter control over exact address assignments. Both stateful and stateless address autoconfiguration may be used simultaneously. The site administrator specifies which type of autoconfiguration to use through the setting of appropriate fields in Router Advertisement messages [DISCOVERY]. (RFC 2462)

Stateless Auto Configuration is an important feature offered by the IPv6 protocol. It allows the various devices attached to an IPv6 network to connect to the Internet using Stateless Auto Configuration without requiring any intermediate IP support in the form of a Dynamic Host Configuration Protocol (DHCP) server. A DHCP server holds a pool of IP addresses that are dynamically assigned for a specified amount of time to the requesting node in a Local Area Network (LAN).

In IPv4, it was possible for machines to automatically find out their IP address using BOOTP, and to have one dynamically assigned using DHCP, but these were relatively rare, and required non-trivial effort to set up. In IPv6, by contrast, automatic configuration is expected to be the norm.

The IPv6 autoconfiguration and renumbering feature is defined in RFC 2462, IPv6 Stateless Address Autoconfiguration. The word "stateless" contrasts this method with the server-based method using something like DHCPv6, which is called "stateful". This method is called "stateless" because it begins from a "dead start" with no information (or "state") at all for the host to work with, and has no need for a DHCP server.

One of the most interesting and potentially valuable addressing features implemented in IPv6 is a facility that allows devices on an IPv6 network to configure themselves independently. In IPv4, hosts were originally configured manually. Later, host configuration protocols like DHCP enabled servers to allocate IP addresses to hosts that joined the network. IPv6 takes this a step further by defining a method for some devices to automatically configure their IP address and other parameters without the need for a server. It also defines a method whereby the IP addresses on a network can be renumbered (changed en masse). These are the sorts of features that make TCP/IP network administrators drool.
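
The renumbering idea is easy to illustrate with Python's standard ipaddress module: the 64-bit host part of the address stays the same while the routing prefix advertised by the routers is swapped. The addresses below are documentation examples, not real allocations.

# Illustration of en-masse renumbering: keep the host's interface identifier,
# swap the 64-bit routing prefix advertised by the routers.
import ipaddress

def renumber(address, new_prefix):
    iid = int(ipaddress.IPv6Address(address)) & ((1 << 64) - 1)   # low 64 bits
    base = int(ipaddress.IPv6Network(new_prefix).network_address)
    return ipaddress.IPv6Address(base | iid)

old = "2001:db8:aaaa:1:250:56ff:fe00:1"
print(renumber(old, "2001:db8:bbbb:1::/64"))   # same host part, new prefix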

Stateless Auto Configuration is a boon for network administrators, since it has automated the IP address configuration of individual network devices. Earlier, configuration of IP addresses was either a manual process or required the support of a DHCP server. However, IPv6 allows network devices to automatically acquire IP addresses and also has provision for renumbering/reallocation of the IP addresses en masse. With a rapid increase in the number of network devices connected to the Internet, this feature was long overdue. It simplifies the process of IP address allocation by doing away with the need for DHCP servers and also allows a more streamlined assignment of network addresses, thereby facilitating unique identification of network devices over the Internet.

The auto configuration and renumbering features of Internet Protocol version 6 are defined in RFC 2462. The word "stateless" derives from the fact that no server has to maintain the state of which address has been assigned to which host, as a DHCP server would. The stateless auto configuration process comprises the following steps undertaken by a network device (a short sketch of the address arithmetic follows the list):

·         Link-Local Address Generation - The device is assigned a link-local address. It comprises '1111111010' as the first ten bits, followed by 54 zeroes and a 64-bit interface identifier.

·         Link-Local Address Uniqueness Test - In this step, the networked device ensures that the link-local address it has generated is not already used by any other device, i.e. the address is tested for its uniqueness.

·         Link-Local Address Assignment - Once the uniqueness test is passed, the IP interface is assigned the link-local address. The address becomes usable on the local network but not over the Internet.

·         Router Contact - The networked device makes contact with a local router to determine its next course of action in the auto configuration process.

·         Router Direction - The node receives specific directions from the router on its next course of action in the auto configuration process.

·         Global Address Configuration - The host configures itself with its globally unique Internet address. The address comprises a network prefix provided by the router together with the device identifier.
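
For the curious, here is the promised sketch of the address arithmetic behind the first and last steps, using only the Python standard library: a modified EUI-64 interface identifier is derived from the MAC address, prefixed with fe80::/64 for the link-local address, and with a router-advertised prefix for the global address. (Many modern hosts substitute a random identifier for privacy; the MAC and prefix below are made up.)

# Sketch of SLAAC address formation: modified EUI-64 interface identifier,
# link-local address (fe80::/64) and global address (router-advertised prefix).
import ipaddress

def eui64_interface_id(mac):
    b = bytes(int(x, 16) for x in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]   # flip U/L bit, insert ff:fe
    return int.from_bytes(b, "big")

def make_address(prefix, interface_id):
    base = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(base | interface_id)

iid = eui64_interface_id("00:50:56:ab:cd:ef")
print(make_address("fe80::/64", iid))            # link-local address
print(make_address("2001:db8:1:2::/64", iid))    # global address from the RA prefix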

 

Neighbour Discovery

The Neighbour Discovery Protocol (NDP) in IPv6 builds on the Internet Control Message Protocol for IPv6 (ICMPv6). It is essentially a messaging protocol that facilitates the discovery of neighbouring devices over a network. The NDP uses two kinds of addresses: unicast addresses and multicast addresses. (A unicast address refers to a unique interface; a packet sent to such an address is processed by the corresponding interface, and only by this interface. This type of address is directly opposed to the multicast address type, which designates a group of interfaces.)
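
One concrete place where NDP leans on multicast is the solicited-node multicast address: neighbour solicitations for a unicast address are sent to ff02::1:ff00:0/104 combined with the last 24 bits of that address, so only a handful of nodes need to listen. A small illustrative derivation, using only the Python standard library:

# Derive the solicited-node multicast address used by NDP neighbour
# solicitations: ff02::1:ff00:0/104 plus the last 24 bits of the unicast address.
import ipaddress

def solicited_node(unicast):
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

print(solicited_node("2001:db8::250:56ff:feab:cdef"))   # ff02::1:ffab:cdef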

The Neighbour Discovery protocol performs nine specific tasks, which are divided into three functional groups.


Advantages of Stateless Auto Configuration

1. Doesn't require the support of a DHCP server - Stateless Auto Configuration does away with the need for a DHCP server to allocate IP addresses to the individual nodes connected to the Local Area Network (LAN).

2. Allows hot plugging of network devices - Network devices can be 'hot-plugged' into the Internet. Since the devices can configure their own IP addresses, there is no need for manual configuration of the network devices. The devices can simply be connected to the network and they automatically configure themselves for use over an IPv6 network.

3. Suitable for applications requiring a secure connection without additional intermediaries in the form of a proxy or a DHCP server - Some modern-day applications, such as teleconferencing, require a fast and secure connection without any intermediary nodes that tend to slow down the communication process. Stateless Auto Configuration helps meet such requirements by removing the intermediary proxy or DHCP servers, thereby facilitating the communication process for applications requiring high-speed data transfers.

4. Cost effective - By facilitating the networking potential of individual nodes and doing away with the requirement for proxy or DHCP servers, Stateless Auto Configuration offers a cost-effective means to connect the various network devices to the Internet.

5. Suitable for wireless networks - Stateless Auto Configuration is most suited to the wireless environment, where the physical network resources are spatially scattered within a geographical area. By allowing direct hot plugging to the network, it removes an additional link in the wireless network.

Applications of Stateless Auto Configuration 

The Stateless Auto Configuration feature was long awaited to facilitate effortless networking of various devices to the Internet. The feature assumes even greater significance for use over wireless networks. It allows the various devices to access the network from anywhere within a 'hotspot'. Stateless Auto Configuration finds diverse applications in networking electronic devices such as televisions, washing machines, refrigerators, microwaves, etc. to the Internet. The ease of network connectivity through 'hot plugging' of such devices will usher in a new era of convergence where the majority of electronic devices will be connected to the Internet.

Clearly, this method has numerous advantages over both manual and server-based configuration. It is particularly helpful in supporting mobility of IP devices, as they can move to new networks and get a valid address without any knowledge of local servers or network prefixes. At the same time, it still allows management of IP addresses using the (IPv6-compatible) version of DHCP if that is desired. Routers on the local network will typically tell hosts which type of autoconfiguration is supported using special flags in ICMPv6 Router Advertisement messages.
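
The "special flags" referred to here are the M (Managed address configuration) and O (Other configuration) flags carried in the Router Advertisement. The toy function below mirrors how a host typically interprets them; it is a simplification, not an implementation of the full RFC rules.

# How a host typically reads the RA flags: M = use stateful DHCPv6,
# O = use SLAAC for addresses but stateless DHCPv6 for other parameters.
def config_method(m_flag, o_flag):
    if m_flag:
        return "stateful DHCPv6 for addresses and other parameters"
    if o_flag:
        return "SLAAC for addresses, stateless DHCPv6 for other parameters"
    return "SLAAC only"

for m, o in [(False, False), (False, True), (True, False)]:
    print(m, o, "->", config_method(m, o))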

DHCPv6  is closely aligned with Stateless Autoconfiguration and we will take a closer look at this interesting feature of IPv6 next week!

Till then, whatever you are routing, be it voice or data or video, may it be successful!

VoIP and IPv6

Voice over Internet Protocol (VoIP) is the technology for transmitting voice conversations over a data network using the Internet Protocol. Such a network may be the Internet or a corporate network. The voice data flows over a general-purpose packet-switched network, instead of traditional dedicated, circuit-switched voice transmission lines.

While VoIP is the better-known technology, there are other technologies used to transmit voice conversations, including Voice over Frame Relay (VoFR) and Voice over ATM (VoATM); the "Internet Protocol" is simply a catch-all for the protocols and technology of encoding a voice call so that the call can be slotted in between data calls on a data network. Such a data network may be the Internet, a corporate Intranet, or a managed network used by a traditional long distance and international provider.

Although voice over IP (VoIP) has been in existence for many years, it has only recently begun to take off as a viable alternative to traditional public switched telephone networks (PSTN). Interest and acceptance have been driven by the attractive cost efficiencies that organizations can achieve by leveraging a single IP network to support both data and voice. But cost is not enough to complete the evolution; service and feature parity is a main requirement. Customers will not accept voice quality or services that are less than what they are used to with a PSTN and, until now, VoIP fell short in delivery.

Today, voice protocols have developed to offer a richer set of features, scalability and standardization than what was available only a few years ago. The pace of service integration (convergence) with new and existing networks continues to increase as VoIP products and services develop. Critical to success is the ability to deploy value-added and high-margin services. For example, a service provider can deploy a unified messaging system that synthesizes voice and e-mails over a phone to the subscriber. 

Currently, VoIP is playing a vital role in replacing today's (TDM-based) telephony infrastructure. The initial interest in VoIP was driven by the promise of savings. Enterprises seek savings on toll by-pass and equipment through the operation of a combined voice/data network, while consumers want low-cost voice facilities. Other great benefits of VoIP, even though they are yet to be fully implemented, include mobility, unified messaging and presence-related communication functions.

VoIP (read: packet-switched technology) is generally better at carrying more traffic than circuit-switched networks, thanks to statistical multiplexing's inherently better use of available bandwidth. Since VoIP is based on open standards (e.g. H.323 and SIP), its application and development have been greatly enhanced by improvements in technologies related to those standards.

One of the major arguments against VoIP is the lack of Quality of Service (QoS). Unfortunately IP, with its connectionless, best-effort delivery model, does not guarantee delivery of packets in a timely fashion, or at all! In order to deploy real-time applications over IP networks with an acceptable level of quality, certain bandwidth, latency and jitter requirements must be guaranteed, and must be met in a manner that allows multimedia traffic to co-exist with traditional data traffic on the same network.

 

Technically speaking, the current IPv4-based IP network does not have built-in Quality of Service (QoS) and, therefore, several quality issues (latency, jitter, echo, etc.) arise. For example, the quality of a voice call can degrade significantly if IP (voice) packets are lost or delayed at any point in the network between VoIP users. Users also notice this quality degradation more in highly congested networks or over long distances. To ameliorate this situation, a number of approaches have been devised for QoS in IPv4-based networks, including:

·         Integrated Services Architecture (Int-Serv)

·         Differentiated Services (Diff-Serv)

·         802.1p Prioritization

 

Integrated Services Architecture (Int-Serv)

 

This includes specifications to reserve network resources in support of a specific application. Using the Resource Reservation Protocol (RSVP), the application or user can request and allocate sufficient bandwidth to support a short- or long-term connection. This is a partial solution because Int-Serv does not scale well, as each networking device (routers and switches) must maintain and manage the information for each flow established across the path. RSVP works with IP addressing, and QoS requests are propagated to all switches and routers. The idea is that, for a premium price, RSVP will enable a certain service (e.g. videoconferencing) to be delivered before, say, email.

Differentiated Services (Diff-Serv)

Diff-Serv is easier to use than Int-Serv. It implements a different mechanism to handle flows across the network. Instead of trying to manage individual flows and per-flow signaling needs like Int-Serv, Diff-Serv uses DS bits in the header of the packet to recognize the flow and the need for QoS on a datagram-by-datagram basis. This is more scalable than Int-Serv, and does not rely on RSVP to control flows.

On the basis of the DS marker in the header of each IP packet, the network routers will apply differentiated service grades to various packets or packet streams, forwarding them according to different Per-Hop Behaviours (PHBs).

802.1p Prioritization

This is the IEEE standard that defines a priority scheme for Layer 2 switching in a switched LAN. When a packet leaves a subnetwork or a domain, its 802.1p priority can be mapped to Diff-Serv to satisfy the Layer 2 switching demands across the network.

802.1p is an extension of 802.1D and is a MAC-layer specification for filtering and expediting multicast traffic. This is achieved through the addition of a 3-bit priority value in the frame header. Switches that support 802.1p provide a framework for bandwidth prioritisation.

In order to address the major issues of QoS in VoIP, next-generation VoIP technology will utilize IPv6, which ensures QoS: a set of service requirements that deliver performance guarantees while transporting traffic, including voice. IPv6 brings the quality of service that is required for several new applications such as IP telephony, video/audio, interactive games and e-commerce. Whereas IPv4 is a best-effort service, IPv6 ensures QoS while transporting traffic over the network.

IPv6 implements QoS with the help of classification and marking of IP packets to ensure a reliable VoIP infrastructure. With the help of classification and marking techniques, the network can identify packets or traffic flows and then assign certain parameters within the packet headers in order to group them. To implement QoS marking, IPv6 provides an 8-bit traffic-class field in the IPv6 header; it also has a 20-bit flow label.
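
By way of illustration, an application can ask for such treatment by setting the Traffic Class on its socket so that Diff-Serv-aware routers apply the appropriate per-hop behaviour. The sketch below marks a UDP socket with DSCP EF (Expedited Forwarding, the code point commonly used for voice); note that the IPV6_TCLASS socket option is platform-dependent, so the Linux value 67 is used as a fallback.

# Mark outgoing packets with DSCP EF by setting the 8-bit IPv6 Traffic Class.
import socket

IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)   # Linux value as fallback
DSCP_EF = 46                                        # Expedited Forwarding
tclass = DSCP_EF << 2                               # DSCP occupies the top 6 of 8 bits

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, tclass)
# every datagram sent on this socket now carries Traffic Class 0xB8 (EF)
sock.close()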

Next week, we will examine the Stateless Auto-Configuration feature of IPv6.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

Virtual Private Networks and IPv6

There are many definitions of a Virtual Private Network (VPN), but we will apply a generic one here for our purposes, as it captures the essence of the technology: a VPN is a packet data network service offering with some of the characteristics of a private network. A pedestrian definition of a VPN would be connecting two private networks through the public or shared network that is the Internet. Any packet data network can be used as the basis for such a VPN, including X.25, TCP/IP, Frame Relay, and ATM networks. VPNs are available in every guise in telecommunications and the Internet, but the basic underlying principles are the same. In contemporary usage, VPN commonly refers to an IP VPN running over the public Internet. While the ubiquitous nature of the Internet is a huge advantage for data networking, the Internet is inherently both insecure and subject to variable levels of congestion.

A VPN is not a private network, but is virtually so, which means that it exhibits some of the characteristics of a private network even though it uses the resources of a public switched network, whether that is the telephone system or the Internet. For example, a VPN might offer priority access to bandwidth and other network resources, whereas a private network offers guaranteed access at all times. In concluding our introduction to VPN, let us take the pieces and describe them in plain language, as follows:

·         Virtual: Virtual means almost; not quite, or not real. Real circuits are physical; you can see and touch them. Virtual circuits look and act as if they are "there", but they really are not!

·         Private: Private means that it is dedicated to a specific organization or individual and not available to the public.

·         Network: A network is the grouping of circuits that connect the various locations between and among each other.

 

In order to create a VPN over the Internet, security issues are mitigated with a combination of authentication, encryption, and tunneling. Authentication is a means of access control that confirms the identity of users through password protection or intelligent tokens, thereby reducing the possibility that unauthorized users may gain access to privileged internal computing or network resources. Encryption is the process of encoding, or scrambling, the data payload prior to transmission in order to secure it; the decryption process depends on the receiver's possession of the correct key to unlock the safety mechanism. The key is known only to the transmitting and receiving devices. Tunneling is the process of encapsulating the encrypted payload in an IP packet for secure transmission. Tunneling protocols include SOCKSv5, Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP) and IP Security (IPsec).
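
A toy illustration of the three building blocks just described, using only the Python standard library: an HMAC provides the authentication/integrity check, a stand-in function represents encryption, and "tunneling" is simply the wrapping of the protected inner packet inside an outer one. A real VPN would use vetted ciphers and a proper key exchange such as IKE; the key and addresses here are invented.

# Toy illustration of authentication (HMAC), encryption (stand-in only) and
# tunneling (wrapping the protected inner packet inside an outer packet).
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)          # in real life: negotiated, e.g. via IKE

def encrypt(plaintext, key):
    # Placeholder only; a real VPN would use a vetted cipher such as AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def tunnel(inner_packet, outer_src, outer_dst):
    protected = encrypt(inner_packet, SHARED_KEY)
    tag = hmac.new(SHARED_KEY, protected, hashlib.sha256).digest()
    outer_header = f"{outer_src}->{outer_dst}".encode()
    return outer_header + b"|" + tag + protected     # encapsulated packet

packet = tunnel(b"inner IP packet: 10.0.0.5 -> 10.0.1.9 ...",
                "203.0.113.1", "198.51.100.7")
print(len(packet), "bytes cross the public Internet")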

 

The application scenarios for IP VPNs include remote access, intranets, and extranets. Remote access VPNs are highly effective in support of telecommuters, mobile workers, and virtual employees. Intranets are used to link branch, regional, and corporate offices. Extranets link vendors, affiliates, distributors, agents, and strategic partners into the main corporate office, with the level of access afforded being sensitive to the level of privilege indicated by a combination of password and user ID, as properly authenticated.

 

In most cases, VPNs can be grouped into two major implementation types, namely remote-site VPNs and site-to-site VPNs. Remote-site VPNs are usually used to link private networks from various remote locations, while site-to-site VPNs are used to connect a branch office to a corporate headquarters network. Both the intranet and the extranet are components of site-to-site VPNs.

 

Having gone through a short introduction to VPN, we now need to see how VPN is implemented in IPv6. Today, the most common implementation of VPN in IPv6 utilises the built-in facilities of IPsec, which was the subject of our last discussion.

 

Compliance with IPSec is mandatory in IPv6, and IPSec is actually a part of the IPv6 protocol. IPv6 provides header extensions that ease the implementation of encryption, authentication, and Virtual Private Networks (VPNs). IPSec functionality is basically identical in IPv6 and IPv4, but one benefit of IPv6 is that IPSec can be utilized along the entire route, from source to destination. According to RFC 2401, which specifies the base architecture for IPsec, "the goal of the architecture is to provide various security services for traffic at the IP layer, in both IPv4 and IPv6 environments". It is therefore not surprising that, while PPTP or L2TP may be implemented in IPv6, IPsec remains the most native format for VPN implementation in IPv6, simply based on the built-in facilities in IPsec and the fact that it operates at the IP layer (layer 3 of the 7-layer ISO Open Systems Interconnection Model).

Next week, we will explore how VoIP (Voice over Internet Protocol) is implemented in IPv6.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

IPSec generally, and IPv6 

Internet Protocol security (IPsec) is a framework of open standards for protecting communications over Internet Protocol (IP) networks using cryptographic security services. IPsec supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection.

IPsec operates at the Internet Layer of the Internet Protocol Suite (commonly known as TCP/IP), which is approximately Layer 3 of the OSI Model and is also the layer at which you will find IP, ICMP, IGMP, etc. Unlike IPsec, some other Internet security systems like SSL, TLS, and SSH operate at the higher application layers; however, IPsec is more versatile as it operates at a lower level, and can thus protect more traffic, specifically everything above layer two (in the Internet Protocol Suite) and layer three (in the OSI Model). What this means is that applications need not be designed to use IPsec, whereas the use of TLS/SSL requires specific incorporation into the design of applications at that level.

IPsec utilizes a number of protocols to perform various functions, specifically:
• Internet Key Exchange (IKE and IKEv2): the mechanism used to set up a Security Association (SA) between two entities in an IP-based VPN (Virtual Private Network) application. IKE sets up a secure tunnel between the entities, authenticating their identities, negotiating SAs, and exchanging shared key material between them so that data can be encrypted and decrypted by those with privileged access.
• Authentication Header (AH): this IPsec header is used to verify that the contents of a packet have not been modified while the packet was in transit.
• Encapsulating Security Payload (ESP): this is the portion of IPsec used to provide data privacy, data origin authentication, and connectionless integrity.

Generally, IPsec can operate in two modes: transport mode (host-to-host) and tunnel mode (gateway-to-gateway or gateway-to-host). With transport mode, only the payload (the actual data you are transferring) of the IP packet is encrypted and/or authenticated. Since the IP header is neither modified nor encrypted, the routing is unaffected. In tunnel mode, both the data and the IP header are encrypted and/or authenticated. This is then encapsulated into a new IP packet with a new IP header. This is obviously a stronger and more involved application of encryption, and as such is the method for creating Virtual Private Networks for network-to-network communications (as in routers linking sites), host-to-network communications (as in remote user access), and host-to-host communications (as in private chat).
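
The difference between the two modes is easiest to see as the order of headers on the wire. The following descriptive sketch (ESP case; AH is analogous) simply prints the two layouts described above.

# Header order on the wire in the two IPsec modes (ESP case).
transport_mode = ["original IP header",          # routing unchanged, not encrypted
                  "ESP header",
                  "payload (TCP/UDP + data)",     # encrypted and/or authenticated
                  "ESP trailer / auth data"]

tunnel_mode = ["new outer IP header",             # gateway-to-gateway addresses
               "ESP header",
               "original IP header",              # now part of the protected payload
               "payload (TCP/UDP + data)",
               "ESP trailer / auth data"]

for name, layout in (("transport", transport_mode), ("tunnel", tunnel_mode)):
    print(name + ":", " | ".join(layout))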

IPsec performs security functions such as encryption and authentication at the IP layer. The IPsec standard itself supports both IPv4 and IPv6, but in IPv6, IPsec is defined as a mandatory feature. Accordingly, the IPsec security model is required to be supported by all IPv6 implementations in the near future. In IPv6, IPsec is implemented using the AH authentication header and the ESP extension header. Since IPv4 IPsec is presently available in nearly all client and server OS platforms, IPsec's advanced IPv6 security can be deployed by IT administrators immediately, without changing applications or networks. The importance of IPsec in IPv6 has grown in recent years as the U.S. Department of Defense and the federal government have mandates to buy IPv6-capable systems and to transition to IPv6-capable networks within a few years.

Next week, we will look at the implementation of Virtual Private Networks (VPN) in IPv6.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

IPSec and IPv6

IPSec is a framework of open standards (from IETF) that define policies for secure communication in a network. In addition, these standards also describe how to enforce these policies. Basically, IPSec is a collection of IP security measures that comprise an optional tunneling protocol for IPv6.

 

Using IPSec, participating peers (computers or machines) can achieve data confidentiality, data integrity, and data authentication at the network layer (i.e. Layer 3 of the Open Systems Interconnection 7-layer networking model). RFC 2401 specifies the base architecture for IPSec compliant systems. This RFC says "the goal of the architecture is to provide various security services for traffic at the IP layer, in both the IPv4 and IPv6 environments." See also RFC 2402, RFC 2406 and RFC 2407 for more details on IPSec. The main purpose of IPSec is to provide interoperable, high quality, cryptographically-based security for IPv4 and IPv6. It offers various security services at the IP layer and therefore, offers protection at this (i.e. IP) and higher layers. These security services are, for example, access control, connectionless integrity, data origin authentication, protection against replays (a form of partial sequence integrity), confidentiality (encryption), and limited traffic flow confidentiality.

 

Specifically, IPSec supports:

·         Data Encryption Standard (DES) 56-bit and Triple DES (3DES) 168-bit symmetric key encryption algorithms in IPSec client software.

·         Certificate authorities and Internet Key Exchange (IKE) negotiation. IKE is defined in RFC 2409.

·         Encryption that can be deployed in standalone environments between clients, routers, and firewalls.

·         Environments where it is used in conjunction with L2TP tunnelling.

From a usage point of view, here are three main advantages of IPSec:

·         Supported on various operating system platforms.

·         The right VPN solution if you want true data confidentiality for your networks.

·         An open standard, so interoperability between different devices is easy to implement.

 

IPSec has two different modes: transport mode (host-to-host) and tunnel mode (gateway-to-gateway or gateway-to-host). In transport mode, only the payload is encapsulated (the header is left intact) and the end-host (to which the IP packet is addressed) decapsulates the packet. In tunnel mode, the IP packet is entirely encapsulated (with a new header). The host (or gateway) specified in the new IP header decapsulates the packet. Note that, in tunnel mode, there is no need for IPSec client software to run on the end systems behind the gateway, but the communication between those client systems and the gateways is not protected.

 

IPSec supports authentication through an “authentication header” which is used to verify the validity of the originating address in the header of every packet stream. An “encapsulating security payload” header encrypts the entire datagram, based on the encryption algorithm chosen by the implementer.

 

IPSec traditionally implements secure remote access connections using virtual private network (VPN) tunneling protocols such as Layer 2 Tunneling Protocol (L2TP). Note that IPSec is not in itself a complete VPN mechanism. In fact, the use of IPSec has been changing in the last few years, as IPSec moves from the WAN into the LAN to secure internal network traffic against eavesdropping and modification.

When two computers (peers) want to communicate using IPSec, they mutually authenticate with each other first and then negotiate how to encrypt and digitally sign traffic they exchange. These IPSec communication sessions are called security associations (SAs).
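
Conceptually, once negotiated, a security association is a small record keyed by a Security Parameter Index (SPI). The sketch below shows the kind of information such a record holds; the field names are illustrative and are not taken from any particular IPsec implementation.

# Illustrative sketch of what a negotiated security association (SA) records.
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int                  # Security Parameter Index carried in AH/ESP headers
    peer: str                 # address of the other endpoint
    protocol: str             # "ESP" or "AH"
    encryption_alg: str       # e.g. 3DES or AES
    integrity_alg: str        # e.g. an HMAC variant
    key: bytes                # keying material from the IKE exchange
    lifetime_seconds: int     # SA is renegotiated when this expires

sa = SecurityAssociation(spi=0x1000, peer="192.0.2.10", protocol="ESP",
                         encryption_alg="AES-CBC", integrity_alg="HMAC-SHA256",
                         key=b"\x00" * 32, lifetime_seconds=3600)
print(sa.spi, sa.peer, sa.protocol)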

 

IPSec is a mandatory component of IPv6 and, therefore, the IPSec security model is required to be supported by all IPv6 implementations in the near future. In IPv6, IPSec is implemented using the AH authentication header and the ESP extension header. Since IPv4 IPSec is at present available in nearly all client and server OS platforms, the advanced IPSec IPv6 security can be deployed by IT administrators immediately, without changing applications or networks. The importance of IPSec in IPv6 has grown in recent years as the U.S. Department of Defense and the federal government have mandates to buy IPv6-capable systems and to transition to IPv6-capable networks within a few years.

Next week, we will look at the implementation of Virtual Private Networks (VPN) in IPv6.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

Routing in IPv6

 Routing is the process of forwarding packets between connected network segments. For IPv6-based networks, routing is the part of IPv6 that provides forwarding capabilities between hosts that are located on separate segments within a larger IPv6-based network.  

The IPv6 layer is the mailroom in which IPv6 data sorting and delivery occur. Each incoming or outgoing packet is called an IPv6 packet. An IPv6 packet contains both the source address of the sending host and the destination address of the receiving host. Unlike link-layer addresses, the IPv6 addresses in the IPv6 header typically remain the same as the packet travels across an IPv6 network. Routing is the primary function of IPv6. IPv6 packets are exchanged and processed on each host by using IPv6 at the Internet layer.

Above the IPv6 layer, transport services on the source host pass data in the form of TCP segments or UDP messages down to the IPv6 layer. The IPv6 layer creates IPv6 packets with source and destination address information that is used to route the data through the network. The IPv6 layer then passes packets down to the link layer, where IPv6 packets are converted into frames for transmission over network-specific media on a physical network. This process occurs in reverse order on the destination host.

IPv6 layer services on each sending host examine the destination address of each packet, compare this address to a locally maintained routing table, and then determine what additional forwarding is required. IPv6 routers are attached to two or more IPv6 network segments that are enabled to forward packets between them. 
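
That comparison against the routing table is a longest-prefix match: of all the routes that contain the destination address, the most specific one wins. A small sketch with Python's standard ipaddress module (the table and addresses are invented):

# Longest-prefix-match lookup against a toy IPv6 routing table.
import ipaddress

routing_table = {
    ipaddress.ip_network("2001:db8:1::/48"): "eth0",
    ipaddress.ip_network("2001:db8:1:2::/64"): "eth1",    # more specific route
    ipaddress.ip_network("::/0"): "default gateway",
}

def lookup(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)     # longest prefix wins
    return routing_table[best]

print(lookup("2001:db8:1:2::99"))   # eth1
print(lookup("2001:db8:9::1"))      # default gateway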

IPv6 uses the same types of routing protocols used in IPv4 networks, but with a few modifications to account for specific IPv6 requirements. Routing in IPv6 is almost identical to IPv4 routing under Classless Inter-Domain Routing (CIDR), except that the addresses are 128-bit IPv6 addresses instead of 32-bit IPv4 addresses. With very straightforward extensions, all of IPv4's routing algorithms (OSPF, RIP, IDRP, IS-IS, etc.) can be used to route IPv6.

IPv6 also includes simple routing extensions which support powerful new routing functionality. These capabilities include:

  • Provider Selection (based on policy, performance, cost, etc.)
  • Host Mobility (route to current location)
  • Auto-Readdressing (route to new address)

The new routing functionality is obtained by creating sequences of IPv6 addresses using the IPv6 Routing option. The routing option is used by an IPv6 source to list one or more intermediate nodes (or topological groups) to be "visited" on the way to a packet's destination. This is very similar to IPv4's Loose Source and Record Route option.

In order to make address sequences a general function, IPv6 hosts are in most cases required to reverse the routes in a packet they receive (if the packet was successfully authenticated using the IPv6 Authentication Header) containing address sequences, in order to return the packet to its originator. This approach is taken so that IPv6 host implementations support the handling and reversal of source routes from the start. This is the key to allowing them to work with hosts which implement new features such as provider selection or extended addresses. The address sequence facility of IPv6 can be used for provider selection, mobility, and readdressing. It is a simple but powerful capability.

In summary:

·         Routing in IPv6 is similar to routing IPv4 with CIDR, but with the flexibility that 128-bit addresses allow;

·         There is only a minimal modification to the dynamic routing protocols (OSPF, IDRP, RIP, IS-IS, BGP) in order to work with the IPv6 address format;

·         There are improved source routing options (the routing header), which are useful for provider selection, mobility, etc. in IPv6.

Next week, we will examine IPSec as implemented in IPv6.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

Features and Differences Between IPv6 and IPv4

Largely, IPv6 is a conservative extension of IPv4. Most transport- and application-layer protocols need little or no change to work over IPv6, with the exception of application protocols that embed network-layer addresses (such as FTP or NTPv3).

IPv6, as a protocol architecture, is not a radical departure from the architecture of IPv4. The same datagram delivery model is used, with the same minimal set of assumptions about the underlying network capabilities, and the same decoupling of the routing and forwarding capabilities.

However, IPv6 specifies a new packet format, designed to minimize packet-header processing. Since the headers of IPv4 and IPv6 are significantly different, the two protocols are not interoperable, and upgrade paths will be forklift upgrades in most cases.

Larger address space
IPv6 features a larger address space than that of IPv4: addresses in IPv6 are 128 bits long versus 32 bits in IPv4. The very large IPv6 address space supports a total of 2^128 (about 3.4 x 10^38) addresses. Put in a different perspective, this is about 2^52 addresses for every observable star in the known universe. (Do not be surprised to see your telephone assigned an IP address in the future!)

The longer addresses allow a better, systematic, hierarchical allocation of addresses and efficient route aggregation. With IPv4, complex Classless Inter-Domain Routing (CIDR) techniques were developed to make the best use of the small address space. (CIDR is an addressing and route-aggregation scheme; it is a way of using the existing 32-bit Internet address space more efficiently and is commonly used by Internet Service Providers.) The Internet Engineering Task Force (IETF) ROuting and ADdressing (ROAD) study of 1991 recognized that the IPv4 address space was going to be completely consumed at some point in the future of the Internet.

Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4, as discussed in RFC 2071 and RFC 2072. With IPv6, however, changing the prefix in a few routers can renumber an entire network ad hoc, because the host identifiers (the least-significant 64 bits of an address) are decoupled from the subnet identifiers and the network provider's routing prefix.

The size of a subnet in IPv6 is 2^64 addresses (a 64-bit subnet mask), which is the square of the size of the entire IPv4 Internet. Thus, actual address space utilization rates will likely be small in IPv6, but network management and routing will be more efficient.
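
These figures are easy to verify with a few lines of arithmetic; the star count below is the rough estimate commonly quoted, used purely for illustration.

# Quick check of the address-space arithmetic quoted above.
total = 2 ** 128
stars = 7 * 10 ** 22                  # rough estimate of observable stars (assumption)
print(f"{total:.3e}")                 # about 3.403e+38 addresses in total
print(f"{2 ** 64:.3e}")               # about 1.845e+19 addresses per /64 subnet
print(f"{total / stars:.3e}")         # roughly 4.9e+15, i.e. about 2**52 per star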

Confronting IPv4 exhaustion is a hot button topic within the IP and Internet community right now, especially in view of the low implementation rate of IPv6 today.
Stateless address auto configuration
Since 1993, the Dynamic Host Configuration Protocol (DHCP) has allowed systems to obtain an IPv4 address as well as other information such as the default router and Domain Name System (DNS) server. A similar protocol called DHCPv6 has been published for IPv6; however, IPv6 also has a stateless autoconfiguration protocol, which has no equivalent in IPv4.

This is an interesting aspect of IPv6. Although in most regards IPv6 is still IP and works pretty much the same as IPv4, the new protocol departs from IPv4 in some ways. With IPv4, you need a DHCP server to tell you your address if you do not want to resort to manual configuration. This works very well if there is a single DHCP server, but not so well when there is more than one and they supply conflicting information. It can also be hard to get a system to keep the same address across reboots with DHCP.

IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages. When first connected to a network, a host sends a link-local multicast router solicitation request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.

If IPv6 stateless address auto configuration (SLAAC) is unsuitable for an application, a host can use stateful configuration (DHCPv6) or be configured manually. Stateless auto configuration is not used by routers. (Stateless protocols are protocols that do not maintain information about a user’s session; each transmission is considered a new session. HTTP is a good example of a stateless protocol.) The good news is that IPv6 will not require Network Address Translation (NAT).

Multicast
Multicast, the ability to send a single packet to multiple destinations, is part of the base specification in IPv6. This is unlike IPv4, where it is optional (although usually implemented). IPv6 does not implement broadcast, the ability to send a packet to all hosts on the attached link; the same effect can be achieved by sending a packet to the link-local all hosts multicast group. (Multicast is communication between a single device and multiple members of a device group.)

Most environments, however, do not currently have their network infrastructures configured to route multicast packets; multicasting on a single subnet will work, but global multicasting might not.
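
Where multicast is available, an application opts in by joining a group on its socket. The hedged sketch below joins a hypothetical link-local group on a UDP socket using the usual Linux/BSD pattern; the group address and port are made up, and socket option availability varies by platform. (The all-nodes group ff02::1 mentioned above is joined automatically by every IPv6 node, so applications normally join a group of their own.)

# Hedged sketch: joining an IPv6 multicast group on a UDP socket.
import socket, struct

GROUP = "ff02::123"            # hypothetical link-local application group
PORT = 9999
IFINDEX = 0                    # 0 = let the kernel pick the interface

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.bind(("::", PORT))
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", IFINDEX)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)
# sock.recvfrom(1500) would now also receive datagrams sent to the group
sock.close()
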
Mandatory Network-Layer Security
Internet Protocol Security (IPSec), the protocol for IP encryption and authentication, forms an integral part of the base protocol suite in IPv6. IPSec support is mandatory in IPv6; this is unlike IPv4, where it is optional (but usually implemented). IPSec, however, is not widely used at present except for securing traffic between IPv6 Border Gateway Protocol routers. An “encapsulating security payload” header encrypts the entire datagram, based on the encryption algorithm chosen by the implementer.

IPSec supports authentication through an “authentication header” which is used to verify the validity of the originating address in the header of every packet stream.

Simplified processing by routers
A number of simplifications have been made to the packet header, and the process of packet forwarding has been simplified, in order to make packet processing by routers simpler and hence more efficient (a short parsing sketch follows the list below). Concretely:
·         The packet header in IPv6 is simpler than that used in IPv4, with many rarely-used fields moved to separate options; in effect, although the addresses in IPv6 are four times larger, the (option-less) IPv6 header is only twice the size of the (option-less) IPv4 header.

·         The IPv6 header is not protected by a checksum; integrity protection is expected to be assured by a transport-layer checksum. In effect, IPv6 routers do not need to re-compute a checksum when header fields (such as the Hop Limit) change. This improvement may have been made obsolete by the development of routers that perform checksum computation at line speed using dedicated hardware.

·         The Time-to-Live field of IPv4 has been renamed Hop Limit, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.
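
To make the fixed 40-byte header concrete, the sketch below unpacks one with Python's standard struct module: version, traffic class, flow label, payload length, next header, hop limit and the two 128-bit addresses. The sample packet bytes are fabricated for illustration.

# Unpack the fixed 40-byte IPv6 header (no header checksum, Hop Limit not TTL).
import struct, socket

def parse_ipv6_header(packet):
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,          # e.g. 6 = TCP, 17 = UDP, 58 = ICMPv6
        "hop_limit": hop_limit,
        "src": socket.inet_ntop(socket.AF_INET6, packet[8:24]),
        "dst": socket.inet_ntop(socket.AF_INET6, packet[24:40]),
    }

sample = struct.pack("!IHBB", (6 << 28) | 0x12345, 20, 17, 64) \
         + socket.inet_pton(socket.AF_INET6, "2001:db8::1") \
         + socket.inet_pton(socket.AF_INET6, "2001:db8::2")
print(parse_ipv6_header(sample))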

Mobility
Unlike Mobile IPv4 (MIPv4), Mobile IPv6 (MIPv6) avoids triangular routing and is therefore as efficient as normal IPv6. However, since neither MIPv6 nor MIPv4 are widely deployed today, this advantage is mostly theoretical.

Mobile IPv6 offers many improvements over Mobile IPv4; Mobile IP as a technology permits users to remain connected across wireline networks, while roaming between networks. This permits users to stay connected while on the way to the airport from home, rather than shutting down their personal digital assistant (PDA)/laptop at home, and reconnecting at the WiFi location at the airport.

Options Extensibility
IPv4 allows a fixed maximum size (40 bytes) of option parameters. In IPv6, options are implemented as additional extension headers after the IPv6 header, whose size is limited only by the size of the entire packet.

Jumbograms
IPv4 limits packets to 64 KB of payload. IPv6 has optional support for packets over this limit, referred to as jumbograms, which can be as large as 4 GB. The use of jumbograms may improve performance over high-MTU networks. The Jumbo Payload Option header indicates the presence of jumbograms.

Next, we will look at IPv6 packet format and IPv6 Addressing.
Till then, whatever you are routing, be it voice or data or video, may it be successful!
 

Historical Background of IPv6

Internet Protocol version 6 (IPv6) is the next-generation Internet Layer protocol for packet-switched internetworks and the Internet. IPv4 is currently the dominant Internet Protocol version, and was the first to receive widespread use. In December 1998, the Internet Engineering Task Force (IETF) designated IPv6 as the successor to version 4 by the publication of a Standards Track specification, RFC 2460. Internet Protocol Version 6 is abbreviated to IPv6 (where the "6" refers to it being assigned version number 6). The previous version of the Internet Protocol is version 4 (referred to as IPv4).

IPv6 is a new version of IP which is designed to be an evolutionary step from IPv4. It is a natural increment to IPv4. It can be installed as a normal software upgrade in internet devices and is interoperable with the current IPv4. Its deployment strategy is designed to not have any flag days or other dependencies. IPv6 is designed to run well on high performance networks (e.g. Gigabit Ethernet, OC-12, ATM, etc.) and at the same time still be efficient for low bandwidth networks (e.g. wireless). In addition, it provides a platform for new internet functionality that will be required in the near future.

IPv6 includes a transition mechanism which is designed to allow users to adopt and deploy IPv6 in a highly diffuse fashion and to provide direct interoperability between IPv4 and IPv6 hosts. The transition to a new version of the Internet Protocol must be incremental, with few or no critical interdependencies, if it is to succeed. The IPv6 transition allows the users to upgrade their hosts to IPv6, and the network operators to deploy IPv6 in routers, with very little coordination between the two.

The first publicly-used version of the Internet Protocol, Version 4 (IPv4), provides an addressing capability of about 4 billion addresses (2^32). This was deemed sufficient in the early design stages of the Internet, when the explosive growth and worldwide distribution of networks was not anticipated.

During the first decade of operation of the TCP/IP-based Internet, by the late 1980s, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the introduction of classless network redesign, it became clear that this would not suffice to prevent IPv4 address exhaustion and that further changes to the Internet infrastructure were needed. By the beginning of 1992, several proposed systems were being circulated, and by the end of 1992, the IETF announced a call for white papers (RFC 1550) and the creation of the "IP Next Generation" (IPng) area of working groups.

There are several key issues that should be considered when reviewing the design of the next generation internet protocol. Some are very straightforward. For example, the new protocol must be able to support large global internetworks. Others are less obvious. There must be a clear way to transition the current large installed base of IPv4 systems. It does not matter how good a new protocol is if there is no practical way to transition the current operational systems running IPv4 to the new protocol.


Growth is the basic issue that created the need for a next-generation IP. If anything is to be learned from our experience with IPv4, it is that addressing and routing must be capable of handling reasonable scenarios of future growth. It is therefore important to understand past growth and where future growth will come from.

Next week, we will get an introduction to the major differences between IPv4 and IPv6.
Till then, whatever you are routing, be it voice or data or video, may it be successful!
 

 

Intro

 

While most IT practitioners are aware of the importance of the Internet Protocol (IP), recent developments and discussions on and around this all-important protocol make it mandatory for all practising IT managers to have a more intimate knowledge of the protocol and its attendant uses, limitations, and facilities. Non-practitioners too deserve a very good knowledge of the capabilities of the IP phenomenon and its robust application in radically taking all of us to the next level.

For close to twelve months, CyberschuulNews talked to a few experts who were able and willing to share their thoughts and spread this knowledge via a column we decided to call Talking IP. The mandate now falls on Segun Sorunke, an Egba Chief and InfoTech Impresario.

Segun has been a Resource Consultant to THE CYBERSCHUUL from day one in 2001 and has talked IP to more than 80% of trainees whose programs cover the subject. Most trainees went away putting 'Engr.' behind Segun's name in their databases, despite our not making such a claim anywhere we might have profiled the Egba Chief.

In accepting to do the column, beating a second-best but equally eminent contact on the subject in the process, Segun said: 'In the column, I shall attempt to generate discussions on IP related subjects, and while we may unavoidably be technical in certain respects, we will work to make the column readable without being pedestrian, since it is essentially targeted at IT professionals and enthusiasts, particularly people specialising in IP work.' That suits the CyberschuulNews mission in this effort, and we shall publish Segun's scripts unedited.

In the present converged world of IT, a working knowledge of IP and other related protocols is essential, and hopefully we will be able to generate discussion in this column on emerging technologies in the IP world. We shall run the gamut from IP addressing to IP security; we shall explore emerging IPv6 technologies and the challenges of scaling from IPv4; and we will look into routing architectures on the Internet while not forgetting to look under the hood of routing technologies.

No discussion of IP technologies will be complete without due reference to, and discussion of, its "twin brother", the Transmission Control Protocol (TCP) of the famous TCP/IP protocol suite. While IP is fundamental to Internet addressing and routing, TCP is the king of Internet transport, used by most Internet applications such as electronic mail (email), file transfer, interactive Telnet, and web page access via HTTP, to name just a few.

There is a growing interest in IPv6, and the migration from IPv4 may be the most topical issue in the IP community in the coming months and years. In consideration of this, the next edition of Talking IP will take an introductory look at various aspects of IPv6 and lead on from there into other areas of interest. Even the worst procrastinators recognise the eventuality of migration to IPv6, given IPv4’s nearly depleted pool of network addresses. 

We welcome reactions to, and observations on, all aspects of IP and related topics that will appear in Talking IP, and we hope that, in the process, we will all come away with a better appreciation of the wonders of the Internet Protocol.

Till then, whatever you are routing, be it voice or data or video, may it be successful!

 

To subscribe to CyberschuulNews,
send a subscription request to
subscribe@cyberschuulnews.com