DEFINITION AND OVERVIEW
Definition
Dynamic synchronous transfer mode (DTM) is a broadband network architecture based on circuit switching augmented with dynamic reallocation of time slots. It is designed to provide high-speed networking with guaranteed transmission quality and the ability to adapt bandwidth quickly to traffic variations. DTM provides a service based on multicast, multirate channels with short set-up delay, and it supports applications with real-time quality-of-service (QoS) requirements as well as applications characterized by bursty, asynchronous traffic. DTM is intended for integrated-services networks, for both distribution and one-to-one communication, and can be used directly for application-to-application communication or as a carrier for higher-layer protocols such as the Internet protocol (IP).
Overview
This tutorial explores the development of DTM in light of the growing demand for network-transfer capacity. DTM combines the two basic technologies used to build high-capacity networks, circuit switching and packet switching, and thereby inherits the advantages of both. It also provides service-access solutions for city networks, enterprises, residential and small offices, content providers, video-production networks, and mobile network operators.
WHY DTM?
Over the last few years, the demand for network-transfer capacity has increased at an exponential rate. The impact of the Internet, the introduction of network services such as video and multimedia that require real-time support and multicast, and the globalization of network traffic heighten the need for cost-efficient networking solutions that support real-time traffic and the transmission of integrated data, audio, and video.
At the same time, the transmission capacity of optical fibers is growing significantly faster than the processing capacity of computers. Traditionally, the transmission capacity of network links has been the main bottleneck in communication systems, so most existing network techniques are designed to use available link capacity as efficiently as possible, with the support of large network buffers and elaborate data processing at switch points and interfaces. With the large data-transfer capacity offered today by fiber networks, however, a new bottleneck has emerged: processing and buffering at switch and access points in the network. This has created a need for networking protocols that do not depend on computer and storage capacity at the nodes but instead limit complex operations, minimizing processing at the nodes and maximizing transmission capacity.
Against this background, the DTM protocol was developed. DTM is designed to increase the use of fiber's transmission capacity and to provide support for real-time broadband traffic and multicasting. It is also designed to change the distribution of resources to the network nodes dynamically, based on changes in transfer-capacity demand.
DTM BASICS
CIRCUIT SWITCHING vs. PACKET SWITCHING
In principle, two basic technologies are used for building high-capacity networks: circuit switching and packet switching. In circuit-switched networks, network resources are reserved all the way from sender to receiver before the start of the transfer, thereby creating a circuit. The resources are dedicated to the circuit during the whole transfer. Control signaling and payload data transfers are separated in circuit-switched networks. Processing of control information and control signaling such as routing is performed mainly at circuit setup and termination. Consequently, the transfer of payload data within the circuit does not contain any overhead in the form of headers or the like. Traditional voice telephone service is an example of circuit switching.
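The circuit-switching model described above can be sketched in a few lines. This is a hypothetical illustration (the `Link`, `setup_circuit`, and `teardown_circuit` names are assumptions, not any real telephony or DTM API): capacity is reserved on every link of the path before any payload is sent, held for the whole transfer, and released only at termination.

```python
class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps  # remaining unreserved capacity

def setup_circuit(path, rate_mbps):
    """Reserve rate_mbps on every link of the path, or fail with no partial reservation."""
    if any(link.capacity < rate_mbps for link in path):
        return False                   # call is blocked end to end
    for link in path:
        link.capacity -= rate_mbps     # resources dedicated for the whole transfer
    return True

def teardown_circuit(path, rate_mbps):
    for link in path:
        link.capacity += rate_mbps     # resources released only at circuit termination

path = [Link(100), Link(100), Link(50)]
assert setup_circuit(path, 40)         # succeeds: every hop has enough capacity
assert not setup_circuit(path, 20)     # blocked: the last hop has only 10 Mbit/s left
```

Note that once the first circuit is set up, its 40 Mbit/s stays reserved even if the circuit carries no traffic, which is exactly the utilization drawback discussed below.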
Circuit-Switched Networks
An advantage of circuit-switched networks is that they allow large amounts of data to be transferred with guaranteed transmission capacity, thus providing support for real-time traffic. A disadvantage of circuit switching, however, is that for short-lived connections, such as the transfer of short messages, the setup delay may represent a large part of the total connection time, reducing the network's effective capacity. Moreover, reserved resources cannot be used by other users even while the circuit is inactive, which may further reduce link utilization.
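The setup-delay argument can be made concrete with a small calculation. The numbers here (100 ms set-up, a 10 Mbit/s circuit) are assumed purely for illustration:

```python
def setup_overhead(setup_ms, payload_bits, rate_bps):
    """Fraction of the total connection time spent in circuit set-up."""
    transfer_ms = payload_bits / rate_bps * 1000
    return setup_ms / (setup_ms + transfer_ms)

# Assumed values: 100 ms set-up delay, 10 Mbit/s circuit.
short = setup_overhead(100, 8_000, 10_000_000)          # a 1 kB message
long_ = setup_overhead(100, 8_000_000_000, 10_000_000)  # a 1 GB transfer
print(f"short message: {short:.1%} of connection time is set-up")
print(f"long transfer: {long_:.1%} of connection time is set-up")
```

For the short message, set-up dominates the connection almost entirely, while for the long transfer it is negligible, which is why circuit switching suits long-lived, high-volume connections.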
Packet-Switched Networks
Packet switching was developed to cope more effectively with the data-transmission limitations of circuit-switched networks during bursts of random traffic. In packet switching, a data stream is divided into standardized packets, each containing address, size, sequence, and error-checking information in addition to the payload data. The packets are then sent through the network, where packet switches or routers sort and direct each individual packet.
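The per-packet fields listed above can be sketched as a data structure. Field names and the use of a CRC-32 checksum are illustrative assumptions, not any specific protocol's header layout:

```python
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    dst_address: str   # complete destination information travels in every packet
    seq: int           # sequence number, since packets may arrive out of order
    size: int          # payload length in bytes
    checksum: int      # error-checking information over the payload
    payload: bytes

def packetize(dst, data, mtu=1500):
    """Divide a data stream into packets of at most mtu payload bytes."""
    return [
        Packet(dst, i, len(chunk), zlib.crc32(chunk), chunk)
        for i, chunk in enumerate(
            data[off:off + mtu] for off in range(0, len(data), mtu)
        )
    ]

packets = packetize("10.0.0.7", b"x" * 4000)
assert [p.size for p in packets] == [1500, 1500, 1000]
```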
Packet-switched networks are based on either connectionless or connection-oriented technology. In connectionless technology, such as IP, packets are treated independently of each other inside the network, because complete information about the packet's destination is carried in each packet. This means that packet order is not always preserved, because packets destined for the same receiver may take different paths through the network. In connection-oriented technology, such as asynchronous transfer mode (ATM), a path through the network, often referred to as a logical channel or virtual circuit, is established when data transfer begins. Each packet header then contains a channel identifier that the nodes use to guide the packet to the correct destination.
In many respects, a packet-switched network is a network of queues. Each network node contains queues where incoming packets wait before they are sent out on an outgoing link. If the rate at which packets arrive at a switch point exceeds the rate at which packets can be transmitted, the queues grow. This happens, for example, when packets from several incoming links have the same destination link. Queuing causes delay, and if the queues overflow, packets are lost; this condition is called congestion. Loss of data generally causes retransmissions, which may either add to the congestion or result in less effective utilization of the network.
Supporting real-time traffic in packet-switched networks therefore calls for advanced control mechanisms for buffer handling and scheduling. As a result, the complexity, the information-processing requirements, and therefore the need for computing power increase sharply when striving for high transmission capacity.
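The queue-growth mechanism described above can be shown with a toy model of one switch output queue (an illustrative simulation, not a real router): when two incoming links feed one outgoing link of the same speed, the buffer fills and further packets are dropped.

```python
from collections import deque

def simulate(arrivals_per_tick, departures_per_tick, buffer_size, ticks):
    """Return (final backlog, packets dropped) for a single output queue."""
    queue, dropped = deque(), 0
    for t in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < buffer_size:
                queue.append(t)
            else:
                dropped += 1              # buffer overflow: packet loss (congestion)
        for _ in range(min(departures_per_tick, len(queue))):
            queue.popleft()               # one transmission slot on the outgoing link
    return len(queue), dropped

# Two incoming links converging on one outgoing link of equal speed:
backlog, lost = simulate(arrivals_per_tick=2, departures_per_tick=1,
                         buffer_size=10, ticks=50)
print(backlog, lost)   # the queue grows to its limit, then every extra packet is lost
```

With arrivals matching the departure rate (`simulate(1, 1, 10, 50)`), the queue stays empty and nothing is dropped, which is the balanced case.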
THE DTM ADVANTAGE: COMBINING SYNCHRONOUS AND ASYNCHRONOUS MEDIA-ACCESS SCHEMES
In view of the above, DTM was developed in an effort to combine the simple, nonblocking, real-time traffic-supporting properties of circuit-switching technology with the dynamic resource-handling properties of packet-switching technology. Combining the advantages of synchronous and asynchronous media-access schemes, DTM forms a transport-network architecture that enables high transfer capacity with dynamic allocation of resources.
As the following sections will show, DTM is fundamentally a circuit-switched, time-division multiplexing (TDM) scheme, and, like other such schemes, it guarantees each host a certain bandwidth and devotes a large fraction of the available bandwidth to payload data transfer. In common with asynchronous schemes such as ATM, DTM supports dynamic reallocation of bandwidth between hosts. This means that the network can adapt to variations in the traffic and divide its bandwidth among the nodes according to demand.
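The combination of TDM guarantees with dynamic reallocation can be sketched as follows. This is a simplified illustration under assumed data structures (a fixed cyclic frame of slots divided among nodes), not the actual DTM slot-negotiation protocol:

```python
FRAME_SLOTS = 8  # assumed frame size; real DTM frames carry far more slots

def initial_allocation(nodes):
    """Divide the frame's slots evenly among the nodes, TDM-style."""
    per_node = FRAME_SLOTS // len(nodes)
    return {node: per_node for node in nodes}

def reallocate(alloc, needy, demand):
    """Move spare slots to `needy` from nodes holding more than one slot."""
    for donor in alloc:
        while donor != needy and alloc[needy] < demand and alloc[donor] > 1:
            alloc[donor] -= 1    # donor gives up a slot it is not using
            alloc[needy] += 1    # guaranteed bandwidth follows demand
    return alloc

alloc = initial_allocation(["A", "B", "C", "D"])   # 2 slots each
alloc = reallocate(alloc, "A", demand=5)
assert alloc["A"] == 5 and sum(alloc.values()) == FRAME_SLOTS
```

Each node keeps at least one slot, so every host retains a bandwidth guarantee, while idle capacity migrates to where the traffic is; this is the circuit-switching/packet-switching combination the text describes.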