
Article information

  • Title: Networking in the new millennium
  • Author: Apfel, Audrey
  • Journal: Work Process Improvement Today
  • Print ISSN: 1073-2233
  • Year: 1998
  • Issue: Jun 1998
  • Publisher: Association for Work Process Improvement

Networking in the new millennium

Apfel, Audrey

We discuss two trends that will characterize networks in the new millennium: the convergence of voice and data networking architectures and the stresses of network computing on networks.

VOICE AND DATA NETWORKS CONVERGE

The convergence of networking architectures is at a critical point: The dominance of Internet Protocol (IP) as the universal network layer is ensured, the growth in data traffic is dwarfing voice networking volumes, and voice/data network integration is beginning.

In the mid-1980s, IBM and AT&T were poised for what was thought to be the information technology (IT) conflict of the century. Both companies had acquired or developed technologies that intruded into each other's territory. AT&T had entered the computer business through its development of the 3B2 minicomputers and through the Unix PC and the PC 6300. IBM had acquired Rolm for its private branch exchange (PBX) systems and held a stake in Satellite Business Systems to enter segments of the voice and transmission networking businesses. For those who lived through this period, the rapid changes and confusion were frightening. The clash of these titans around the convergence of voice and data networking was not a battle but a comical set of unprofitable and inappropriate sorties into "enemy" territory. The time and technology were not ripe for a true functional convergence of voice and data networking: neither AT&T nor IBM clearly understood the needs of enterprises, and neither company had effective ways to integrate systems where integration made sense.

However, the market may now be ready for network convergence for several reasons. First, a universal data networking architecture now exists that is based on IP. Today, neither the transport of IP data nor the definition of internetworking architecture is controlled by any vendor. Second, the power of microprocessor technology has increased by more than three orders of magnitude in the past 15 years. Desktop and server environments have standardized into Windows, Windows NT and Unix. In addition, enterprises' need to transport large quantities of non-voice information has created a situation where voice systems could be grafted onto data networks rather than the other way around. Therefore, in the next five years, to gain operational efficiency and provide expanded business opportunities, enterprises will be able to merge voice and data systems into one information handling network. When this occurs, enterprises will be confronted with the overarching question: "How will the convergence of IT, telecom and data networking technology force changes in enterprises' planning, acquisition and operation of networks and network-based applications?" We expect the convergence will not occur uniformly, and the nature and rate of convergence will have an impact on the cost of operations for enterprises that will be adding multimedia-aware applications or trying to take advantage of "deals" in transport technologies to reduce their telecom expenses.

We believe that a conflict will occur between the traditional internetworking vendors (e.g., Cisco Systems) and the traditional transport/voice vendors (e.g., Lucent Technologies and Nortel) in three areas: carrier infrastructures, private-network infrastructures and the access component.

The private network environment is strongly driven by the nature of desktop and server networking. Traditionally, these are shared-media, datagram-driven networks. Even where asynchronous transfer mode (ATM) systems are deployed, the ATM system is molded to appear to be a datagram network. Within a carrier, circuit-switched technologies have traditionally been employed but, because of the influence of the Internet, even carrier networks now contain datagram components. The carriers must have a strategy for billing for services before new technologies can be deployed, unlike private networks, where technologies can be deployed for the sole purpose of enabling communications.

Therefore, the "access battleground" is at the nexus of the conflict between the "Ciscos" and the "Lucents," but whether the nature of the converged network at the access point will align more closely with the internetworkers or the traditional carrier suppliers cannot be determined at this time.

NETWORK COMPUTING PLACES INCREASED STRESS ON NETWORKS

Network computing has offered enterprises the promise of a lower total cost of ownership, faster application deployment and applications availability to wider groups of users compared to traditional client/server architectures. Although the industry is buzzing with news about new desktop devices, whether Java Network Computers, Windows-Based Terminals or another product, we believe the emphasis on the desktop is largely misplaced. (For a discussion of the differences between network computing and network computers, two terms that are often incorrectly used interchangeably, please see the sidebar, "Network Computing vs. Network Computers.")

While industry leaders debate the merits of network computing architectures or the use of new desktop devices, enterprises must prepare for the effect on their networks. Stresses will be found across all network environments, from the actual movement of network computing executables across the network to the network behavior of the application after it begins to execute.

Distributed network computing projects that do not consider network constraints will fail. We estimate that 80 percent of network computing applications developed for wide-area networks (WANs) will fail to meet enterprise expectations because of network constraints. The underlying enterprise assumption has wrongly been that the current network is reliable enough, and has high-enough bandwidth and low-enough latency, to support a network-centric computing model on any device (including PCs). This is not true, for the following reasons:

First, a purely LAN environment consisting of 10-Mbit/s or 100-Mbit/s local-area network (LAN) segments connected using routers can usually support network computing in terms of sufficient bandwidth and latency. However, since offline work is nearly impossible with the current network computing architecture, enterprises need to create a highly fault-tolerant, highly available network. With PCs that have local applications, network outages do not necessarily stop employees from working. With network computers, LAN infrastructure improvements (such as the deployment of alternate routes) need to be made to assure users of a comparable level of availability.
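The availability argument behind alternate routes can be made concrete with a quick calculation. The following sketch is illustrative only: the 99.9 percent per-path availability figure is an assumption, not a number from this article.

```python
# Illustrative availability math for redundant LAN paths.
# Assumption: each path is independently available 99.9% of the time.
single_path = 0.999

# Probability that at least one of two independent paths is up:
# 1 - P(both paths down simultaneously)
dual_path = 1 - (1 - single_path) ** 2

# Expected downtime per year under each design.
downtime_single_hours = (1 - single_path) * 365 * 24          # ~8.8 hours/year
downtime_dual_seconds = (1 - dual_path) * 365 * 24 * 3600     # ~32 seconds/year

print(f"single path availability: {single_path:.4%}")
print(f"dual path availability:   {dual_path:.6%}")
print(f"downtime, single path: {downtime_single_hours:.1f} hours/year")
print(f"downtime, dual path:   {downtime_dual_seconds:.1f} seconds/year")
```

Under these assumptions, a second independent route cuts expected downtime from roughly nine hours a year to about half a minute, which is the kind of margin needed before network computers can match the resilience of PCs with local applications.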

Second, network computing can be deployed in campus environments that include LANs as well as campus backbones. Older backbones based on Fiber Distributed Data Interface (FDDI) or Token Ring technology may not be able to support the increases in network traffic that can result from a shift to network computing. Although average work loads on many LANs may be less than 10 percent, peak traffic hours can load networks to more than 80 percent of capacity, and connectionless networks rarely recover quickly from such indignities. Enterprises that are planning to deploy network computers must move to a high-speed switched architecture and resist the urge to radically recentralize servers in a small set of data centers unless their networks are designed to keep pace with the new devices. Keeping the servers close to user populations who are executing network computing applications will also help ease enterprises' traffic management burdens. The more centralized the enterprise's servers, the greater the need for high-speed campus backbone networks. By 2002, caching technology will be built into network switches to simplify the network design and limit peak loads caused by network computers (NCs) and network computing applications (0.8 probability).
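A back-of-envelope estimate shows why recentralizing servers stresses a campus backbone. All figures below (user count, per-user peak demand, backbone speed) are assumptions chosen for illustration; they are not from the article.

```python
# Rough peak-load estimate for a campus backbone after servers are
# recentralized into a data center. All inputs are assumed figures.
users = 400                    # NC users whose traffic now crosses the backbone
peak_per_user_kbps = 200       # assumed peak demand per user, in kbit/s
backbone_mbps = 100            # FDDI-class backbone capacity, in Mbit/s

peak_load_mbps = users * peak_per_user_kbps / 1000
utilization = peak_load_mbps / backbone_mbps

print(f"aggregate peak demand: {peak_load_mbps:.0f} Mbit/s")
print(f"backbone utilization:  {utilization:.0%}")
# 400 users at 200 kbit/s saturate 80% of a 100-Mbit/s backbone at peak,
# which is the loading level the article warns connectionless networks
# recover from poorly.
```

Doubling the user population or the per-user demand pushes the backbone past saturation, which is why the article recommends a high-speed switched architecture, or keeping servers near their users, before recentralizing.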

Third, enterprises can attempt to deploy network computing applications across WANs, but current private wide-area data networks are not ready to support network computing. This is because available, affordable WAN bandwidth is typically one-tenth to one-hundredth of LAN bandwidth. The latency over the WAN data network is affected not only by one enterprise's network traffic, but by that of other clients of the service provider as well. Although design and analysis tools for WANs are available, they are not widely used and their successful deployment requires substantial retraining of networking professionals. Design of new applications using a network computing model will mandate the use of distributed servers or application caching through 2002 because of insufficient WAN bandwidth (0.9 probability). In addition, to manage WAN quality of service effectively, enterprises will have to adopt a computer tool approach to network design and more tightly couple applications development to their networks.
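The one-tenth to one-hundredth bandwidth gap translates directly into user-visible wait time when an executable must be fetched at application start. The applet size and link speeds below are assumed for illustration; the WAN figure represents the low end of the range the article describes.

```python
# Time to move a network computing executable over a LAN vs. a WAN link.
# Assumptions: a 2 MB applet, a 10-Mbit/s shared LAN segment, and a
# 128-kbit/s WAN link (~1/100 of LAN bandwidth, per the article's range).
applet_bytes = 2 * 1024 * 1024   # 2 MB executable
lan_bps = 10_000_000             # 10 Mbit/s LAN segment
wan_bps = 128_000                # 128 kbit/s WAN link

lan_seconds = applet_bytes * 8 / lan_bps
wan_seconds = applet_bytes * 8 / wan_bps

print(f"LAN download: {lan_seconds:.1f} s")
print(f"WAN download: {wan_seconds:.1f} s")
# A sub-two-second LAN fetch becomes a multi-minute WAN fetch,
# before queuing delay from the provider's other customers is counted.
```

This is the arithmetic behind the 0.9-probability prediction above: with transfer times two orders of magnitude worse over the WAN, distributed servers or application caching become the only practical designs.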

Leading-edge enterprises deploying network computing applications will need to shift their resources in two directions: to allocate more budget resources to increase network bandwidth and to increase the number of network support staff who will manage the now hypercritical network. In addition, network planners must use network design tools to avoid deploying applications that cannot be supported over the WAN. Enterprises will need to adopt new approaches (e.g., caching) to resolve the network stresses that will appear when these new applications are adopted.

Audrey Apfel, vice president and research director at GartnerGroup, has 17 years of experience in networking and data communications. Audrey directs the research for GartnerGroup's Local Area Networking service, and her research focus is on the network infrastructure and protocols needed for intranet/Internet communications and network computing.

Before joining GartnerGroup, Audrey was a systems integration specialist at IBM, working with clients in the telecommunications and utility industries in the design and implementation of their LAN and WAN internetworks. Prior to this, she served in various IBM business planning, technical support and system development organizations. Audrey started her career at Bell Laboratories, designing and supporting new network architectures.

Audrey holds a bachelor's degree in computer science from Queens College and a master's degree from New York University.

Jay Pultz, research director for GartnerGroup's Network Technologies and Strategies service, focuses on wide-area network design.

In his 25 years in the telecommunications industry, he has served in a wide variety of roles: as a corporate network manager (AIG/Chemical Bank), business developer/ strategic planner (GE/RCA), consultant (Booz, Allen) and systems engineer (Bell Labs). His career spans the major technologies and functions that comprise telecommunications.

Jay holds both a bachelor's and a master's degree in electrical engineering; he also holds a master's degree in business administration.

Mike Zboray, vice president and research director at GartnerGroup, has 18 years of experience in networking and communications.

Prior to joining GartnerGroup, he was director of marketing at Ascom Timeplex, responsible for data networking products and network management systems. Mike began his career at AT&T Bell Laboratories, where he developed product specifications and architectures in areas such as fast-packet switching, local-area networks and protocol development.

Mike holds a bachelor's degree in engineering from Stevens Institute of Technology, a master's degree in electrical engineering from Rutgers University and a professional degree in electrical engineering from Columbia University.

Copyright Association for Work Process Improvement Jun 1998
