Thursday, March 16, 2023

Unit-III Client/Server Network

 Class: MSc(SE)SY Unit III Sub: Client Server Technology 


Connectivity, communication interface technology, interprocess communication, wide area network technologies, network topologies (Token Ring, Ethernet, FDDI, CDDI), network management, client/server system development: software, client/server system hardware: network acquisition, PC-level processing unit, Macintosh, notebooks, pen, UNIX workstation, X-terminals, server hardware



Communications Interface Technology  

Connectivity and interoperability between the client workstation and the server are achieved  through a combination of physical cables and devices, and software that implements  communication protocols.  

LAN Cabling  

One of the most important and most overlooked parts of LAN implementation today is the physical cabling plant. A corporation's investment in cabling is significant. For most, though, it is viewed strictly as a tactical operation, a necessary expense. Implementation costs are too high, and maintenance is a nonbudgeted, nonexistent process. The results of this shortsightedness will be seen in real dollars through the life of the technology. Studies have shown that over 65 percent of all LAN downtime occurs at the physical layer.

It is important to provide a platform to support robust LAN implementation, as well as a  system flexible enough to incorporate rapid changes in technology. The trend is to  standardize LAN cabling design by implementing distributed star topologies around wiring  closets, with fiber between wiring closets. Desktop bandwidth requirements can be handled  by copper (including CDDI) for several years to come; however, fiber between wiring closets  will handle the additional bandwidth requirements of a backbone or switch-to-switch  configuration.  

Obviously, fiber to the desktop will provide extensive long-term capabilities; however,  because of the electronics required to support various access methods in use today, the initial  cost is significant. As recommended, the design will provide support for Ethernet, 4M and  16M Token Ring, FDDI, and future ATM LANs.  

Cabling standards include RG-58 A/U coaxial cable (thin-wire 10Base2 Ethernet), IBM Type 1 (shielded twisted pair for Token Ring), unshielded twisted pair (UTP for 10BaseT Ethernet or Token Ring), and optical fiber for the Fiber Distributed Data Interface (FDDI). Motorola has developed a wireless Ethernet LAN product—Altair—that uses 18-GHz frequencies. NCR's WaveLAN provides low-speed wireless LAN support.

Wireless LAN technology is useful and cost-effective when the cost of cable installation is high. In old buildings or locations where equipment is frequently moved, the cost of running cables may be excessive. In these instances wireless technology can provide an attractive alternative. Motorola provides an implementation that uses standard Ethernet NICs connecting a group of closely located workstations together with a transmitter. The transmitter communicates with a receiver across the room to provide the workstation-server connection. Recent reductions in the cost of this technology make it attractive for those applications where the cost of cabling is more than $250 per workstation.

Wireless communication is somewhat slower than wired communication. Industry tests  indicate a performance level approximately one-half that of wired 10-Mbps UTP Ethernet.  NCR's alternative wireless technology, WaveLAN, is a slow-speed implementation using  proprietary communications protocols and hardware. It also is subject to interference by other  transmitters, such as remote control electronics, antitheft equipment, and point-of-sale  devices.  

Ethernet IEEE 802.3  

Ethernet is the most widely installed network topology today. Ethernet networks have a maximum throughput of 10 Mbps. The first network interface cards (NICs) developed for Ethernet were much cheaper than corresponding NICs developed by IBM for Token Ring. Until recently, organizations that used non-IBM minicomputer and workstation equipment had few options other than Ethernet. Even today in a heterogeneous environment, there are computers for which only Ethernet NICs are available.

The large market for Ethernet NICs and the complete definition of the specification have allowed over 100 companies to produce these cards. Competition has reduced the price to little more than $100 per unit.

10BaseT Ethernet is a standard that enables the implementation of the Ethernet protocol over  telephone wires in a physical star configuration (compatible with phone wire installations).  Its robustness, ease of use, and low cost driven by hard competition have made 10BaseT the  most popular standards-based network topology. Its pervasiveness is unrivaled: In 1994, new  laptop computers will start to ship with 10BaseT built in. IBM is now fully committed to  support Ethernet across its product line.  

Token Ring IEEE 802.5  

IBM uses the Token Ring LAN protocol as the standard for connectivity in its products. In an  environment that is primarily IBM hardware and SNA connectivity, Token Ring is the  preferred LAN topology option. IBM's Token Ring implementation is a modified ring  configuration that provides a high degree of reliability since failure of a node does not affect  any other node. Only failure of the hub can affect more than one node. The hub isn't electric  and doesn't have moving parts to break; it is usually stored in a locked closet or other  physically secure area.  

Token Ring networks implement a wire transmission speed of 4 or 16 Mbps. Older NICs will support only the 4-Mbps speed, but the newer ones support both speeds. IBM and Hewlett-Packard have announced a technical alliance to establish a single 100-Mbps standard for both Token Ring and Ethernet networks. This technology, called 100VG-AnyLAN, will result in low-cost, high-speed network adapter cards that can be used in PCs and servers running on either Token Ring or Ethernet LANs. The first AnyLAN products are expected in early 1994 and will cost between $250 and $350 per port. IBM will be submitting a proposal to make the 100VG-AnyLAN technology a part of IEEE's 802.12 (or 100Base-VG) standard, which currently includes only Ethernet. A draft IEEE standard for the technology is expected by early 1994.

100VG-AnyLAN is designed to operate over a variety of cabling, including unshielded  twisted pair (Categories 3, 4, or 5), shielded twisted pair, and FDDI.  

The entire LAN operates at the speed of the slowest NIC. Most of the vendors today,  including IBM and SynOptics, support 16 Mbps over unshielded twisted-pair cabling (UTP).  This is particularly important for organizations that are committed to UTP wiring and are  considering the use of the Token Ring topology.  

Fiber Distributed Data Interface  

The third prevalent access method for Local Area Networks is Fiber Distributed Data Interface (FDDI). FDDI provides support for 100 Mbps over optical fiber, and offers improved fault tolerance by implementing logical dual counter-rotating rings; this is effectively running two LANs. The physical implementation of FDDI is in a star configuration, and provides support for distances of up to 2 km between stations.

FDDI is a next-generation access method. Although performance, capacity, and throughput  are assumed features, other advantages support the use of FDDI in high-performance  environments. FDDI's dual counter-rotating rings provide the inherent capability of end-node  fault tolerance. By use of dual homing hubs (the capability to have workstations and hubs  connected to other hubs for further fault tolerance), highly critical nodes such as servers or  routers can be physically attached to the ring in two distinct locations. Station Management  Technology (SMT) is the portion of the standard that provides ring configuration, fault isolation, and connection management. This is an important part of FDDI, because it delivers  tools and facilities that are desperately needed in other access method technologies.  

There are two primary applications for FDDI: first as a backbone technology for  interconnecting multiple LANs, and second, as a high-speed medium to the desktop where  bandwidth requirements justify it.  

Despite the rapid decrease in the cost of Token Ring and 10BaseT Ethernet cards, FDDI costs have been decreasing at a faster rate. As Figure 5.2 illustrates, the cost of 100-Mbps-capable FDDI NICs reached $550 by the end of 1992 and is projected to reach $400 by 1995. The costs of installation are dropping as preterminated cable reaches the market. Northern Telecom is anticipating, with its FibreWorld products, a substantial increase in installed end-user fiber driven by the bandwidth demands of multimedia and the availability requirements of business-critical applications.

Copper Distributed Data Interface  

The original standards in the physical layer specified optical fiber support only. Many  vendors, however, have developed technology that enables FDDI to run over copper wiring.  Currently, there is an effort in the ANSI X3T9.5 committee to produce a standard for FDDI  over Shielded Twisted Pair (IBM compliant cable), as well as Data grade unshielded twisted  pair. Several vendors, including DEC, IBM, and SynOptics are shipping an implementation  that supports STP and UTP. 


Ethernet versus Token Ring versus FDDI  

The Ethernet technique works well when the cable is lightly loaded but, because of collisions  that occur when an attempt is made to put data onto a busy cable, the technique provides poor  performance when the LAN utilization exceeds 50 percent. To recover from the collisions,  the sender retries, which puts additional load on the network. Ethernet users avoid this  problem by creating subnets that divide the LAN users into smaller groups, thus keeping a  low utilization level.  

Despite the widespread implementation of Ethernet, Token Ring installations are growing at  a fast rate for client/server applications. IBM's commitment to Ethernet may slow this  success, because Token-Ring will always cost more than Ethernet.  

Figure 5.3 presents the results of a recent study of installation plans for Ethernet, Token Ring, and FDDI. The analysis predicts a steady increase in planned Token Ring installations from 1988 until the installed base is equivalent in 1996. However, this analysis does not account for the emergence of a powerful new technology that entered the marketplace in 1993: Asynchronous Transfer Mode, or ATM. It is likely that by 1996 ATM will dominate all new installations and will gradually replace existing installations by 1999.

As Figure 5.4. illustrates, Token Ring performance is slightly poorer on lightly loaded LANs  but shows linear degradation as the load increases, whereas Ethernet shows exponential  degradation after loading reaches 30 percent capacity.  

Figure 5.5 illustrates the interoperability possible today with routers from companies such as Cisco, Proteon, Wellfleet, Timeplex, Network Systems, and 3Com. Most large organizations should provide support for the three different protocols and install LAN topologies similar to the one shown in Figure 5.5. Multiprotocol routers enable LAN topologies to be interconnected.

Asynchronous Transfer Mode (ATM)  

ATM has been chosen by CCITT as the basis for its Broadband Integrated Services Digital  Network (B-ISDN) services. In the USA, an ANSI-sponsored subcommittee also is  investigating ATM.  

The integrated support for all types of traffic is provided by the implementation of multiple  classes of service categorized as follows:  

Constant Bit Rate (CBR): connection-oriented with a timing relationship between the source and destination, for applications such as 64-Kbps voice or fixed-bit-rate video

Variable Bit Rate (VBR): connection-oriented with a timing relationship between the source and destination, such as variable bit rate video and audio  

Bursty traffic: having no end-to-end timing relationship, such as computer data and LAN-to-LAN traffic


ATM can make the "computing anywhere" concept a reality because it eventually will be implemented seamlessly both in the LAN and in the WAN. By providing a single network fabric for all applications, ATM also gives network managers the flexibility required to respond promptly to business change and new applications. (See Figure 5.6.)

Hubs 

One of the most important technologies in delivering LAN technology to mainstream  information system architecture is the intelligent hub. Recent enhancements in the  capabilities of intelligent hubs have changed the way LANs are designed. Hubs owe their  success to the efficiency and robustness of the 10BaseT protocol, which enables the  implementation of Ethernet in a star fashion over Unshielded Twisted Pair. Now commonly  used, hubs provide integrated support for the different standard topologies (such as Ethernet,  Token-Ring, and FDDI) over different types of cabling. By repeating or amplifying signals  where necessary, they enable the use of high-quality UTP cabling in virtually every situation.  

These intelligent hubs provide the necessary functionality to distribute a structured hardware  and software system throughout networks, serve as network integration and control points,  provide a single platform to support all LAN topologies, and deliver a foundation for  managing all the components of the network.  

There are three different types of hubs. Workgroup hubs support one LAN segment and are  packaged in a small footprint for small branch offices. Wiring closet hubs support multiple  LAN segments and topologies, include extensive management capabilities, and can house  internetworking modules such as routers or bridges. Network center hubs, at the high end,  support numerous LAN connections, have a high-speed backplane with flexible connectivity  options between LAN segments, and include fault tolerance features.  

Hubs have evolved to provide tremendous flexibility for the design of the physical LAN  topologies in large office buildings or plants. Various design strategies are now available.  

The distributed backbone strategy takes advantage of the capabilities of the wiring closet  hubs to bridge each LAN segment onto a shared backbone network. This method is effective  in large plants where distances are important and computing facilities can be distributed. (See  Figure 5.7.)  

The collapsed backbone strategy provides a cost-effective alternative that enables the  placement of all LAN servers in a single room and also enables the use of a single high performance server with multiple LAN attachments. This is particularly attractive because it  provides an environment for more effective LAN administration by a central group, with all  servers easily reachable. It also enables the use of high-capacity, fault-tolerant  internetworking devices to bridge all LAN segments to form an integrated network. (See  Figure 5.8.)  

Hubs are also an effective vehicle to put management intelligence throughout the LANs in a  corporation, allowing control and monitoring capabilities from a Network Management  Center. This is particularly important as LANs in branch offices become supported by a  central group. 


Internetworking Devices Bridges and Routers  

Internetworking devices enable the interconnection of multiple LANs in an integrated  network. This approach to networking is inevitably supplanting the terminal-to-host networks  as the LAN becomes the preferred connectivity platform to all personal, workgroup, or  corporate computing facilities.  

Bridges provide the means to connect two LANs together—in effect, to extend the size of the LAN by dividing the traffic and enabling growth beyond the physical limitations of any one topology. Bridges operate at the data link layer of the OSI model, which makes them topology-specific. Thus, bridging can occur between identical topologies only (Ethernet-to-Ethernet, Token Ring-to-Token Ring). Source-Route Transparent bridging, a technology that enables bridging between Ethernet and Token-Ring LANs, is seldom used.

Although bridges may cost less, some limitations must be noted. Forwarding of broadcast  packets can be detrimental to network performance. Bridges operate promiscuously,  forwarding packets as required. In a large internetwork, broadcasts from devices can  accumulate, effectively taking away available bandwidth and adding to network utilization.  "Broadcast storms" are rarely predictable, and can bring a network completely to a halt.  Complex network topologies are difficult to manage. Ethernet bridges implement a simple  decision logic that requires that only a single path to a destination be active. Thus, in complex  meshed topologies, redundant paths are made inoperative, a situation that rapidly becomes  ineffective as the network grows.  

Routers operate at the network layer of the OSI model. They provide the means to intelligently route traffic addressed from one LAN to another. They support the transmission of data between multiple standard LAN topologies. Routing capabilities and strategies are inherent to each network protocol. IP can be routed through the OSPF routing algorithm, which differs from the routing strategy for Novell's IPX/SPX protocol. Intelligent routers can handle multiple protocols; most leading vendors carry products that can support mixes of Ethernet, Token Ring, FDDI, and from 8 to 10 different protocols.

Transmission Control Protocol/Internet Protocol  

Many organizations were unable to wait for the completion of the OSI middle-layer protocols during the 1980s. Vendors and users adopted the Transmission Control Protocol/Internet Protocol (TCP/IP), which was developed for the United States military's Defense Advanced Research Projects Agency (DARPA) ARPANET network. ARPANET was one of the first layered communications networks and established the precedent for successful implementation of technology isolation between functional components. Today, the Internet is a worldwide interconnected network of universities, research, and commercial establishments; it supports thirty million US users and fifty million worldwide users. Additional networks are connected to the Internet every hour of the day. In fact, growth is now estimated at 15 percent per month. The momentum behind the Internet is tremendous.

The TCP/IP protocol suite is now being used in many commercial applications. It is particularly evident in internetworking between different LAN environments. TCP/IP is specifically designed to handle communications through "networks of interconnected networks." In fact, it has now become the de facto protocol for LAN-based client/server connectivity and is supported on virtually every computing platform. More importantly, most interprocess communications and development tools embed support for TCP/IP where multiplatform interoperability is required. It is worth noting that IBM has followed this growth and not only provides support for TCP/IP on all its platforms, but now enables the transport of its own interoperability interfaces (such as CPI-C and APPC) on TCP/IP.

TCP/IP's Architecture  

The TCP/IP protocol suite is composed of the following components: a network protocol (IP) with its routing logic and control protocols such as ICMP, two transport protocols (TCP and UDP), and a series of session, presentation, and application services. The following sections highlight those of interest.

Internet Protocol  

IP represents the network layer and is equivalent to OSI's IP or X.25. A unique network address is assigned to every system, whether the system is connected to a LAN or a WAN. IP comes with its associated routing protocols and lower-level functions such as the network-to-physical Address Resolution Protocol (ARP). Commonly used routing protocols include RIP, OSPF, and Cisco's proprietary IGRP. OSPF has been adopted by the community as the standards-based preferred protocol for large networks.

Transport Protocols  

TCP provides Transport services over IP. It is connection-oriented, meaning it requires a  session to be set up between two parties to provide its services. It ensures end-to-end data  transmission, error recovery, ordering of data, and flow control. TCP provides the kind of  communications that users and programs expect to have in locally connected sessions.  

UDP provides connectionless transport services, and is used in very specific applications that  do not require end-to-end reliability such as that provided by TCP.  
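
To make the contrast concrete, the following minimal Python sockets sketch sends the same payload over TCP and over UDP; the server address, port, and payload are assumptions for illustration only, and error handling is omitted.

import socket

PAYLOAD = b"status request"          # illustrative payload
SERVER = ("192.0.2.10", 5000)        # assumed address and port

# TCP: a session must be set up before data flows; the transport layer
# then provides ordering, error recovery, and flow control.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(SERVER)              # connection establishment (handshake)
    tcp.sendall(PAYLOAD)             # reliable, ordered byte stream
    reply = tcp.recv(4096)

# UDP: connectionless; each datagram stands alone, so the application
# must tolerate loss, duplication, and reordering itself.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(PAYLOAD, SERVER)      # fire-and-forget datagram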

Telnet  

Telnet is an application service that uses TCP. It provides terminal emulation services and  supports terminal-to-host connections over an internetwork. It is composed of two different  portions: a client entity that provides services to access hosts and a server portion that  provides services to be accessed by clients. Even workstation operating systems such as OS/2  and Windows can provide telnet server support, thus enabling a remote user to log onto the  workstation using this method.  

File Transfer Protocol (FTP)  

FTP uses TCP services to provide file transfer services to applications. FTP includes a client  and server portion. Server FTP listens for a session initiation request from client FTP. Files  may be transferred in either direction, and ASCII and binary file transfer is supported. FTP  provides a simple means to perform software distribution to hosts, servers, and workstations.  
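
As a sketch of the client portion described above, Python's standard ftplib can initiate a session with a server FTP and move files in binary or ASCII mode. The host name, credentials, and file names below are assumptions, not part of the original text.

from ftplib import FTP

# Assumed host, credentials, and file names, purely for illustration.
with FTP("ftp.example.com") as ftp:
    ftp.login(user="anonymous", passwd="guest@example.com")
    # Binary transfer: retrieve a file from the server to the client.
    with open("patch.bin", "wb") as local:
        ftp.retrbinary("RETR patch.bin", local.write)
    # ASCII transfer: send a text file from the client to the server.
    with open("notes.txt", "rb") as local:
        ftp.storlines("STOR notes.txt", local)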

Simple Network Management Protocol (SNMP) 


SNMP provides intelligence and services to effectively manage an internetwork. It has been  widely adopted by hub, bridge, and router manufacturers as the preferred technology to  monitor and manage their devices.  

SNMP uses UDP to support communications between agents—intelligent software that runs  in the devices—and the manager, which runs in the management workstation. Two basic  forms of communications can occur: SNMP polling (in which the manager periodically asks  the agent to provide status and performance data) and trap generation (in which the agent  proactively notifies the manager that a change of status or an anomaly is occurring).  

Network File System (NFS)  

The NFS protocol runs over IP and enables servers to share disk space and files in the same way a Novell or LAN Manager network server does. It is useful in environments in which servers are running different operating systems. However, it does not offer support for the same administration facilities that a NetWare environment typically provides.

Simple Mail Transfer Protocol (SMTP)  

SMTP uses TCP connections to transfer text-oriented electronic mail among users on the  same host or among hosts over the network. Developments are under way to adopt a standard  to add multimedia capabilities (MIME) to SMTP. Its use is widespread on the Internet, where  it enables any user to reach millions of users in universities, vendor organizations, standards  bodies, and so on. Most electronic mail systems today provide some form of SMTP gateway  to let users benefit from this overall connectivity.  
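
A minimal sketch of an SMTP client using Python's standard smtplib, with a MIME attachment built through the email package; the relay host and mailbox addresses are assumptions for illustration.

import smtplib
from email.message import EmailMessage

# Assumed relay host and mailbox addresses, for illustration only.
msg = EmailMessage()
msg["From"] = "clerk@example.com"
msg["To"] = "registrar@example.org"
msg["Subject"] = "Status report"
msg.set_content("The weekly report is attached.")
# MIME is what lets non-text content ride along with the text message.
msg.add_attachment(b"%PDF-1.4 ...", maintype="application",
                   subtype="pdf", filename="report.pdf")

with smtplib.SMTP("mail.example.com", 25) as relay:
    relay.send_message(msg)    # the transfer rides on a TCP connection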

TCP/IP and Internetworks  

Interestingly, the interconnected LAN environment exhibits many of the same characteristics  found in the environment for which TCP/IP was designed. In particular  

Routing: Internetworks need support for routing; TCP/IP environments route traffic efficiently using protocols such as OSPF.

Connections versus Connectionless: LAN activity includes both; the TCP/IP  protocol suite efficiently supports both within an integrated framework.  

Administrative Load Sensitivity: LAN administrative support is usually limited; in contrast to IBM's SNA, TCP/IP environments contain a tremendous amount of dynamic capability, in which devices and networks are dynamically discovered and routing tables are automatically maintained and synchronized.

Networks of Networks: TCP/IP provides extreme flexibility in the administrative approach to the management of federations of networks. Taking advantage of its dynamic nature, it enables very independent management of parts of a network (where appropriate).

Vendor Products 


One of the leading vendors providing TCP/IP support for heterogeneous LANs is FTP  Software of Wakefield, Massachusetts, which has developed the Clarkson Packet Drivers.  These drivers enable multiple protocols to share the same network adapter. This is  particularly useful, if not necessary, for workstations to take advantage of file and print  services of a NetWare server, while accessing a client/server application located on a UNIX  or Mainframe server.  

IBM and Digital both provide support for TCP/IP in all aspects of their products'  interoperability. Even IBM's LU6.2/APPC specification can now run over a TCP/IP network,  taking advantage of the ubiquitous nature of the protocol. TCP/IP is widely implemented, and  its market presence will continue to grow.  

Interprocess Communication  

At the top of the OSI model, interprocess communications (IPCs) define the format for application-level exchanges between processes. In the client/server model, there is always a need for interprocess communications. IPCs take advantage of services provided by protocol stacks such as TCP/IP, LU6.2, DECnet, or Novell's IPX/SPX. In reality, a great deal of IPC is involved in most client/server applications, even where it is not visible to the programmer. For example, a programmer using ORACLE tools ends up generating code that uses IPC capabilities embedded in SQL*Net, which provide the communications between the client application and the server.

The use of IPC is inherent in multitasking operating environments. The various active tasks  operate independently and receive work requests and send responses through the appropriate  IPC protocols. To effectively implement client/server applications, IPCs are used that operate  equivalently between processes in a single machine or across machine boundaries on a LAN  or a WAN.  

IPCs should provide the following services:  

Protocol for coordinating sending and receiving of data between processes  

Queuing mechanism to enable data to be entered asynchronously and faster than it is  processed  

Support for many-to-one exchanges (a server dealing with many clients)

Network support, location independence, integrated security, and recovery

Remote procedure support to invoke a remote application service

Support for complex data structures  

Standard programming language interface  

All these features should be implemented with little code and excellent performance.

Peer-to-Peer Protocols


A peer-to-peer protocol is a protocol that supports communications between equals. This type  of communication is required to synchronize the nodes involved in a client/server network  application and to pass work requests back and forth.  

Peer-to-peer protocols are the opposite of the traditional dumb terminal-to-host protocols.  The latter are hierarchical setups in which all communications are initiated by the host.  NetBIOS, APPC, and Named Pipes protocols all provide support for peer-to-peer processing.  

NetBIOS  

The Network Basic I/O System (NetBIOS) is an interface between the transport and session  OSI layers that was developed by IBM and Sytek in 1984 for PC connectivity. NetBIOS is  used by DOS and OS/2 and is commonly supported along with TCP/IP. Many newer UNIX  implementations include the NetBIOS interface under the name RFC to provide file server  support for DOS clients.  

NetBIOS is the de facto standard today for portable network applications because of its IBM origins and its support for Ethernet, Token Ring, ARCnet, StarLAN, and serial port LANs.

The NetBIOS commands provide the following services:  

General: Reset, Status, Cancel, Alert, and Unlink. The general services provide  miscellaneous but essential administrative networking services.  

Name: Add, Add Group, Delete, and Find. The naming services provide the  capability to install a LAN adapter card with multiple logical names. Thus, a remote  adapter can be referred to by a logical name such as Hall Justice, R601 rather than its  burned-in address of X'1234567890123456'.  

Session: Call, Listen, Send, Chain Send, Send No-Ack, Receive, Receive Any, Hang  Up, and Status. Sessions provide a reliable logical connection service over which a  pair of network applications can exchange information. Each packet of information  that gets exchanged over a session is given a sequence number, through which it is  tracked and individually acknowledged. The packets are received in the order sent and  blocked into user messages. Duplicate packets are detected and discarded by the  sessions services. Session management adds approximately five percent overhead to  the line protocol.  

Datagram: Send, Send-Broadcast, Receive, and Receive-Broadcast. Datagrams provide a simple but unreliable transmission service, with powerful broadcast capabilities. Datagrams can be sent to a named location, to a selected group (multicast), or to all locations on the network (broadcast). There is no acknowledgment or tracking of the datagram. Applications requiring a guarantee of delivery and successful processing must devise their own schemes to support such acknowledgment.

Application Program-to-Program Communication 


The application program-to-program communication (APPC) protocol provides the necessary IPC support for peer-to-peer communications across an SNA network. APPC provides the program verbs in support of the LU6.2 protocol. This protocol is implemented on all IBM and many other vendor platforms. Unlike NetBIOS or Named Pipes, APPC provides the LAN and WAN support to connect with an SNA network that may interconnect many networks.

Standards for peer-to-peer processing have evolved and have been accepted by the industry. IBM defined the LU6.2 protocol to support the handshaking necessary for cooperative processing between intelligent processors. Most vendors provide direct support for LU6.2 protocols in their WAN offerings, and the OSI committees have agreed to define the protocol as part of the OSI standard for peer-to-peer applications. A recently quoted comment, "The U.S. banking system would probably collapse if a bug were found in IBM's LU6.2," points out the prevalence of this technology in highly reliable networked transaction environments.

Programmers have no need or right to work with LU6.2 directly. Even with the services  provided by APIs, such as APPC, the interface is unreasonably complex, and the  opportunities for misuse are substantial. Vendors such as PeerLogic offer excellent interface  products to enable programs to invoke the functions from COBOL or C. High-level  languages, such as Windows 4GL, access network transparency products such as Ingres Net  implemented in the client and server (or SQL*Net in Oracle's case).  

These network products basically map layers five and six of the OSI model, generate LU6.2  requests directly to access remote SQL tables, and invoke remote stored procedures. These  products include all the necessary code to handle error conditions, build parameter lists,  maintain multiple sessions, and in general remove the complexity from the sight of the  business application developer.  

The power of LU6.2 does not come without complexity. IBM has addressed this with the definition of a Common Programming Interface for Communications (CPI-C). Application program-to-program communication (APPC) is the API used by application programmers to invoke LU6.2 services. Nevertheless, a competent VTAM systems programmer must be involved in establishing the connection between the LAN node and the SNA network. The APPC verbs provide considerable application control and flexibility. Effective use of APPC is achieved by use of application interface services that isolate the specifics of APPC from the developer. These services should be built once and reused by all applications in an installation.

APPC supports conversational processes and so is inherently half-duplex in operation. The  use of parallel sessions provides the necessary capability to use the LAN/WAN connection  bandwidth effectively. In evaluating LU6.2 implementations from different platforms,  support for parallel sessions is an important evaluation criterion unless the message rate is  low.  

LU6.2 is the protocol of choice for peer-to-peer communications from a LAN into a WAN when the integrity of the message is important. Two-phase commit protocols for database update at distributed locations will use LU6.2 facilities to guarantee commitment of all or none of the updates. Because of LU6.2 support within DECnet and the OSI standards, developers can provide message integrity in a multiplatform environment.


Named Pipes  

Named Pipes is an IPC mechanism that supports peer-to-peer processing through the provision of two-way communication between unrelated processes on the same machine or across the LAN. No WAN support currently exists. Named Pipes are an OS/2 IPC. The server creates the pipe and waits for clients to access it. A useful compatibility feature of Named Pipes supports standard OS/2 file service commands for access. Multiple clients can use the same named pipe concurrently. Named Pipes are easy to use, compatible with the file system, and provide local and remote support. As such, they provide the IPC of choice for client/server software that does not require the synchronization or WAN features of APPC.

Named Pipes provide strong support for many-to-one IPCs. They take advantage of standard  OS/2 and UNIX scheduling and synchronization services. With minimal overhead, they  provide the following:  

A method of exchanging data and control information between different computers

Transparency of the interface to the network

API calls that facilitate the use of remote procedure calls (RPCs)  

The use of an RPC across a named pipe is particularly powerful because it enables the  requester to format a request into the pipe with no knowledge of the location of the server.  The server is implemented transparently to the requester on "some" machine platform, and  the reply is returned in the pipe. This is a powerful facility that is very easy to use. Named  Pipes support should become widespread because Novell and OSF have both committed the  necessary threads support.  
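
OS/2 (and later Windows NT) implements Named Pipes natively; as a rough local analogue under stated assumptions, the UNIX FIFO sketch below shows the same create-then-wait pattern in Python. The pipe path and message are illustrative, and unlike true Named Pipes this sketch works only on a single UNIX machine, not across a LAN.

import os

FIFO = "/tmp/demo_pipe"              # illustrative pipe name

# The "server" creates the named pipe and waits for a client to open it.
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

pid = os.fork()
if pid == 0:
    # Client: opens the pipe by name and writes a request into it,
    # using ordinary file commands, much as the text describes for OS/2.
    with open(FIFO, "w") as pipe:
        pipe.write("lookup account 1234\n")
    os._exit(0)
else:
    # Server: reads the request without needing to know who the writer is.
    with open(FIFO, "r") as pipe:
        request = pipe.readline()
    os.wait()
    print("server received:", request.strip())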

One of the first client/server online transaction processing (OLTP) products on the market,  Ellipse, is independent of any communications method, although it requires networking  platforms to have some notion of sessions. One of the major reasons Cooperative Solutions  chose OS/2 and LAN Manager as the first Ellipse platform is OS/2 LAN Manager's Named  Pipes protocol, which supports sessions using threads within processes.  

Ellipse uses Named Pipes for both client/server and interprocess communications on the server, typically between the Ellipse application server and the database server, to save machine instructions and potentially reduce network traffic. Ellipse enables client/server conversations to take place either between the Ellipse client process and the Ellipse server process or between the Ellipse client process and the DBMS server, bypassing the Ellipse server process. In most applications, clients will deal with the DBMS through the Ellipse server, which is designed to reduce the number of request-response round trips between clients and servers by synchronizing matching sets of data in the client's working storage and the server DBMS.

Ellipse uses its sessions to establish conversations between clients and servers. The product  uses a named pipe to build each client connection to SQL Server. Ellipse uses a separate  process for Named Pipes links between the Ellipse server and the SQL Server product.  

Ellipse also uses sessions to perform other tasks. For example, it uses a named pipe to emulate cursors in an SQL server database management system (DBMS). Cursors are a handy way for a developer to step through a series of SQL statements in an application. (Sybase doesn't have cursors.) Ellipse opens up Named Pipes to emulate this function, simultaneously passing multiple SQL statements to the DBMS. An SQL server recognizes only one named pipe per user, so Ellipse essentially manages the alternating of a main session with secondary sessions.

On the UNIX side, TCP/IP with the Sockets Libraries option appears to be the most popular  implementation. TCP/IP supports multiple sessions but only as individual processes.  Although UNIX implements low-overhead processes, there is still more overhead than  incurred by the use of threads. LAN Manager for UNIX is an option, but few organizations  are committed to using it yet.  

Windows 3.x client support is now provided with the same architecture as the OS/2  implementation. The Ellipse Windows client will emulate threads. The Windows client  requires an additional layer of applications flow-control logic to be built into the Ellipse  environment's Presentation Services. This additional layer will not be exposed to applications  developers, in the same way that Named Pipes were not exposed to the developers in the first  version of the product.  

The UNIX environment lacks support for threads in most commercial implementations.  Cooperative Solutions hasn't decided how to approach this problem. Certainly, the sooner  vendors adopt the Open Software Foundation's OSF/1 version of UNIX, which does support  threads, the easier it will be to port applications, such as Ellipse, to UNIX.  

The missing piece in UNIX thread support is the synchronization of multiple requests to the  pipe as a single unit of work across a WAN. There is no built-in support to back off the effect  of previous requests when a subsequent request fails or never gets invoked. This is the  scenario in which APPC should be used.  

Anonymous Pipes  

Anonymous pipes are an OS/2 facility that provides an IPC for parent-and-child communications in a spawned-task multitasking environment. Parent tasks spawn child tasks to perform asynchronous processing. The facility provides a memory-based, fixed-length circular buffer, shared with the use of read and write handles. These handles are the OS/2 main storage mechanism to control resource sharing. This is a high-performance means of communication when the destruction or termination of a parent task necessitates the termination of all children and in-progress work.
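
A minimal sketch of the parent/child anonymous-pipe pattern, written with POSIX calls in Python because the OS/2 API itself is not shown in the text; the message content is an assumption. The pipe has no name, only read and write handles that the child inherits.

import os

# Create the anonymous pipe: a pair of read/write handles, no name.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child task: performs its asynchronous work and reports back.
    os.close(read_fd)
    os.write(write_fd, b"child finished step 1\n")
    os.close(write_fd)
    os._exit(0)
else:
    # Parent task: reads the child's result through its read handle.
    os.close(write_fd)
    result = os.read(read_fd, 1024)
    os.close(read_fd)
    os.wait()
    print(result.decode().strip())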

Semaphores  

Interprocess synchronization is required whenever shared-resource processing is being used.  It defines the mechanisms to ensure that concurrent processes or threads do not interfere with  one another. Access to the shared resource must be serialized in an agreed upon manner.  Semaphores are the services used to provide this synchronization.  

Semaphores may use disk or D-RAM to store their status. The disk is the most reliable and slowest but is necessary when operations must be backed out after failure and before restart. D-RAM is faster but suffers from a loss of integrity when there is a system failure that causes D-RAM to be refreshed on recovery. Many large operations use a combination of the two: disk to record start and end, and D-RAM to manage in-flight operations.
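
As an illustration of the memory-based (D-RAM) case, the sketch below uses a Python threading semaphore to serialize updates to a shared balance; the variable names and counts are assumptions. A disk-backed semaphore would instead record its state in a file so that work could be backed out after a failure.

import threading

balance = 0
sem = threading.Semaphore(1)       # in-memory semaphore, initial count 1

def deposit(amount, times):
    global balance
    for _ in range(times):
        with sem:                  # acquire before touching the shared resource
            balance += amount      # critical section; released on block exit

threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)                     # 400000; without the semaphore, concurrent
                                   # updates could be lost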

Shared Memory  

Shared memory provides IPC when the memory is allocated in a named segment. Any  process that knows the named segment can share it. Each process is responsible for  implementing synchronization techniques to ensure integrity of updates. Tables are typically  implemented in this way to provide rapid access to information that is infrequently updated.  
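
A minimal sketch using Python's multiprocessing.shared_memory (Python 3.8+); the segment name and contents are assumptions. Any process that knows the name can attach, and the processes must still synchronize their own updates, for example with a semaphore.

from multiprocessing import shared_memory

# Writer: creates a named segment and places a small table entry in it.
segment = shared_memory.SharedMemory(name="rate_table", create=True, size=64)
segment.buf[:5] = b"7.25%"                 # illustrative content

# Reader (could be a separate process): attaches by name and reads it.
view = shared_memory.SharedMemory(name="rate_table")
print(bytes(view.buf[:5]).decode())        # -> 7.25%

view.close()
segment.close()
segment.unlink()                           # remove the named segment when done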

Queues  

Queues provide IPC by enabling multiple processes to add information to a queue and a  single process to remove information. In this way, work requests can be generated and  performed asynchronously. Queues can operate within a machine or between machines across  a LAN or WAN. File servers use queues to collect data access requests from many clients.  
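
A short sketch of the many-producers, one-consumer pattern using Python's multiprocessing.Queue; the process counts and request text are assumptions. The same idea extends across machines when the queue is provided by a network service rather than the local operating system.

import multiprocessing as mp

def client(queue, client_id):
    # Many client processes add work requests asynchronously.
    queue.put(f"read record {client_id}")

def server(queue, expected):
    # A single server process removes and services requests in turn.
    for _ in range(expected):
        request = queue.get()
        print("servicing:", request)

if __name__ == "__main__":
    work_queue = mp.Queue()
    clients = [mp.Process(target=client, args=(work_queue, i)) for i in range(3)]
    srv = mp.Process(target=server, args=(work_queue, 3))
    for c in clients:
        c.start()
    srv.start()
    for c in clients:
        c.join()
    srv.join()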

Dynamic Data Exchange  

Through a set of APIs, Windows and OS/2 provide calls that support the Dynamic Data Exchange (DDE) protocol for message-based exchanges of data among applications. DDE can be used to construct hot links between applications in which data can be fed from window to window without user intervention. For example, a hot link can be created between a 3270 screen session and a word processing document. Data is linked from the 3270 window into the word processing document. Whenever the key of the data in the screen changes, the data linked into the document changes too. The key of the 3270 screen transaction, Account Number, can be linked into a LAN database. As new account numbers are added to the LAN database, new 3270 screen sessions are created, and the relevant information is linked into the word processing document. This document then can be printed to create the acknowledgment letter for the application.

DDE supports warm links, created so the server application notifies the client that the data has changed and the client can issue an explicit request to receive it. This type of link is attractive when the volume of changes to the server data is so great that the client prefers not to be burdened with the repetitive processing. If the server link ceases to exist at some point, use a warm rather than hot link to ensure that the last data iteration is available.

You can create request links to enable direct copy-and-paste operations between a server and  client without the need for an intermediate clipboard. No notification of change in data by the  server application is provided.  

You define execute links to cause the execution of one application to be controlled by  another. This provides an easy-to-use batch-processing capability.  

DDE provides powerful facilities to extend applications. These facilities, available to the  desktop user, considerably expand the opportunity for application enhancement by the user  owner. Organizations that wish to integrate desktop personal productivity tools into their  client/server applications should insist that all desktop products they acquire be DDE capable. 


Remote Procedure Calls  

Good programmers have developed modular code using structured techniques and subroutine  logic for years. Today, these subroutines should be stored "somewhere" and made available  to everyone with the right to use them. RPCs provide this capability; they standardize the way  programmers must write calls to remote procedures so that the procedures can recognize and  respond correctly.  

If an application issues a functional request and this request is embedded in an RPC, the  requested function can be located anywhere in the enterprise the caller is authorized to  access. Client/server connections for an RPC are established at the session level in the OSI  stack. Thus, the RPC facility provides for the invocation and execution of requests from  processors running different operating systems and using different hardware platforms from  the caller's. The standardized request form provides the capability for data and format  translation in and out. These standards are evolving and being adopted by the industry.  
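
The flavor of an RPC call can be sketched with Python's standard xmlrpc modules; this is not Sun RPC or the OSF DCE RPC discussed below, and the function name, port, and account number are assumptions. The point is that the client invokes what looks like a local procedure while the RPC layer handles marshalling, transport, and translation.

from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

# Server side: register an ordinary function so remote callers can invoke it.
def get_balance(account_id):                  # hypothetical business function
    return {"account": account_id, "balance": 1042.17}

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(get_balance)
# server.serve_forever()                      # uncomment to run the service

# Client side: the call reads like a local call; arguments and results are
# marshalled across the network regardless of the server's platform.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# print(proxy.get_balance("12-3456"))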

Sun RPC, originally developed by Netwise, was the first major RPC implementation. It is the  most widely implemented and available RPC today. Sun includes this RPC as part of their  Open Network Computing (ONC) toolkit. ONC provides a suite of tools to support the  development of client/server applications.  

The Open Software Foundation (OSF) has selected the Hewlett-Packard (HP) and Apollo  RPC to be part of its distributed computing environment (DCE). This RPC—based on  Apollo's Network Computing System (NCS)—is now supported by Digital Equipment  Corporation, Microsoft, IBM, Locus Computing Corp., and Transarc. OSI also has proposed  a standard for RPC-like functions called Remote Operation Service (ROSE). The selection  by OSF likely will make the HP standard the de facto industry standard after 1994.  Organizations wishing to be compliant with the OSF direction should start to use this RPC  today.  

Organizations that want to build applications with the capability to use RPCs can create an  architecture as part of their systems development environment (SDE) to support the standard  RPC when it is available for their platform. All new development should include calls to the  RPC by way of a standard API developed for the organization. With a minimal investment in  such an API, the organization will be ready to take advantage of the power of their RPC as it  becomes generally available, with very little modification of applications required.  

When a very large number of processes are invoked through RPCs, performance will become  an issue and other forms of client/server connectivity must be considered. The preferred  method for high-performance IPC involves the use of peer-to-peer messaging. This is not the  store-and-forward messaging synonymous with e-mail but a process-to-process  communications with an expectation of rapid response (without the necessity of stopping  processing to await the result).  

The Mach UNIX implementation developed at Carnegie Mellon is the first significant  example of a message-based operating system. Its performance and functionality have been  very attractive for systems that require considerable interprocess communications. The NeXT  operating system takes advantage of this message-based IPC to implement an object-oriented  operating system. 


The advantage of this process-to-process communication is evident when processors are  involved in many simultaneous processes. It is evident how servers will use this capability;  however, the use in the client workstation, although important, is less clear. New client  applications that use object-level relationships between processes provide considerable  opportunity and need for this type of communication. For example, in a text-manipulation  application, parallel processes to support editing, hyphenation, pagination, indexing, and  workgroup computing may all be active on the client workstation. These various tasks must  operate asynchronously for the user to be effective.  

A second essential requirement is object-level linking. Each process must view the  information through a consistent model to avoid the need for constant conversion and  subsequent incompatibilities in the result.  

NeXTStep, the NeXT development environment and operating system, uses PostScript and the Standard Generalized Markup Language (SGML) to provide a consistent user and application view of textual information. IBM's peer-to-peer specification, LU6.2, provides support for parallel sessioning, thus reducing much of the overhead associated with many RPCs, that is, the establishment of a session for each request. IBM has licensed this technology for use in its implementation of OSF/1.

RPC technology is here and working, and should be part of every client/server implementation. As we move into OLTP and extensive use of multitasking workgroup environments, the use of message-based IPCs will be essential. DEC's implementation is called DECmessageQ and is a part of its Application Control Architecture. The Object Management Group (OMG) has released a specification for an object request broker that defines the messaging and RPC interface for heterogeneous operating systems and networks. The OMG specification is based on several products already in the marketplace, specifically HP's NewWave with Agents and the RPCs from HP and Sun. Organizations that want to design applications to take advantage of these facilities as they become available can gain considerable insight by analyzing the NewWave agent process. Microsoft has entered into an agreement with HP to license this software for inclusion in Windows NT.

Object Linking and Embedding  

OLE is designed to let users focus on data—including words, numbers, and graphics—rather  than on the software required to manipulate the data. A document becomes a collection of  objects, rather than a file; each object remembers the software that maintains it. Applications  that are OLE-capable provide an API that passes the description of the object to any other  application that requests the object.  

Wide Area Network Technologies  

WAN bandwidth for data communications is a critical issue. In terminal-to-host networks,  traffic generated by applications could be modeled, and the network would then be sized  accordingly, enabling effective use of the bandwidth. With LAN interconnections and  applications that enable users to transfer large files (such as through e-mail attachments) and  images, this modeling is much harder to perform.  

"Bandwidth-on-demand" is the paradigm behind these emerging technologies. Predictability  of applications requirements is a thing of the past. As application developers get tooled for 

Class: MSc(SE)SY Unit III Sub: Client Server Technology 

rapid application development and as system management facilities enable easy deployment  of these new applications, the lifecycle of network redesign and implementation is  dramatically shortened. In the short term, the changes are even more dramatic as the  migration from a host-centric environment to a distributed client/server environment prevents  the use of any past experience in "guessing" the actual network requirements.  

Network managers must cope with these changes by seeking those technologies that will let  them acquire bandwidth cost effectively while allowing flexibility to serve these new  applications. WAN services have recently emerged that address this issue by providing the  appropriate flexibility inherently required for these applications.  

Distance-insensitive pricing seems to emerge as virtual services are introduced. When one  takes into account the tremendous amount of excess capacity that the carriers have built into  their infrastructure, this is not as surprising as it would seem. This will enable users and  systems architects to become less sensitive to data and process placement when designing an  overall distributed computing environment.  

Frame Relay  

Frame Relay network services are contracted by selecting two components: an access line  and a committed information rate (CIR). This CIR speed is the actual guaranteed throughput  you pay for. However, Frame Relay networks enable you, for example, to exceed this  throughput at certain times to allow for efficient file transfers.  
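
The relationship between the access line, the CIR, and bursting can be sketched with the usual Frame Relay arithmetic (committed burst Bc = CIR x Tc). The subscription values below are assumptions, and treating the remaining access-line capacity as the excess burst is one common simplification rather than part of the original text.

# Assumed subscription: 256-kbps access line, 64-kbps CIR, 1-second interval.
access_line_bps = 256_000
cir_bps = 64_000
tc_seconds = 1.0

bc_bits = cir_bps * tc_seconds       # committed burst: always delivered
# Excess burst: frames above the CIR, up to the access-line rate, are
# delivered only when the network has spare capacity and may be marked
# discard-eligible.
be_bits = (access_line_bps - cir_bps) * tc_seconds

print(f"guaranteed per interval : {bc_bits / 8_000:.0f} KB")
print(f"best-effort headroom    : {be_bits / 8_000:.0f} KB")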

Frame Relay networks are often qualified as virtual private networks. They share a public  infrastructure but implement virtual circuits between the senders and the receivers, similar to  actual circuits. It is therefore a connection-oriented network. Security is provided by defining  closed user groups, a feature that prevents devices from setting up virtual connections to  devices they are not authorized to access.  

Figure 5.10 illustrates a typical scenario for a frame relay implementation. This example is  being considered for use by the Los Angeles County courts for the ACTS project, as  described in Appendix A.  

Switched Multimegabit Data Service (SMDS)  

SMDS is a high-speed service based on cell relay technology, using the same 53-byte cell  transmission fabric as ATM. It also enables mixed data, voice, and video to share the same  network fabric. Available from selected RBOCs as a wide-area service, it supports high  speeds well over 1.5 Mbps, and up to 45 Mbps.  

SMDS differs from Frame Relay in that it is a connectionless service. Destinations and throughput to those destinations do not have to be predefined. Currently under trial by major corporations, SMDS—at speeds that match current needs of customers—is a precursor to ATM services.

ATM in the Wide Area Network  

The many advantages of ATM were discussed earlier in the chapter. Although not yet available as a service from the carriers, ATM will soon be possible if built on private infrastructures.


Private networks have traditionally been used in the United States for high-traffic networks  with interactive performance requirements. Canada and other parts of the world have more  commonly used public X.25 networks, for both economic and technical reasons. With the  installation of digital switching and fiber-optic communication lines, the telephone  companies now find themselves in a position of dramatic excess capacity. Figure 5.11  illustrates the cost per thousand bits of communication. What is interesting is not the unit  costs, which continue to decline, but the ratio of costs per unit when purchased in the various  packages. Notice that the cost per byte for a T1 circuit is less than 1/5 the cost of a 64-Kbps  circuit. In a T3 circuit package, the cost is 1/16.  

In reality, the telephone company's costs are to provide the service, initiate the call, and bill for it. There is no particular difference in the cost for distance and little in the cost for capacity. British Telecom has recently started offering a service with distance-insensitive pricing.

LANs provide a real opportunity to realize these savings. Every workstation on the LAN  shares access to the wide-area facilities through the router or bridge. If the router has access  to a T1 or T3 circuit, it can provide service on demand to any of the workstations on the  LAN. This means that a single workstation can use the entire T1 for the period needed to  transmit a document or file.  

As Figure 5.12 illustrates, this bandwidth becomes necessary if the transmission involves  electronic documents. The time to transmit a character screen image is only 0.3 seconds with  the 64-Kbps circuit. Therefore, increasing the performance of this transmission provides no  benefit. If the transmission is a single-page image, such as a fax, the time to transmit is 164  seconds. This is clearly not an interactive response. Using a T1 circuit, the time reduces to  only 5.9 seconds, and with a T3, to 0.2 seconds. If this image is in color, the times are 657  seconds compared to 23.5 and 0.8 seconds. In a client/server database application where the  answer set to a query might be 10M, the time to transmit is 1,562 seconds (compared to 55.8  and 1.99 seconds).  
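
The arithmetic behind these comparisons is simply payload size in bits divided by line rate. The sketch below reproduces the idea with assumed payload sizes and nominal rates, so its numbers indicate orders of magnitude rather than the exact figures quoted from Figure 5.12, which include overhead assumptions not reproduced here.

# Transfer time ~ size_in_bits / line_rate. Payload sizes are assumptions.
payloads_bits = {
    "character screen (2 KB)": 2_000 * 8,
    "page image (1.3 MB)": 1_300_000 * 8,
    "10 MB query answer set": 10_000_000 * 8,
}
rates_bps = {
    "64 Kbps": 64_000,
    "T1 (1.544 Mbps)": 1_544_000,
    "T3 (45 Mbps)": 45_000_000,
}

for name, bits in payloads_bits.items():
    times = ", ".join(f"{label}: {bits / rate:,.1f} s"
                      for label, rate in rates_bps.items())
    print(f"{name:26s} {times}")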

When designing the architecture of the internetwork, it is important to take into account the  communications requirements. This is not just an issue of total traffic, but also of  instantaneous demand and user response requirements. ATM technologies will enable the use  of the same lines for voice, data, or video communications without preallocating exclusive  portions of the network to each application. 

Integrated Services Digital Network  

ISDN is a technology that enables digital communications to take place between two systems  in a manner similar to using dial-up lines. Connections are established over the public phone  network, but they provide throughput of up to 64 Kbps. ISDN has two basic components:  

B-Channel: These two channels (hence the name of 2B+D for basic rate ISDN)  provide communication services for either voice or data switched service. Data can be  transmitted in any communications protocol.  

D-Channel Signaling: This channel is used by the terminal equipment to control call  setup and disconnection. It is much more efficient than call control of a dial-up line;  the time required to set up a call is typically less than three seconds. 


ISDN Applications  

ISDN can provide high-quality, high-performance services for remote access to a LAN. Working from the field or at home through ISDN, a workstation user can operate at 64 Kbps to the LAN rather than at typical modem speeds of only 9.6 Kbps. Similarly, workstation-to-host connectivity can be provided through ISDN at these speeds. Help desk support often requires the remote help desk operator to take control of, or share access to, the user workstation display. GUI applications transmit megabits of data to and from the monitor. This is acceptable in the high-performance, directly connected implementation usually found with a LAN-attached workstation, but such transmission is slow over a communications link.  

Multimedia applications offer considerable promise for future use of ISDN. The capability to send several kinds of information simultaneously over the same connection enables a telephone conversation, a video conference, and integrated workstation-to-workstation communications to proceed concurrently. Faxes, graphics, and structured data can all be communicated and made available to all participants in the conversation.  

Network Management  

When applications reside on a single central processor, the issues of network management assume great importance but often can be addressed by attentive operations staff. With the movement to client/server applications, processors may reside far away from this attentive support.  

If the data or application logic necessary to run the business resides at a location remote from the "glass house" central computer room, these resources must be visible to the network managers. The provision of a network control center (NCC) to manage all resources in a distributed network is the major challenge facing most large organizations today. Figure 5.13 illustrates the various capabilities necessary to build this management support. The range of services is much greater than that traditionally implemented in terminal-connected host applications. Many large organizations view this issue as the most significant obstacle to a successful rollout of client/server applications.  

The key layers in the management system architecture are as follows:  

1. Presentation describes the management console environment and the tools used there.  

2. Reduction refers to distributed intelligence that acts as an intermediary for the network management interface. Reduction enables information to be consolidated and filtered, allowing the presentation service to delegate tasks through emerging distributed program services such as RPC, DME, or SMP. These provide the following benefits (see the sketch after item 3 below):

- Response to problems and alerts can be executed locally, reducing latency and maintaining availability.
- Distributed intelligence can better serve a local environment; because smaller environments tend to be more homogeneous, the intelligence can be streamlined to reflect local requirements.
- Scalability with regard to geography and political or departmental boundaries allows for local control and bandwidth optimization.
- Management traffic overhead is reduced: because SNMP is a polling protocol, placing distributed facilities locally reduces the amount of polling carried over the more expensive wide-area internet. 


3. Gathering of information is done by device agents. Probably the greatest investment in establishing a base for the management network is in device management. Each device agent may contribute only the smallest piece of information, insignificant in the overall picture on its own; however, as network management tools evolve, the end result will be only as good as the information provided. These device agents provide detailed diagnostics, detailed statistics, and precise control.  
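To make the reduction and gathering layers concrete, here is a minimal sketch of the pattern: a local reduction agent polls its device agents, filters the readings against local thresholds, and forwards only significant alerts to the central console. The device names, metrics, and thresholds are hypothetical illustrations, not details from the text.

```python
# Illustrative sketch of the "reduction" layer: a local agent polls its
# device agents (gathering), filters the results, and forwards only
# significant alerts to the central management console (presentation).
# All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DeviceReading:
    device: str
    metric: str
    value: float

def poll_devices() -> list[DeviceReading]:
    """Stand-in for SNMP-style polling of local device agents."""
    return [
        DeviceReading("hub-01", "port_errors_per_min", 2.0),
        DeviceReading("router-01", "cpu_utilisation", 0.93),
        DeviceReading("server-01", "disk_free_ratio", 0.40),
    ]

def reduction_agent(forward: Callable[[DeviceReading], None]) -> None:
    """Consolidate and filter locally; only exceptions cross the WAN."""
    thresholds = {"port_errors_per_min": 5.0,
                  "cpu_utilisation": 0.90,
                  "disk_free_ratio": None}  # None means informational only
    for reading in poll_devices():
        limit = thresholds.get(reading.metric)
        if limit is not None and reading.value > limit:
            forward(reading)  # alert the central console over the WAN
        # otherwise handle or log locally, saving wide-area bandwidth

if __name__ == "__main__":
    reduction_agent(lambda r: print(f"ALERT -> NCC: {r.device} {r.metric}={r.value}"))
```

The point of the design is that routine polling stays on the LAN; only exceptions travel to the network control center, which is what keeps SNMP-style polling affordable across a wide-area internet.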

OSF defines many of the most significant architectural components for client/server  computing. The OSF selection of HP's Openview, combined with IBM's commitment to  OSF's DME with its Netview/6000 product, ensures that we will see a dominant standard for  the provision of network management services. There are five key OSI management areas:  

Fault management  

Performance management  

Inventory management  

Accounting management  

Configuration management  

The current state of distributed network and systems management shows serious weaknesses when compared to the management facilities available in the mainframe world today. With the adoption of Openview as the standard platform, together with products such as Remedy Corporation's Action Request System for problem tracking and process automation, Tivoli's framework for system administration, management, and security, and support applications from vendors such as Openvision, it is possible to implement effective distributed network and systems management today. The integration required, however, will create more difficulties than mainframe operations typically face.  

Standards organizations and the major vendors each provide their own solutions to this challenge. There is considerable truth in the axiom that "the person who controls the network controls the business." The selection of the correct management architecture for an organization is not straightforward and requires careful analysis of the existing and planned infrastructure. Voice, data, application, video, and other nonstructured data needs must all be considered. 

Hardware/Network Acquisition  

Before selecting client hardware for end users, most organizations should define standards for  classes of users. This set of standards simplifies the selection of the appropriate client  hardware for a user and allows buyers to arrange purchasing agreements to gain volume  pricing discounts.  
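As a purely hypothetical illustration of such standards (the class names and specifications below are assumptions, not figures from the text), a simple lookup keyed by user class is all that is needed to make client selection and volume purchasing mechanical:

```python
# Hypothetical "standards for classes of users": class names and specs
# are illustrative assumptions showing how a standard simplifies selection.
CLIENT_STANDARDS = {
    "data entry":       {"cpu": "486SX-33",  "ram_mb": 8,  "monitor_in": 15, "network": "Ethernet"},
    "knowledge worker": {"cpu": "486DX-50",  "ram_mb": 16, "monitor_in": 17, "network": "Ethernet"},
    "power user":       {"cpu": "Pentium-60","ram_mb": 32, "monitor_in": 21, "network": "FDDI / Fast Ethernet"},
}

def hardware_for(user_class: str) -> dict:
    """Look up the standard configuration for a class of user."""
    return CLIENT_STANDARDS[user_class]

print(hardware_for("knowledge worker"))
```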

There are a number of issues to consider when selecting the client workstation, including processor type, coprocessor capability, internal bus structure, size of the base unit, and so on. However, one of the most overlooked issues for client/server applications is the use of a GUI. GUI applications require VGA or better screen drivers. Screens larger than the 15-inch PC standard are required for users who normally display several active windows at one time; the more windows active on-screen, the larger the monitor viewing area requirements. The use of image, graphics, or full-motion video requires a large, very high-resolution screen for regular usage. It is important to remember that productivity is dramatically affected by the inability to read the screen easily all day; inappropriate resolution leads to fatigue and inefficiency.  

The enterprise on the desk requires that adequate bandwidth be available to keep the desktop user responsive. Regular access to off-LAN data calls for a router-based internetworking implementation; occasional off-LAN access can be handled by bridges. Routers provide the additional advantage of multiprotocol internetworking support, which is frequently necessary as organizations install 10BaseT Ethernet into an existing Token Ring environment. Fast Ethernet and FDDI are becoming more prevalent as multimedia applications are delivered.  

PC-Level Processing Units  

Client/server applications vary considerably in their client processing requirements and their I/O demands on the client processor and server. In general, clients that support protected-mode addressing should be purchased. This implies the use of 32-bit processors, perhaps with a 16-bit I/O bus if the I/O requirement is low. Low means the client isn't required to send and receive large amounts of data, such as images, which could be 100K bytes or larger, on a constant basis.  

As multiwindowed and multimedia applications become prevalent during 1994, many applications will require the bandwidth provided only by a 32-bit I/O bus using VESA VL-Bus or Intel PCI technology. Windowed applications require considerable processing power to provide adequate response levels. The introduction of application integration via DCE, OLE, and DOE significantly increases the processing requirements at the desktop. The recommended minimum configuration for desktop processors has the processing capacity of a 33Mhz Intel 486SX. By early 1995, the minimum requirement will be the processing capacity of a 50Mhz Intel 486DX or a 33Mhz Intel Pentium.  

Macintosh  

The Mac System 7 operating system is visually intuitive and provides the best productivity when response time to GUI operations is secondary. A Motorola 68040 with 8 Mbytes of RAM and a 120-Mbyte disk is recommended. By early 1995, the availability of PowerPC technology and the integration of System 7 with AIX and Windows will mean that users need considerably more processor capacity. Fortunately, the PowerPC will provide this at the same or lower cost than the existing Motorola technology.  

Notebooks  

Users working remotely on a regular basis may find that a notebook computer best satisfies their requirements. The notebook computer is the fastest-growing segment of the market today. The current technology in this area is available for Intel PC, Apple Macintosh, and SPARC UNIX processors. Because notebooks are "miniaturized," their disk drives are often not comparable to full-size desktop units. Thus, the relatively slower disk I/O on notebooks makes it preferable to install extra RAM and create "virtual" disk drives. 


A minimal configuration is a processor with the equivalent processing power of a 33Mhz Intel 486SX, 8 Mbytes of RAM, and 140 Mbytes of disk. In addition, the notebook with battery should weigh less than seven pounds and have a battery life of three hours. Color support is an option during 1994 but will be mandatory for all by 1995. If the application will run a remote GUI, it is also desirable to install software that compresses the GUI data stream and to use a V.32 modem at 9600 bps or a V.32bis modem at 14400 bps with V.42bis (or MNP5) compression. The effective throughput is two to three times the nominal rate because of compression, and V.42 (or MNP4) error correction enables these speeds to work effectively even during noisy line conditions. The introduction of PCMCIA technology makes credit-card-sized modems and flash memory available to upgrade the notebook.  
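A quick back-of-the-envelope check of the effective throughput claim, using the two-to-three-times factors quoted above; actual gains depend on how compressible the traffic is.

```python
# Effective modem throughput with V.42bis/MNP5 compression, assuming the
# 2x-3x gain quoted in the text; real results depend on the data.
NOMINAL_BPS = {"V.32": 9_600, "V.32bis": 14_400}

for modem, bps in NOMINAL_BPS.items():
    low, high = bps * 2, bps * 3
    print(f"{modem}: nominal {bps} bps, effective roughly {low}-{high} bps")
```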

Pen  

Pen-based clients provide the capability to operate applications using a pen to point and select  or write without the need for a mouse or keyboard. Frequently, they are used for verification,  selection, and inquiry applications where selection lists are available. Developers using this  technology use object-oriented software techniques that are RAM-intensive.  

The introduction of personal digital assistant (PDA) technology in 1993 has opened the  market to pocket size computing. During 1994, this technology will mature with increased  storage capacity through cheaper, denser RAM and flash memory technology. The screen  resolution will improve, and applications will be developed that are not dependent upon  cursive writing recognition.  

The PDA market is price-sensitive: the target is a $500-$1,000 device capable of running a Windows-like operating environment with 4MB of RAM, a 20Mhz Intel 486SX processor, and 8MB of flash memory. Devices with this capability will appear in 1994, and significant applications beyond personal diaries will be in use. During 1995, 16MB of RAM and 32MB of flash memory will begin to appear, enabling these devices to reach a mass market beyond 1996. In combination with wireless technology advances, the PDA will become the personal information source for electronic news, magazines, books, and so on. Your electronic Personal Wall Street Journal will follow you for access on your PDA.  

UNIX Workstation  

UNIX client workstations are used when the client processing needs are intensive; a UNIX client workstation provides more processing power than a PC client. In many applications requiring UNIX, however, X-terminals connected to a UNIX presentation server will be the clients of choice.  

The introduction of software from SunSoft, Insignia Solutions, and Locus Computing that  supports the execution of DOS and Windows 3.x applications in a UNIX window makes the  UNIX desktop available to users requiring software from both environments. The PowerPC  and Sparc technologies will battle for this marketplace. Both are expected to gain market  share from Intel during and after 1994.  

X-Terminals 


X-terminals provide the capability to perform only presentation services at the workstation. Processing services are provided by another UNIX, Windows 3.x, NT, OS/2 2.x, or VMS server. Database, communications, and application services are provided by the same or other servers in the network. The minimum memory configuration for an X-terminal used in a client/server application is 4-8 Mbytes of RAM, depending on the number of open windows.  

Server Hardware  

Server requirements vary according to the complexity of the application and the distribution  of work. Because servers are multiuser devices, the number of active users is also a major  sizing factor. Servers that provide for 32-bit preemptive multitasking operating systems with  storage protection are preferred in the multiuser environment.  

Intel-based tower PCs and Symmetric Multi-Processors (SMPs) are commonly used for workgroup LANs with file and application service requirements. Most PC vendors provide a 66Mhz Intel 486DX or an Intel Pentium for this market in 1994. SMP products are provided by vendors such as IBM, Compaq, and NetFrame. Traditional UNIX vendors, such as Sun, HP, IBM, and Pyramid, provide server hardware for applications that require UNIX stability and capacity: database and application servers and large workgroup file services.  

The SMP products, in conjunction with RAID disk technology, can be configured to provide  mainframe level reliability for client/server applications. It is critical that the server be  architected as part of the systems management support strategy to achieve this reliability. 

