Electric Lighting
I
INTRODUCTION
Electric Lighting, illumination by means of any of a number of devices that convert electrical energy into light. The types of electric lighting devices most commonly used are the incandescent lamp, the fluorescent lamp, the various types of arc and electric-discharge vapor lamps (see Electric Arc), and light-emitting diodes.
II
TECHNOLOGY OF ELECTRIC LIGHTING
If an electric current is passed through any conductor other than a perfect one, a certain amount of energy is expended as heat in the conductor (see Conductor, Electrical). Because any heated body gives off a certain amount of light at temperatures above 525° C (977° F), a conductor heated above that temperature by an electric current will act as a light source. The incandescent lamp consists of a filament of a material with a high melting point sealed inside a glass bulb from which the air has been evacuated or which is filled with an inert gas. Filaments with high melting points must be used because the proportion of light energy to heat energy radiated by the filament rises as the temperature increases, so the most efficient light source is obtained at the highest practical filament temperature. Carbon filaments were employed in the first practical incandescent lamps, but modern lamps are universally made with filaments of fine tungsten wire (see Tungsten), which has a melting point of 3422° C (6192° F). The filament must be enclosed in a vacuum or an inert atmosphere; otherwise, the heated filament would react chemically with the surrounding air. Filling incandescent lamps with an inert gas instead of using a vacuum slows evaporation of the filament, prolonging the life of the lamp. Most modern incandescent lamps are filled with a mixture of argon or krypton and a small amount of nitrogen.
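The link between filament temperature and useful light output can be made concrete with Wien's displacement law, which gives the wavelength at which a hot body radiates most strongly. The sketch below is illustrative only; the filament temperatures are rough assumed values, not measured figures.

```python
# Illustrative sketch: why hotter filaments make better light sources.
# Wien's displacement law locates the peak of a blackbody's emission;
# the filament temperatures below are rough, assumed values.

WIEN_CONSTANT = 2.898e-3  # meter-kelvins


def peak_wavelength_nm(temp_kelvin):
    """Peak emission wavelength of a blackbody, in nanometers."""
    return WIEN_CONSTANT / temp_kelvin * 1e9


for name, temp in [("carbon filament (~2100 K)", 2100),
                   ("tungsten filament (~3300 K)", 3300)]:
    print(f"{name}: peak emission near {peak_wavelength_nm(temp):.0f} nm")

# Visible light spans roughly 400-700 nm; the hotter tungsten filament
# radiates a larger fraction of its energy in the visible band.
```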
Radical changes in incandescent lamp design have resulted from substituting compact fused-quartz tubes for ordinary glass bulbs. These stronger-walled bulbs made possible the tungsten-halogen lamp, a variation of the incandescent lamp. Tungsten-halogen lamps use the regenerative halogen cycle to return evaporated tungsten particles to the filament, extending the life of the bulb. The high temperatures required to sustain this regenerative cycle made the design impractical until quartz provided bulb walls strong enough to withstand them. These bulbs are filled with a mixture of argon and a halogen gas (usually bromine), along with a small amount of nitrogen.
III
TYPES OF LAMPS
Electric-discharge lamps depend on the ionization of vapors or gases at low pressure, and the resulting electric discharge, when an electric current is passed through them (see Ion). Representative examples are the mercury-vapor arc lamp, which gives an intense blue-green light and is used for photographic and roadway illumination, and the neon lamp, which is employed for decorative sign and display lighting. In newer electric-discharge lamps, other metals are added to the mercury, and phosphor coatings are applied to the enclosing bulbs, to improve color and efficacy. Glasslike, translucent ceramic tubes have led to high-pressure sodium-vapor lamps of unprecedented lighting power.
The fluorescent lamp is another type of electric-discharge device used for general-purpose illumination. It is a low-pressure mercury-vapor lamp contained in a glass tube that is coated on the inside with a fluorescent material known as a phosphor. Much of the radiation from the arc is invisible ultraviolet light, which is converted to visible light when it strikes the phosphor. Fluorescent lamps have several important advantages. By choosing the proper type of phosphor, the light from such lamps can be made to approximate the quality of daylight. In addition, the efficiency of the fluorescent lamp is high: a fluorescent tube drawing 40 watts produces about as much light as a 150-watt incandescent bulb. Because of this efficiency, fluorescent lamps produce far less heat than incandescent bulbs for comparable light output.
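To see why a 40-watt fluorescent tube can match a 150-watt incandescent bulb, it helps to compare luminous efficacy, the light output per unit of electrical power. The following is a minimal sketch; the lumen figures are assumed typical values, not manufacturer data.

```python
# A minimal sketch comparing luminous efficacy (lumens per watt).
# The lumen figures are assumed, typical values for illustration only.

lamps = {
    "40 W fluorescent tube": (40, 2400),     # watts, assumed lumens
    "150 W incandescent bulb": (150, 2600),  # watts, assumed lumens
}

for name, (watts, lumens) in lamps.items():
    print(f"{name}: {lumens / watts:.0f} lm/W")

# The fluorescent tube delivers comparable light for roughly a quarter
# of the power, so far less of the input energy is wasted as heat.
```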
One advance in the field of electric lighting is the use of electroluminescence, known commonly as panel lighting. In panel lighting, particles of phosphor are suspended in a thin layer of nonconducting material such as plastic. This layer is sandwiched between two plate conductors, one of which is a translucent substance, such as glass, coated on the inside with a thin film of tin oxide. With the two conductors acting as electrodes, an alternating current is passed through the phosphor, causing it to luminesce. Luminescent panels may serve a variety of purposes—for example, to illuminate clock and radio dials, to outline the risers in staircases, and to provide luminous walls. The use of panel lighting is restricted, however, because the current requirements for large installations are excessive. See Luminescence.
A number of different kinds of electric lamps have been developed for such special purposes as photography and floodlighting. These bulbs are generally shaped to act as reflectors when coated with an aluminum mirror (see Optics). One such lamp is the photoflood bulb, an incandescent lamp that is operated at a temperature higher than normal to obtain greater light output. The life of these bulbs is limited to 2 or 3 hours, as opposed to that of the ordinary incandescent bulb, which lasts from 750 to 1,000 hours. Photoflash bulbs used for high-speed photography produce a single high-intensity flash of light, lasting a few hundredths of a second, by the ignition of a charge of crumpled aluminum foil or fine aluminum wire inside an oxygen-filled glass bulb. The foil is ignited by the heat of a small filament in the bulb. Increasingly popular among photographers is the high-speed gas-discharge stroboscopic lamp known as an electronic flash. See Stroboscope.
IV
LIGHT-EMITTING DIODES
Light-emitting diodes (LEDs) are devices that emit visible light when an electric current passes through them (see Diode; Electromagnetic Radiation). LEDs are made of semiconductors, materials whose ability to conduct electricity lies between that of conductors and insulators. Many LEDs emit visible light directly; in others, the radiation emitted by the semiconductor is absorbed by phosphors, substances that absorb electromagnetic radiation and reemit it as visible light (see Luminescence). The visible emission is useful for indicator lamps and alphanumeric displays in various electronic devices and appliances.
Organic light-emitting diodes (OLEDs) have the potential to replace incandescent and fluorescent lamps because of their greater energy efficiency and longer lives. Currently, OLEDs are up to 75 percent more efficient than incandescent lamps at similar brightnesses, and they are used in cellular telephone displays and MP3 players. In 2006, scientists reported a breakthrough in OLED technology that could enable these light-emitting diodes to replace lamps and other types of lighting in homes and offices. Because OLED panels are made of wafer-thin layers of plastic and give off little or no heat, they could be built into walls, ceilings, or even furniture to light a room in place of the traditional lamp. Researchers suggested the technology could, in principle, approach 100 percent efficiency in converting electricity to light.
V
HISTORY
The earliest experiments in electric lighting were conducted by British chemist Sir Humphry Davy, who produced electric arcs and also made a fine platinum wire incandescent in air by passing a current through it. Beginning about 1840 a number of incandescent lamps were patented. None was commercially successful, however, both because the vacuum pumps of the time could not create a vacuum good enough to protect the wire filaments and because electricity was expensive to obtain. In 1878 and 1879, British inventor Joseph Swan and American inventor Thomas Edison independently developed the carbon-filament lamp. Improved vacuum pumps and the increased availability of electricity made these lamps a success. During the same period various arc lamps were introduced. The first practical arc lamp was installed in a lighthouse at Dungeness, England, in 1862. The American pioneer in electrical engineering Charles Francis Brush produced the first commercially successful arc lamp in 1878. Tungsten filaments were substituted for carbon filaments in incandescent lamps in 1907, and gas-filled incandescent lamps were developed in 1913. The fluorescent lamp was introduced in 1938. See also Lamp.
Computer Security
I
INTRODUCTION
Computer Security, techniques developed to safeguard information and information systems stored on computers. Potential threats include the destruction of computer hardware and software and the loss, modification, theft, unauthorized use, observation, or disclosure of computer data.
Computers and the information they contain are often considered confidential systems because their use is typically restricted to a limited number of users. This confidentiality can be compromised in a variety of ways. For example, computers and computer data can be harmed by people who spread computer viruses and worms. A computer virus is a set of computer program instructions that attaches itself to programs in other computers. Viruses often travel as parts of documents transmitted as attachments to e-mail messages. A worm is similar to a virus but is a self-contained program that transports itself from one computer to another through networks. Thousands of viruses and worms exist, and they can quickly contaminate millions of computers.
People who intentionally create viruses are computer experts often known as hackers. Hackers also violate confidentiality by observing computer monitor screens and by impersonating authorized users of computers in order to gain access to the users’ computers. They invade computer databases to steal the identities of other people by obtaining private, identifying information about them. Hackers also engage in software piracy and deface Web sites on the Internet. For example, they may insert malicious or unwanted messages on a Web site, or alter graphics on the site. They gain access to Web sites by impersonating Web site managers.
Malicious hackers are increasingly developing powerful software crime tools such as automatic computer virus generators, Internet eavesdropping sniffers, password guessers, vulnerability testers, and computer service saturators. For example, an Internet eavesdropping sniffer intercepts Internet messages sent to other computers. A password guesser tries millions of combinations of characters in an effort to guess a computer’s password. Vulnerability testers look for software weaknesses. These crime tools are also valuable security tools used for testing the security of computers and networks.
An increasingly common hacker tool that has gained widespread public attention is the computer service saturator, used in denial-of-service attacks, which can shut down a targeted computer on the Internet by bombarding it with more requests than it can handle. This tool first searches for vulnerable computers on the Internet where it can install its own software program. Once installed, the compromised computers act like “zombies,” sending usage requests to the target computer. If thousands of computers become infected with the software, all of them send usage requests at once, overwhelming the target’s ability to handle the requests for service.
A variety of simple techniques can help prevent computer crimes, such as protecting computer screens from observation, keeping printed information and computers in locked facilities, backing up copies of data files and software, and clearing desktops of sensitive information and materials. Increasingly, however, more sophisticated methods are needed to prevent computer crimes. These include using encryption techniques, establishing software usage permissions, mandating passwords, and installing firewalls and intrusion detection systems. In addition, controls within application systems and disaster recovery plans are also necessary.
II
BACKUP
Storing backup copies of software and data and having backup computer and communication capabilities are important basic safeguards because the data can then be restored if it is altered or destroyed by a computer crime or accident. Computer data should be backed up frequently, and the copies should be stored in secure locations away from the primary site so that they survive any damage there. Transporting sensitive data to storage locations should also be done securely.
III
ENCRYPTION
Another technique to protect confidential information is encryption. Computer users can scramble information to prevent unauthorized users from accessing it. Authorized users can unscramble the information when needed by using a secret code called a key. Without the key, the scrambled information would be impossible or very difficult to unscramble. A more complex form of encryption uses two keys, called the public key and the private key. Each participant possesses a private key, which is kept secret, and a public key, which is known to potential correspondents. A message scrambled with a recipient’s public key can be unscrambled only with the matching private key, so only the intended recipient can read it. Because private keys are never shared, they cannot be intercepted, which is the chief advantage over the single-key method. In addition, a message scrambled with the sender’s private key can be verified with the sender’s public key, confirming who transmitted it. The keys are changed periodically, further hampering unauthorized unscrambling and making the encrypted information more difficult to decipher.
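The single-key idea can be sketched with a toy scrambler. The example below uses a simple XOR operation purely for illustration; it is not a secure cipher, and real systems rely on vetted algorithms such as AES.

```python
# A toy sketch of single-key ("secret key") scrambling using XOR.
# Illustration only: never use a scheme like this in a real system.
import secrets


def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    return bytes(b ^ k for b, k in zip(data, key))


message = b"transfer approved"
key = secrets.token_bytes(len(message))  # the shared secret key

scrambled = xor_bytes(message, key)
unscrambled = xor_bytes(scrambled, key)  # the same key reverses the XOR
assert unscrambled == message

print(scrambled.hex())  # unreadable without the key
```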
IV
APPROVED USERS
Another technique to help prevent abuse and misuse of computer data is to limit the use of computers and data files to approved persons. Security software can verify the identity of computer users and limit their privileges to use, view, and alter files. The software also securely records their actions to establish accountability. Military organizations give access rights to classified, confidential, secret, or top-secret information according to the corresponding security clearance level of the user. Other types of organizations also classify information and specify different degrees of protection.
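A minimal sketch of how security software might check usage privileges follows; the users, file names, and privilege levels are invented for illustration.

```python
# A minimal sketch of permission checking against an access-control
# list. Users, files, and privilege levels are made up for illustration.
PRIVILEGES = {"view": 1, "use": 2, "alter": 3}  # ordered by power

acl = {  # (user, file) -> the highest privilege that user holds
    ("alice", "payroll.dat"): "alter",
    ("bob", "payroll.dat"): "view",
}


def allowed(user: str, filename: str, action: str) -> bool:
    held = acl.get((user, filename))
    return held is not None and PRIVILEGES[held] >= PRIVILEGES[action]


print(allowed("bob", "payroll.dat", "view"))   # True
print(allowed("bob", "payroll.dat", "alter"))  # False: not approved
```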
V
PASSWORDS
Passwords are confidential sequences of characters that allow approved persons to make use of specified computers, software, or information. To be effective, passwords must be difficult to guess and should not be found in dictionaries. Effective passwords contain a variety of characters and symbols, including some that are not part of the alphabet. To thwart impostors, computer systems usually limit the number of attempts allowed and restrict the time permitted to enter the correct password.
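A minimal sketch of password checking with a limited number of attempts follows. The stored password, the guesses, and the attempt limit are invented; real systems store salted, deliberately slow hashes rather than a bare digest.

```python
# A minimal sketch of limited-attempt password checking.
# The password, guesses, and limit are illustrative only.
import hashlib
import hmac

stored_hash = hashlib.sha256(b"correct horse battery").hexdigest()
MAX_ATTEMPTS = 3


def check_password(attempt: str) -> bool:
    attempt_hash = hashlib.sha256(attempt.encode()).hexdigest()
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(attempt_hash, stored_hash)


guesses = ["password", "letmein", "12345", "correct horse battery"]
for n, guess in enumerate(guesses, 1):
    if n > MAX_ATTEMPTS:
        print("too many attempts: account locked")
        break
    verdict = "accepted" if check_password(guess) else "rejected"
    print(f"attempt {n}: {verdict}")
```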
A more secure method is to require possession and use of tamper-resistant plastic cards with microprocessor chips, known as “smart cards,” which contain a stored password that automatically changes after each use. When a user logs on, the computer reads the card’s password, as well as another password entered by the user, and matches these two respectively to an identical card password generated by the computer and the user’s password stored in the computer in encrypted form. Use of passwords and smart cards is beginning to be reinforced by biometrics, identification methods that use unique personal characteristics such as fingerprints, retinal patterns, facial characteristics, or voice recordings.
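A password that changes after each use can be sketched with a one-time-password construction. The example below follows the HOTP scheme (RFC 4226) as an illustration; the shared secret is made up, and actual smart cards may use different schemes.

```python
# A sketch of a changing one-time password, in the spirit of a smart
# card whose stored password changes after each use. This follows the
# HOTP construction (RFC 4226); the secret is illustrative.
import hashlib
import hmac


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = counter.to_bytes(8, "big")
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


secret = b"shared-card-secret"
for use in range(3):  # a fresh password for each use
    print(hotp(secret, use))
```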
VI
FIREWALLS
Computers connected to communication networks, such as the Internet, are particularly vulnerable to electronic attack because so many people have access to them. These computers can be protected by using firewall computers or software placed between the networked computers and the network. The firewall examines, filters, and reports on all information passing through the network to ensure its appropriateness. These functions help prevent saturation of input capabilities that otherwise might deny usage to legitimate users, and they ensure that information received from an outside source is expected and does not contain computer viruses.
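The core filtering idea can be sketched as an ordered list of rules with a default-deny policy; the addresses, ports, and rules below are invented for illustration, and real firewalls inspect far more than source address and port.

```python
# A minimal sketch of rule-based packet filtering, the core idea of a
# firewall. Addresses, ports, and rules are made up for illustration.
import ipaddress

RULES = [  # checked in order; the first matching rule wins
    ("allow", "10.0.0.0/8", 443),  # internal traffic to HTTPS
    ("deny",  "0.0.0.0/0", 23),    # block telnet from anywhere
    ("allow", "0.0.0.0/0", 80),    # web traffic from anywhere
]


def filter_packet(src_ip: str, dst_port: int) -> str:
    for action, network, port in RULES:
        if (dst_port == port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(network)):
            return action
    return "deny"  # default-deny: anything unmatched is rejected


print(filter_packet("10.1.2.3", 443))    # allow
print(filter_packet("203.0.113.9", 23))  # deny
print(filter_packet("198.51.100.4", 22)) # deny (no matching rule)
```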
VII
INTRUSION DETECTION SYSTEMS
Security software called intrusion detection systems may be used in computers to detect unusual and suspicious activity and, in some cases, stop a variety of harmful actions by authorized or unauthorized persons. These systems can detect abuse and misuse of sensitive system and application programs and data, such as password, inventory, financial, engineering, and personnel files.
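One simple detection rule, flagging repeated failed logins in an activity log, can be sketched in a few lines; the log records and the threshold are invented for illustration.

```python
# A minimal sketch of intrusion detection: flag users whose failed
# logins exceed a threshold. Log records and threshold are invented.
from collections import Counter

log = [  # (user, event) records, e.g. parsed from a system log
    ("alice", "login_ok"), ("mallory", "login_fail"),
    ("mallory", "login_fail"), ("mallory", "login_fail"),
    ("bob", "login_ok"), ("mallory", "login_fail"),
]

FAIL_THRESHOLD = 3
failures = Counter(user for user, event in log if event == "login_fail")

for user, count in failures.items():
    if count >= FAIL_THRESHOLD:
        print(f"suspicious activity: {user} had {count} failed logins")
```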
VIII
APPLICATION SAFEGUARDS
The most serious threats to the integrity and authenticity of computer information come from those who have been entrusted with usage privileges and yet commit computer fraud. For example, authorized persons may secretly transfer money in financial networks, alter credit histories, sabotage information, or commit bill payment or payroll fraud. Modifying, removing, or misrepresenting existing data threatens the integrity and authenticity of computer information. For example, omitting sections of a bad credit history so that only the good credit history remains violates the integrity of the document. Entering false data to complete a fraudulent transfer or withdrawal of money violates the authenticity of banking information. These crimes can be prevented by using a variety of techniques. One such technique is checksumming. Checksumming sums the numerically coded word contents of a file before and after it is used. If the sums are different, then the file has been altered. Other techniques include authenticating the sources of messages, confirming transactions with those who initiate them, segregating and limiting job assignments to make it necessary for more than one person to be involved in committing a crime, and limiting the amount of money that can be transferred through a computer.
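Checksumming can be sketched in a few lines. The example below uses a SHA-256 digest rather than a simple arithmetic sum (a stronger variant of the same before-and-after comparison), and the file name and contents are invented for illustration.

```python
# A sketch of checksumming: record a file's digest before it is used
# and compare afterward; any difference means the file was altered.
import hashlib


def checksum(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


with open("ledger.dat", "wb") as f:  # create an illustrative file
    f.write(b"account 1001: balance 100")

before = checksum("ledger.dat")
# ... the file would be used here ...
after = checksum("ledger.dat")

print("file altered" if before != after else "file unchanged")
```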
IX
DISASTER RECOVERY PLANS
Organizations and businesses that rely on computers need to institute disaster recovery plans that are periodically tested and upgraded. This is because computers and storage components such as diskettes or hard disks are easy to damage. A computer’s memory can be erased, and flooding, fire, or other forms of destruction can damage the computer’s hardware. Computers, computer data, and components should be installed in safe and locked facilities.
Contributed By:
Donn B. Parker
Computer Science
I
INTRODUCTION
Computer Science, study of the theory, experimentation, and engineering that form the basis for the design and use of computers—devices that automatically process information. Computer science traces its roots to work done by English mathematician Charles Babbage, who first proposed a programmable mechanical calculator in 1837. Until the advent of electronic digital computers in the 1940s, computer science was not generally distinguished as being separate from mathematics and engineering. Since then it has sprouted numerous branches of research that are unique to the discipline.
II
THE DEVELOPMENT OF COMPUTER SCIENCE
Early work in the field of computer science during the late 1940s and early 1950s focused on automating the process of making calculations for use in science and engineering. Scientists and engineers developed theoretical models of computation that enabled them to analyze how efficient different approaches were in performing various calculations. Computer science overlapped considerably during this time with the branch of mathematics known as numerical analysis, which examines the accuracy and precision of calculations. (see ENIAC; UNIVAC.)
As the use of computers expanded between the 1950s and the 1970s, the focus of computer science broadened to include simplifying the use of computers through programming languages (artificial languages used to program computers) and operating systems (computer programs that provide a useful interface between a computer and a user). During this time, computer scientists were also experimenting with new applications and computer designs, creating the first computer networks, and exploring relationships between computation and thought.
In the 1970s, computer chip manufacturers began to mass produce microprocessors—the electronic circuitry that serves as the main information processing center in a computer. This new technology revolutionized the computer industry by dramatically reducing the cost of building computers and greatly increasing their processing speed. The microprocessor made possible the advent of the personal computer, which resulted in an explosion in the use of computer applications. Between the early 1970s and 1980s, computer science rapidly expanded in an effort to develop new applications for personal computers and to drive the technological advances in the computing industry. Much of the earlier research that had been done began to reach the public through personal computers, which derived most of their early software from existing concepts and systems.
Computer scientists continue to expand the frontiers of computer and information systems by pioneering the designs of more complex, reliable, and powerful computers; enabling networks of computers to efficiently exchange vast amounts of information; and seeking ways to make computers behave intelligently. As computers become an increasingly integral part of modern society, computer scientists strive to solve new problems and invent better methods of solving current problems.
The goals of computer science range from finding ways to better educate people in the use of existing computers to highly speculative research into technologies and approaches that may not be viable for decades. Underlying all of these specific goals is the desire to better the human condition today and in the future through the improved use of information.
III
THEORY AND EXPERIMENT
Computer science is a combination of theory, engineering, and experimentation. In some cases, a computer scientist develops a theory, then engineers a combination of computer hardware and software based on that theory, and experimentally tests it. An example of such a theory-driven approach is the development of new software engineering tools that are then evaluated in actual use. In other cases, experimentation may result in new theory, such as the discovery that an artificial neural network exhibits behavior similar to neurons in the brain, leading to a new theory in neurophysiology.
It might seem that the predictable nature of computers makes experimentation unnecessary because the outcome of experiments should be known in advance. But when computer systems and their interactions with the natural world become sufficiently complex, unforeseen behaviors can result. Experimentation and the traditional scientific method are thus key parts of computer science.
IV
MAJOR BRANCHES OF COMPUTER SCIENCE
Computer science can be divided into four main fields: software development, computer architecture (hardware), human-computer interfacing (the design of the most efficient ways for humans to use computers), and artificial intelligence (the attempt to make computers behave intelligently). Software development is concerned with creating computer programs that perform efficiently. Computer architecture is concerned with developing optimal hardware for specific computational needs. The areas of artificial intelligence (AI) and human-computer interfacing often involve the development of both software and hardware to solve specific problems.
A
Software Development
In developing computer software, computer scientists and engineers study various areas and techniques of software design, such as the best types of programming languages and algorithms (see below) to use in specific programs, how to efficiently store and retrieve information, and the computational limits of certain software-computer combinations. Software designers must consider many factors when developing a program. Often, program performance in one area must be sacrificed for the sake of the general performance of the software. For instance, since computers have only a limited amount of memory, software designers must limit the number of features they include in a program so that it will not require more memory than the system it is designed for can supply.
Software engineering is an area of software development in which computer scientists and engineers study methods and tools that facilitate the efficient development of correct, reliable, and robust computer programs. Research in this branch of computer science considers all the phases of the software life cycle, which begins with a formal problem specification, and progresses to the design of a solution, its implementation as a program, testing of the program, and program maintenance. Software engineers develop software tools and collections of tools called programming environments to improve the development process. For example, tools can help to manage the many components of a large program that is being written by a team of programmers.
Algorithms and data structures are the building blocks of computer programs. An algorithm is a precise step-by-step procedure for solving a problem within a finite time and using a finite amount of memory. Common algorithms include searching a collection of data, sorting data, and numerical operations such as matrix multiplication. Data structures are patterns for organizing information, and often represent relationships between data values. Some common data structures are called lists, arrays, records, stacks, queues, and trees.
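As a concrete illustration, here is a minimal sketch of one common algorithm, binary search, operating on one common data structure, a sorted array.

```python
# A minimal sketch of binary search over a sorted list: a precise,
# step-by-step procedure that finishes in finite time and memory.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # halve the search range each step
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1   # target can only be in the upper half
        else:
            high = mid - 1  # target can only be in the lower half
    return -1


print(binary_search([2, 3, 5, 7, 11, 13], 11))  # 4
print(binary_search([2, 3, 5, 7, 11, 13], 4))   # -1 (not present)
```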
Computer scientists continue to develop new algorithms and data structures to solve new problems and improve the efficiency of existing programs. One area of theoretical research is called algorithmic complexity. Computer scientists in this field seek to develop techniques for determining the inherent efficiency of algorithms with respect to one another. Another area of theoretical research called computability theory seeks to identify the inherent limits of computation.
Software engineers use programming languages to communicate algorithms to a computer. Natural languages such as English are ambiguous—meaning that their grammatical structure and vocabulary can be interpreted in multiple ways—so they are not suited for programming. Instead, simple and unambiguous artificial languages are used. Computer scientists study ways of making programming languages more expressive, thereby simplifying programming and reducing errors. A program written in a programming language must be translated into machine language (the actual instructions that the computer follows). Computer scientists also develop better translation algorithms that produce more efficient machine language programs.
Databases and information retrieval are related fields of research. A database is an organized collection of information stored in a computer, such as a company’s customer account data. Computer scientists attempt to make it easier for users to access databases, prevent access by unauthorized users, and improve access speed. They are also interested in developing techniques to compress the data, so that more can be stored in the same amount of memory. Databases are sometimes distributed over multiple computers that update the data simultaneously, which can lead to inconsistency in the stored information. To address this problem, computer scientists also study ways of preventing inconsistency without reducing access speed.
Information retrieval is concerned with locating data in collections that are not clearly organized, such as a file of newspaper articles. Computer scientists develop algorithms for creating indexes of the data. Once the information is indexed, techniques developed for databases can be used to organize it. Data mining is a closely related field in which a large body of information is analyzed to identify patterns. For example, mining the sales records from a grocery store could identify shopping patterns to help guide the store in stocking its shelves more effectively. (see Information Storage and Retrieval.)
Operating systems are programs that control the overall functioning of a computer. They provide the user interface, place programs into the computer’s memory and cause it to execute them, control the computer’s input and output devices, manage the computer’s resources such as its disk space, protect the computer from unauthorized use, and keep stored data secure. Computer scientists are interested in making operating systems easier to use, more secure, and more efficient by developing new user interface designs, designing new mechanisms that allow data to be shared while preventing access to sensitive data, and developing algorithms that make more effective use of the computer’s time and memory.
The study of numerical computation involves the development of algorithms for calculations, often on large sets of data or with high precision. Because many of these computations may take days or months to execute, computer scientists are interested in making the calculations as efficient as possible. They also explore ways to increase the numerical precision of computations, which can have such effects as improving the accuracy of a weather forecast. The goals of improving efficiency and precision often conflict, with greater efficiency being obtained at the cost of precision and vice versa.
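The precision limits that numerical computation must work around, and the cost of avoiding them, are easy to demonstrate:

```python
# A small illustration of floating-point precision limits and the
# efficiency/precision trade-off described above.
from decimal import Decimal

total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999, not exactly 1.0
print(total == 1.0)  # False: rounding error has accumulated

# Greater precision costs efficiency: decimal arithmetic is exact
# for this sum but much slower than hardware floating point.
print(sum(Decimal("0.1") for _ in range(10)))  # 1.0
```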
Symbolic computation involves programs that manipulate nonnumeric symbols, such as characters, words, drawings, algebraic expressions, encrypted data (data coded to prevent unauthorized access), and the parts of data structures that represent relationships between values (see Encryption). One unifying property of symbolic programs is that they often lack the regular patterns of processing found in many numerical computations. Such irregularities present computer scientists with special challenges in creating theoretical models of a program’s efficiency, in translating it into an efficient machine language program, and in specifying and testing its correct behavior.
B
Computer Architecture
Computer architecture is the design and analysis of new computer systems. Computer architects study ways of improving computers by increasing their speed, storage capacity, and reliability, and by reducing their cost and power consumption. Computer architects develop both software and hardware models to analyze the performance of existing and proposed computer designs, then use this analysis to guide development of new computers. They are often involved with the engineering of a new computer because the accuracy of their models depends on the design of the computer’s circuitry. Many computer architects are interested in developing computers that are specialized for particular applications such as image processing, signal processing, or the control of mechanical systems. The optimization of computer architecture to specific tasks often yields higher performance, lower cost, or both.
C
Artificial Intelligence
Artificial intelligence (AI) research seeks to enable computers and machines to mimic human intelligence and sensory processing ability, and models human behavior with computers to improve our understanding of intelligence. The many branches of AI research include machine learning, inference, cognition, knowledge representation, problem solving, case-based reasoning, natural language understanding, speech recognition, computer vision, and artificial neural networks.
A key technique developed in the study of artificial intelligence is to specify a problem as a set of states, some of which are solutions, and then search for solution states. For example, in chess, each move creates a new state. If a computer searched the states resulting from all possible sequences of moves, it could identify those that win the game. However, the number of states associated with many problems (such as the possible number of moves needed to win a chess game) is so vast that exhaustively searching them is impractical. The search process can be improved through the use of heuristics—rules that are specific to a given problem and can therefore help guide the search. For example, a chess heuristic might indicate that when a move results in checkmate, there is no point in examining alternate moves.
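A minimal sketch of state-space search follows. The problem (reach a target number by doubling or adding one) is invented to keep the example small; the moves stand in for moves in a game, and breadth-first order stands in for a systematic search that a heuristic could further guide.

```python
# A minimal sketch of state-space search: states are numbers, a "move"
# doubles a number or adds one, and the goal state is the solution.
from collections import deque


def search(start, goal):
    frontier = deque([(start, [start])])  # states awaiting expansion
    seen = {start}
    while frontier:
        state, path = frontier.popleft()  # breadth-first order
        if state == goal:
            return path  # a solution state was found
        for next_state in (state * 2, state + 1):  # possible moves
            if next_state <= goal and next_state not in seen:
                seen.add(next_state)
                frontier.append((next_state, path + [next_state]))
    return None


print(search(1, 10))  # shortest move sequence: [1, 2, 4, 5, 10]
```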
D
Robotics
Another area of computer science that has found wide practical use is robotics—the design and development of computer controlled mechanical devices. Robots range in complexity from toys to automated factory assembly lines, and relieve humans from tedious, repetitive, or dangerous tasks. Robots are also employed where requirements of speed, precision, consistency, or cleanliness exceed what humans can accomplish. Roboticists—scientists involved in the field of robotics—study the many aspects of controlling robots. These aspects include modeling the robot’s physical properties, modeling its environment, planning its actions, directing its mechanisms efficiently, using sensors to provide feedback to the controlling program, and ensuring the safety of its behavior. They also study ways of simplifying the creation of control programs. One area of research seeks to provide robots with more of the dexterity and adaptability of humans, and is closely associated with AI.
E
Human-Computer Interfacing
Human-computer interfaces provide the means for people to use computers. An example of a human-computer interface is the keyboard, which lets humans enter commands into a computer and enter text into a specific application. The diversity of research into human-computer interfacing corresponds to the diversity of computer users and applications. However, a unifying theme is the development of better interfaces and experimental evaluation of their effectiveness. Examples include improving computer access for people with disabilities, simplifying program use, developing three-dimensional input and output devices for virtual reality, improving handwriting and speech recognition, and developing heads-up displays for aircraft instruments in which critical information such as speed, altitude, and heading are displayed on a screen in front of the pilot’s window. One area of research, called visualization, is concerned with graphically presenting large amounts of data so that people can comprehend its key properties.
V
CONNECTION OF COMPUTER SCIENCE TO OTHER DISCIPLINES
Because computer science grew out of mathematics and electrical engineering, it retains many close connections to those disciplines. Theoretical computer science draws many of its approaches from mathematics and logic. Research in numerical computation overlaps with mathematics research in numerical analysis. Computer architects work closely with the electrical engineers who design the circuits of a computer.
Beyond these historical connections, there are strong ties between AI research and psychology, neurophysiology, and linguistics. Human-computer interface research also has connections with psychology. Roboticists work with both mechanical engineers and physiologists in designing new robots.
Computer science also has indirect relationships with virtually all disciplines that use computers. Applications developed in other fields often involve collaboration with computer scientists, who contribute their knowledge of algorithms, data structures, software engineering, and existing technology. In return, the computer scientists have the opportunity to observe novel applications of computers, from which they gain a deeper insight into their use. These relationships make computer science a highly interdisciplinary field of study.
Contributed By:
Charles C. Weems
Computer Memory
I
INTRODUCTION
Computer Memory, a mechanism that stores data for use by a computer. In a computer all data consist of numbers. A computer stores a number into a specific location in memory and later fetches the value. Most memories represent data with the binary number system. In the binary number system, numbers are represented by sequences of the two binary digits 0 and 1, which are called bits (see Number Systems). In a computer, the two possible values of a bit correspond to the on and off states of the computer's electronic circuitry.
In memory, bits are grouped together so they can represent larger values. A group of eight bits is called a byte and can represent decimal numbers ranging from 0 to 255. The particular sequence of bits in the byte encodes a unit of information, such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words.
Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes. Although the prefixes kilo-, mega-, and giga-, are taken from the metric system, they have a slightly different meaning when applied to computer memories. In the metric system, kilo- means 1 thousand; mega-, 1 million; and giga-, 1 billion. When applied to computer memory, however, the prefixes are measured as powers of two, with kilo- meaning 2 raised to the 10th power, or 1,024; mega- meaning 2 raised to the 20th power, or 1,048,576; and giga- meaning 2 raised to the 30th power, or 1,073,741,824. Thus, a kilobyte is 1,024 bytes and a megabyte is 1,048,576 bytes. It is easier to remember that a kilobyte is approximately 1,000 bytes, a megabyte is approximately 1 million bytes, and a gigabyte is approximately 1 billion bytes.
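The binary-prefix arithmetic above can be checked directly:

```python
# The powers-of-two prefixes from the paragraph above, worked out.
for prefix, power in [("kilo", 10), ("mega", 20), ("giga", 30)]:
    print(f"{prefix}byte = 2**{power} = {2 ** power:,} bytes")

# kilobyte = 2**10 = 1,024 bytes
# megabyte = 2**20 = 1,048,576 bytes
# gigabyte = 2**30 = 1,073,741,824 bytes
```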
II
HOW MEMORY WORKS
Computer memory may be divided into two broad categories known as internal memory and external memory. Internal memory operates at the highest speed and can be accessed directly by the central processing unit (CPU)—the main electronic circuitry within a computer that processes information. Internal memory is contained on computer chips and uses electronic circuits to store information (see Microprocessor). External memory consists of storage on peripheral devices that are slower than internal memories but offer lower cost and the ability to hold data after the computer’s power has been turned off. External memory uses inexpensive mass-storage devices such as magnetic hard drives. See also Information Storage and Retrieval.
Internal memory is also known as random access memory (RAM) or read-only memory (ROM). Information stored in RAM can be accessed in any order, and may be erased or written over. Information stored in ROM may also be random-access, in that it may be accessed in any order, but the information recorded on ROM is usually permanent and cannot be erased or written over.
A
Internal RAM
Random access memory is also called main memory because it is the primary memory that the CPU uses when processing information. The electronic circuits used to construct this main internal RAM can be classified as dynamic RAM (DRAM), synchronized dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different ways of using transistors and capacitors to store data. In DRAM or SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor. The transistor is used to switch the charge onto the capacitor. When it is turned on, the transistor acts like a closed switch that allows electric current to flow into the capacitor and build up a charge. The transistor is then turned off, meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor while the transistor is on, and then the transistor is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a charge is present or absent on the relevant capacitor.
DRAM is called dynamic because it must be continually refreshed; the memory chips cannot hold their values on their own over long periods of time. Because capacitors are imperfect, the charge slowly leaks out of them, which would result in loss of the stored data. A DRAM memory system therefore contains additional circuitry that periodically reads and rewrites each data value, replacing the charge on the capacitors, a process known as refreshing memory. The major difference between SDRAM and DRAM lies in how the refresh circuitry is organized. DRAM contains separate, independent circuitry to refresh memory, whereas the refresh circuitry in SDRAM is synchronized to the same hardware clock as the CPU. The hardware clock sends a constant stream of pulses through the CPU's circuitry. Synchronizing the refresh circuitry with the hardware clock results in less duplication of electronics and better coordination between the CPU and the refresh circuits.
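A toy simulation in Python illustrates why refresh matters; the leak rate, threshold, and refresh interval below are invented numbers, not a model of real chips:

# Toy model of one DRAM cell: a charge that leaks over time and is
# periodically restored by the refresh circuitry.
FULL_CHARGE = 1.0
THRESHOLD = 0.5      # below this, a stored 1 would read back as a 0
LEAK_PER_TICK = 0.1  # invented leak rate

charge = FULL_CHARGE          # a 1 has been stored
for tick in range(1, 21):
    charge -= LEAK_PER_TICK   # the capacitor slowly loses its charge
    if tick % 4 == 0:         # refresh: read the bit and rewrite it
        charge = FULL_CHARGE if charge > THRESHOLD else 0.0

print("bit still reads as:", 1 if charge > THRESHOLD else 0)  # 1

Without the refresh step inside the loop, the charge would drain away after a handful of ticks and the stored 1 would be lost.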
In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value without the need for refresh. The chief advantage of SRAM lies in its speed: a computer can access data in SRAM more quickly than it can access data in DRAM or SDRAM. However, SRAM circuitry draws more power and generates more heat than DRAM or SDRAM. The circuitry for an SRAM bit is also larger, which means that an SRAM memory chip holds fewer bits than a DRAM chip of the same size. SRAM is therefore used when access speed is more important than large memory capacity or low power consumption.
The time it takes the CPU to transfer data to or from memory is particularly important because it strongly influences the overall performance of the computer. The time required to complete one read or write operation is known as the memory access time. Current DRAM and SDRAM access times are between 30 and 80 nanoseconds (billionths of a second); SRAM access times are usually about one-fourth of those, making SRAM roughly four times faster than DRAM.
The internal RAM on a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers a memory address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word. Computers also exist in which a word consists of two or eight bytes, or in which a byte consists of six or ten bits.
When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the computer which operation to perform also specifies which memory address or addresses to access. An address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits in the memory use the address to select the bits at the specified location in RAM and send a copy of the data back to the CPU over another set of wires called a data bus. Inside the CPU, the data passes through circuits called the data path to the circuits that perform the arithmetic operation. The exact details depend on the model of the CPU. For example, some CPUs use an intermediate step in which the data is first loaded into a high-speed memory device within the CPU called a register.
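This exchange can be caricatured in Python, with a list standing in for RAM, the index standing in for the address carried on the address bus, and variables standing in for CPU registers (all names and addresses here are invented for illustration):

# RAM as a list of numbered locations; the list index plays the
# role of the address sent over the address bus.
ram = [0] * 256
ram[10] = 7   # first operand, stored at address 10
ram[11] = 5   # second operand, stored at address 11

# The "data bus" returns a copy of each value, which is loaded
# into registers inside the CPU before the ALU operates on it.
register_a = ram[10]
register_b = ram[11]
result = register_a + register_b   # the arithmetic operation

ram[12] = result   # the sum is written back to memory
print(ram[12])     # 12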
B
Internal ROM
Read-only memory is the other type of internal memory. ROM is used to store instructions that the computer needs when it is first turned on. For example, the ROM on a PC contains a basic set of instructions, called the basic input-output system (BIOS), which the PC uses to start up the operating system. BIOS is stored on computer chips in a way that causes the information to remain even when power is turned off.
Information in ROM is usually permanent and cannot easily be erased or written over. A ROM is fully permanent if its information cannot be changed at all: once the ROM has been created, information can be retrieved but not altered. Newer technologies allow ROMs to be semi-permanent, meaning the information can be changed, although writing it is much slower than writing to RAM. Flash memory, for example, acts like a ROM because values remain stored even without power, yet those values can be rewritten.
C
External Memory
External memory can generally be classified as either magnetic or optical, or a combination called magneto-optical. A magnetic storage device, such as a computer's hard drive, uses a surface coated with material that can be magnetized in two possible ways. The surface rotates under a small electromagnet that magnetizes each spot on the surface to record a 0 or 1. To retrieve data, the surface passes under a sensor that determines whether the magnetism was set for a 0 or 1. Optical storage devices such as a compact disc (CD) player use lasers to store and retrieve information from a plastic disk. Magneto-optical memory devices use a combination of optical storage and retrieval technology coupled with a magnetic medium.
C1
Magnetic Media
External magnetic media include magnetic tape, hard disks, and floppy disks. Magnetic tape is a form of external computer memory used primarily for backup storage. Like the surface of a magnetic disk, the surface of tape is coated with a material that can be magnetized. As the tape passes over an electromagnet, individual bits are magnetically encoded. Computer systems using magnetic tape storage employ machinery similar to that used for analog recording: open-reel tapes, cassette tapes, and helical-scan tapes (similar to videotape).
Another form of magnetic memory uses a spinning disk coated with magnetic material. As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans across the surface of the disk, reading and writing magnetic spots in concentric circles called tracks.
Magnetic disks are classified as either hard or floppy, depending on the flexibility of the material from which they are made. A floppy disk is made of flexible plastic with small pieces of magnetic material embedded in its surface, and the read-write head touches the surface of the disk as it scans the floppy. A hard disk is made of rigid metal, with the read-write head flying just above the surface on a cushion of air to prevent wear.
C2
Optical Media
Optical external memory uses a laser to scan a spinning reflective disk on which the presence or absence of nonreflective pits indicates 1s or 0s. This is the same technology employed in the audio CD. Because the disc's contents are permanently stored on it when it is manufactured, this medium is known as compact disc-read only memory (CD-ROM). A variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark when a stronger laser beam strikes it, and can thus have information written on it permanently by a computer.
C3
Magneto-Optical Media
Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface of the disk, heating it slightly. This allows the magnetic write-head to change the physical orientation of small grains of magnetic material (actually tiny crystals) on the surface of the disk. These tiny crystals reflect light differently depending on their orientation: aligning the crystals in one direction stores a 0, while aligning them in the opposite direction stores a 1. A separate, low-power laser reads data from the disk in a way similar to a standard CD-ROM. The advantage of MO disks over CD-ROMs is that they can be both read and written. They are, however, more expensive than CD-ROMs and are used mostly in industrial applications rather than as consumer products.
D
Cache Memory
CPU speeds continue to increase much more rapidly than memory access times decrease. The result is a growing gap in performance between the CPU and its main RAM memory. To compensate for the growing difference in speeds, engineers add layers of cache memory between the CPU and the main memory. A cache consists of a small, high-speed memory system that holds recently used values. When the CPU makes a request to fetch or store a memory value, the CPU sends the request to the cache. If the item is already present in the cache, the cache can honor the request quickly because the cache operates at higher speed than main memory. For example, if the CPU needs to add two numbers, retrieving the values from the cache can take less than one-tenth as long as retrieving the values from main memory. However, because the cache is smaller than main memory, not all values can fit in the cache at one time. Therefore, if the requested item is not in the cache, the cache must fetch the item from main memory.
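A minimal sketch of that discipline in Python, using a dictionary as the cache and an artificial delay to stand in for slow main memory (the capacity and eviction rule are arbitrary choices, not a model of real hardware):

import time

main_memory = {addr: addr * 2 for addr in range(1000)}  # stand-in for RAM
cache = {}          # small, fast store of recently used values
CACHE_CAPACITY = 8  # arbitrary illustrative size

def read(addr):
    if addr in cache:          # cache hit: answer immediately
        return cache[addr]
    time.sleep(0.001)          # pretend main memory is slow
    value = main_memory[addr]  # cache miss: fetch from main memory
    if len(cache) >= CACHE_CAPACITY:
        cache.pop(next(iter(cache)))  # evict the oldest entry
    cache[addr] = value        # keep a copy for next time
    return value

read(42)         # miss: goes to main memory
print(read(42))  # 84, a hit served from the cache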
Cache cannot replace conventional RAM because cache is much more expensive and consumes more power. However, research has shown that even a small cache that can store only 1 percent of the data stored in main memory still provides a significant speedup for memory access. Therefore, most computers include a small, external memory cache attached to their RAM. More important, multiple caches can be arranged in a hierarchy to lower memory access times even further. In addition, most CPUs now have a cache on the CPU chip itself. The on-chip internal cache is smaller than the external cache, which is smaller than RAM. The advantage of the on-chip cache is that once a data item has been fetched from the external cache, the CPU can use the item without having to wait for an external cache access.
III
DEVELOPMENTS AND LIMITATIONS
Since the inception of computer memory, the capacity of both internal and external memory devices has grown steadily, at a rate that leads to a quadrupling in size roughly every three years. Computer industry analysts expect this rapid rate of growth to continue unimpeded. Computer engineers consider it possible to make multigigabyte memory chips and disks capable of storing a terabyte (one trillion bytes) of data.
Some computer engineers are concerned that silicon-based memory chips are approaching a limit in the amount of data they can hold. However, it is expected that transistors can be made at least four times smaller before inherent limits of physics make further reductions difficult. Engineers also expect that the external dimensions of memory chips will increase by a factor of four, meaning that larger amounts of memory will fit on a single chip. Current memory chips use only a single layer of circuitry, but researchers are working on ways to stack multiple layers onto one chip. Once all of these approaches are exhausted, RAM may reach a limit. Researchers, however, are also exploring more exotic technologies with the potential to provide even more capacity, including the use of biotechnology to produce memories out of living cells. The memory in a computer is composed of many memory chips. While current memory chips contain megabytes of RAM, future chips will likely have gigabytes of RAM on a single chip. To add to RAM, computer users can purchase memory cards that each contain many memory chips. In addition, future computers will likely have advanced data transfer capabilities and additional caches that enable the CPU to access memory faster.
IV
HISTORY
Early electronic computers in the late 1940s and early 1950s used cathode-ray tubes (CRTs), similar to a computer display screen, to store data. The coating on a CRT remains lit for a short time after an electron beam strikes it. Thus, a pattern of dots representing 1s and 0s could be written on the CRT and then read back for a short time before fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents. A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4 kilobytes.
International Business Machines Corporation (IBM) developed magnetic core memory in the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of magnetic material woven into meshes of thin wires. When the computer sent a current through a pair of wires, the ring at their intersection became magnetized either clockwise or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the current. Computer manufacturers first used core memory in production computers in the 1960s, at about the same time that they began to replace vacuum tubes with transistors. Magnetic core memory was used through most of the 1960s and into the 1970s.
The next step in the development of computer memory came with the introduction of integrated circuits, which enabled multiple transistors to be placed on one chip. Computer scientists developed the first such memory when they constructed an experimental supercomputer called Illiac-IV in the late 1960s. Integrated circuit memory quickly displaced core and has been the dominant technology for internal memory ever since.
Reviewed By:
Douglas E. Comer
Microsoft ® Encarta ® 2007. © 1993-2006 Microsoft Corporation. All rights reserved.
Computer Animation
I
INTRODUCTION
Computer Animation, creation of the illusion of motion by viewing a succession of computer-generated still images. Prior to the advent of computers, animation was accomplished by filming hand-drawn or painted sequences on plastic or paper, called cels, one frame at a time. Computers were first used to control the movements of the artwork and the camera. Now computers create the artwork and simulate the camera.
Computer animation can be used to create special effects and to simulate images that would be impossible to show with nonanimation techniques, such as a spacecraft flying by the planet Saturn. Computer animation can also produce images from scientific data, and it has been used to visualize large quantities of data in the study of interactions in complex systems, such as fluid dynamics, particle collisions, and the development of severe storms. These mathematically based models use animation to help researchers see relationships that might otherwise be overlooked. Computer animation has also been used in legal cases to reconstruct accidents.
II
HOW COMPUTER ANIMATION WORKS
In traditional frame-by-frame animation, the illusion of motion is created by filming a sequence of hand-painted cels and then playing the images back at high speeds, typically 14 to 30 frames per second. In computer animation, the art is created using computer programs, frame by frame, and then recorded, edited, and played back.
Another computer animation technique is real-time animation, in which the frames are created using a computer and then immediately displayed on a computer monitor. This technique eliminates the interim step of digitally recording the images; however, real-time animation currently does not produce high quality or richly detailed results. It is best suited for creating simple animations for video games.
III
COMPUTER-ASSISTED ANIMATION
In the traditional process of animation, a storyboard (a scene-by-scene illustration of the plot) is drawn first, the soundtrack is completed, and a senior animator creates key animation frames. Other animators then draw the frames in between the key scenes, color is added, and each frame is then filmed. Computers can be used to assist or replace every phase of this animation process.
A
In-Betweening
The process of creating the intermediate frames to fill in the action from key scene to key scene is called in-betweening. Techniques have been developed that allow the computer to create the in-between frames by estimating common points from key frame to key frame. In the simplest case, the computer draws the in-between movement of two corresponding points by calculating the mid-point distance. Repeated calculation of mid-points can provide the illusion of smooth and continuous motion.
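A minimal Python sketch of midpoint in-betweening, with invented key-frame positions:

def midpoint(p, q):
    # An in-between position lies halfway between two key positions.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

key_a = (0, 0)    # character's position in the first key frame
key_b = (80, 40)  # position in the next key frame

mid = midpoint(key_a, key_b)     # one in-between frame
quarter = midpoint(key_a, mid)   # repeat for smoother motion
three_quarter = midpoint(mid, key_b)

print(quarter, mid, three_quarter)  # (20.0, 10.0) (40.0, 20.0) (60.0, 30.0)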
B
Painting Systems
The hand painting of animated cels is a painstaking process, with an average output of 25 cels per day per painter. Sometimes cels are stacked together to create different images—for example, the cels may interact, overlap, or provide backgrounds for one another. When a large number of cels are stacked, the transparent layers become slightly opaque. The cel painter must then compensate for this effect by varying the image colors; this process often introduces errors. Computers can eliminate these errors and increase production by consistently coloring the most complex areas of frames. Computer painting uses a coloring, or filling, process in which the artist specifies a color and then selects a pixel, the smallest individual picture element on the computer screen. The computer then changes all adjacent pixels that have the same color (or nearly the same color) to the newly specified color.
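The filling process described here is essentially the classic flood-fill algorithm. A compact Python sketch over a small grid of color codes follows; the grid and color values are invented, and exact color matching is used for simplicity:

def flood_fill(image, x, y, new_color):
    # Recolor the selected pixel and every connected pixel of the
    # same color, as a digital cel painter's fill tool does.
    old_color = image[y][x]
    if old_color == new_color:
        return
    stack = [(x, y)]
    while stack:
        px, py = stack.pop()
        if 0 <= py < len(image) and 0 <= px < len(image[0]) \
                and image[py][px] == old_color:
            image[py][px] = new_color
            stack.extend([(px + 1, py), (px - 1, py),
                          (px, py + 1), (px, py - 1)])

cel = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]            # 0 = unpainted region, 1 = outline
flood_fill(cel, 0, 0, 5)     # paint the region containing pixel (0, 0)
print(cel)                   # [[5, 5, 1], [5, 1, 1], [1, 1, 0]]

Note that the isolated 0 in the bottom-right corner is untouched: only pixels connected to the starting point are recolored.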
C
Camera Stands and Editing
Once the frames are painted, they must be filmed. Traditionally, an animation stand positions both the cels and the camera to allow the layers of cels and the camera to move independently. Computers simulate the animation stand and the camera. The computer controls this virtual camera in three-dimensional space, while focusing on the series of two-dimensional images or cels in the computer's memory. In effect, both the cels and the camera reside within the computer. Special characteristics of real cameras, such as fish-eye lenses and lens flare, can be simulated by the virtual camera. This ability to control a virtual camera, combined with powerful digital video editing, enables the animator to complete the film entirely in a computer-generated environment.
IV
COMPUTER-MODELED ANIMATION
Computer-modeled animation is the process of creating three-dimensional models of animated objects. Typically, this is achieved by representing the objects using the following methods: wire-frame, surface, and solid.
Wire-frame representations are specified by a set of line segments, typically the object's edges, and a set of points on the surface called vertices. While a wire-frame representation often does not produce very realistic images, it is good for quick studies, such as how the object will move and fit in a particular scene. Surface representations are specified by a set of primitive features, such as a collection of polygons used to approximate smooth curves and surfaces. While it is possible in principle to model an object's surface exactly as a collection of primitive features, it may not be practical to measure and store them, because complex objects may require an infinite number of features to create a perfectly smooth surface. Solid representations are specified by a set of primitive shapes or portions of primitive shapes. For example, a human might be represented by a sphere for the head and cubes for the torso and limbs. Solid representations can specify both the inner and outer surfaces of an object.
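A wire-frame model reduces to a list of vertices and a list of edges; a small Python sketch of a unit cube (coordinates chosen for illustration) shows the idea:

# Eight corner vertices of a unit cube, as (x, y, z) coordinates.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# An edge joins two corners that differ in exactly one coordinate.
edges = [(i, j)
         for i in range(len(vertices))
         for j in range(i + 1, len(vertices))
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]

print(len(vertices), len(edges))  # 8 12, a cube's corners and edges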
A
Image Rendering
The process of creating a realistic three-dimensional scene is called rendering. The computer is given a detailed description of the objects that make up the scene, along with the specifications of the camera. To create photorealistic images, the computer must calculate the viewer's perspective of the image and the visible objects and surfaces; add shading by determining the available light on each surface; add reflections and shadows; give surfaces textures, patterns, and roughness to make objects appear more realistic; handle the transparency of objects; and remove surfaces hidden by other objects (see Computer Graphics).
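One common shading rule, offered here purely as an illustration of the shading step rather than as the method any particular renderer uses, is Lambertian diffuse shading, in which brightness depends on the angle between the surface normal and the direction to the light:

import math

def diffuse_brightness(normal, light_dir):
    # Lambertian shading: brightness falls off with the angle
    # between the surface normal and the light direction.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    length = math.sqrt(sum(n * n for n in normal)) * \
             math.sqrt(sum(l * l for l in light_dir))
    return max(0.0, dot / length)  # clamp: surfaces facing away are dark

print(diffuse_brightness((0, 0, 1), (0, 0, 1)))  # 1.0, light head-on
print(diffuse_brightness((0, 0, 1), (1, 0, 1)))  # ~0.71, light at 45 degrees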
Once the objects and lights in a three-dimensional scene are rendered, the animator specifies their movement within the scene as well as the motions of the camera. Key frames synchronize the movement of the objects just as in computer-assisted animation, and the in-between frames must be created. One technique, called parametric key-frame animation, interpolates, or blends, the in-between images. Another technique, algorithmic animation, controls motion by applying rules that govern how the objects move. When the objects and their behaviors have been specified, each scene is rendered frame by frame by the virtual camera and stored; the final animated feature is then played back.
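Both techniques fit in a few lines of Python; the key positions, frame rate, and gravity rule below are invented for illustration:

# Parametric key-frame animation: blend two key positions with a
# parameter t between 0 (first key frame) and 1 (second key frame).
def interpolate(key_a, key_b, t):
    return tuple(a + (b - a) * t for a, b in zip(key_a, key_b))

print(interpolate((0, 100), (60, 0), 0.25))  # (15.0, 75.0)

# Algorithmic animation: a rule (here, simple gravity) moves the
# object a little further on every frame.
position, velocity, gravity = 100.0, 0.0, -9.8
for frame in range(3):
    velocity += gravity / 24   # assuming 24 frames per second
    position += velocity / 24
    print(round(position, 2))  # the object falls faster each frame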
Despite the power of today's computers and the innovations used to accelerate traditional animation processes, modern computer animations still demand faster and more powerful computers to exploit new techniques and potentially photorealistic effects. In the fully animated Disney feature Toy Story (1995), it took PIXAR Animation Studios an average of 3 hours to render a single frame, and some frames took as long as 24 hours. For this 77-minute movie, 110,880 frames were rendered, requiring approximately 38 years of computing time distributed among many computers.
Contributed By:
J. Donald Hubbard
Kenneth R. O'Connell
Microsoft ® Encarta ® 2007. © 1993-2006 Microsoft Corporation. All rights reserved.
Central Processing Unit
I
INTRODUCTION
Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU’s function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU’s processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.
II
HOW A CPU WORKS
A
CPU Function
A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer’s main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.
As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder, which interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers, where they can be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing them in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the CPU follows the program instructions in the correct order.
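A minimal sketch of this cycle in Python; the three-field instruction format and register names below are invented for illustration and do not correspond to any real instruction set:

# Each instruction names an operation and two register operands.
registers = {"R0": 6, "R1": 7, "R2": 0}
program = [("ADD", "R0", "R1"),   # R0 <- R0 + R1
           ("MUL", "R0", "R0")]   # R0 <- R0 * R0

program_counter = 0
while program_counter < len(program):
    op, dst, src = program[program_counter]   # fetch and decode
    if op == "ADD":                           # the ALU's job
        registers[dst] = registers[dst] + registers[src]
    elif op == "MUL":
        registers[dst] = registers[dst] * registers[src]
    program_counter += 1                      # next instruction in order

print(registers["R0"])  # (6 + 7) squared = 169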
B
Branching Instructions
The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to shift abruptly to an instruction location out of sequence. Branches are either unconditional or conditional. An unconditional branch always jumps to the new, out-of-sequence instruction stream. A conditional branch first tests the result of a previous operation to see whether the branch should be taken; for example, a branch might be taken only if a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
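In the same toy-machine style as the sketch above (instruction names again invented), a conditional branch tests a flag and, when the test succeeds, overwrites the program counter instead of letting it advance:

# A loop that counts down from 3, built from a conditional branch.
registers = {"R0": 3}
negative_flag = False
program = [("SUB1", "R0"),   # address 0: R0 <- R0 - 1, sets the flag
           ("BRNEG", 3),     # address 1: jump to address 3 if flag set
           ("JUMP", 0),      # address 2: unconditional branch
           ("HALT",)]        # address 3

pc = 0
while program[pc][0] != "HALT":
    instr = program[pc]
    if instr[0] == "SUB1":
        registers[instr[1]] -= 1
        negative_flag = registers[instr[1]] < 0
        pc += 1
    elif instr[0] == "BRNEG":  # conditional: taken only if flag is set
        pc = instr[1] if negative_flag else pc + 1
    elif instr[0] == "JUMP":   # unconditional: always taken
        pc = instr[1]

print(registers["R0"])  # -1: the loop ran until the result went negative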
C
Clock Pulses
The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU's circuitry. The CPU uses these clock pulses to synchronize its operations: the smallest increments of CPU work are completed between sequential clock pulses, while more complex tasks take several clock periods to complete. Clock rates are measured in hertz, the number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. The clock rate is one measure of a processor's speed.
D
Fixed-Point and Floating-Point Numbers
Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU’s floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
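A short Python sketch makes the trade-off concrete; the two-decimal-digit fixed-point convention below is an invented example, not a description of any particular CPU:

# Fixed point: store dollars-and-cents amounts as whole numbers of
# cents, fixing two digits after the decimal point.
price_cents = 1999                    # represents 19.99
tax_cents = price_cents * 8 // 100    # 8% tax, integer arithmetic only
print((price_cents + tax_cents) / 100)  # 21.58

# Floating point: scientific notation, a decimal number times a power
# of ten, covers a huge range at the cost of more complex arithmetic.
mass_of_earth = 5.97e24   # 5.97 times 10 to the 24th, in kilograms
mass_of_atom = 1.67e-27   # a far smaller value, same representation
print(mass_of_earth * mass_of_atom)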
III
HISTORY
A
Early Computers
In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared with today's microprocessor-driven computers. The first general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC's CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.
B
The Transistor
A solution to the problems posed by vacuum tubes came in 1948, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, nearly a decade passed before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.
C
The Integrated Circuit
Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the millions or even billions of transistors per chip available on today’s CPUs.
In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.
IV
CURRENT DEVELOPMENTS
The competitive nature of the computer industry and the demand for faster, more cost-effective computing continue to drive the development of faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate into improvements in CPU speed.
Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.
Contributed By:
Peter M. Kogge
Microsoft ® Encarta ® 2007. © 1993-2006 Microsoft Corporation. All rights reserved.