Guest Editor's Introduction

New Computers for Artificial Intelligence Processing

Benjamin W. Wah
University of Illinois at Urbana-Champaign

New computer architectures can be used to improve efficiency in the processing of some time-consuming AI tasks, but cannot overcome the combinatorial complexity of AI processing.

This special issue of Computer is about recent efforts to produce hardware and software architectures that support artificial intelligence (AI) applications. The earliest computer design to support AI processing was the implementation of Lisp on the PDP-6 computer and its successors, the PDP-10 and PDP-20, all made by the Digital Equipment Corporation. The half-word instructions and the stack instructions of these machines were developed with Lisp's requirements in mind. Since then, we have seen a proliferation of special-purpose computers that can support symbolic and AI processing. These efforts include the implementation in hardware of primitive operations fundamental to applications in AI, the design of a collection of primitive architectures to support more complex functions, and the design of system-level architectures to support one or more languages or knowledge-representation schemes.

Characteristics of AI computations

To develop a special-purpose computer to support AI processing, the requirements of AI applications must be fully understood. Conventional numerical algorithms are usually well analyzed, and bounds on computational performance can be established. In contrast, many AI applications are characterized by symbolic processing, nondeterministic computations, dynamic execution, a large potential for parallel and distributed processing, knowledge management, and open systems.

Symbolic processing. AI applications generally process data in symbolic form. Primitive symbolic operations, such as comparison, selection, sorting, matching, logic set operations (union, intersection, and negation), contexts and partitions, transitive closure, and pattern retrieval and recognition, are frequently used. At a higher level, symbolic operations on patterns such as sentences, speech, graphics, and images may be needed.

Nondeterministic computations. Many AI algorithms are nondeterministic; that is, the information available in advance is insufficient to plan which procedures will be executed, or when they will terminate. This stems from a lack of knowledge and an incomplete understanding of the problem, and it forces all possibilities to be enumerated exhaustively when the problem is solved.
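
For illustration (not from the article), here is a minimal Python sketch of the exhaustive, backtracking enumeration that such nondeterminism forces; the 4-queens puzzle stands in for an AI search problem, and all names are invented:

    def safe(placed, col):
        # a queen in the next row at `col` must not share a column or a
        # diagonal with any queen already placed
        row = len(placed)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placed))

    def search(placed, n=4):
        if len(placed) == n:              # every row filled: a solution
            return [placed]
        solutions = []
        for col in range(n):              # nondeterministic choice point:
            if safe(placed, col):         # nothing tells us which branch
                solutions += search(placed + [col], n)   # wins, so try all
        return solutions

    print(search([]))                     # [[1, 3, 0, 2], [2, 0, 3, 1]]

Because no available information predicts which branch succeeds, every branch must be explored, which is the combinatorial cost referred to throughout this introduction.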



Dynamic execution. Owing to this lack of complete knowledge and to the uncertainty of the solution process, the capabilities and features of existing data structures and functions may be defined, and new data structures and functions created, only while the problem is actually being solved. Further, the maximum size of a given structure may be so large that it is impossible to allocate the necessary memory space ahead of time. As a result, memory space may have to be allocated and deallocated dynamically at run time.

Large potential for parallel and distributed computing. In parallel processing of deterministic algorithms, a set of necessary and independent tasks must be found and processed concurrently. This class of parallelism is called AND-parallelism. In AI processing, the large degree of nondeterminism offers an additional source of parallelism: tasks at a nondeterministic decision point can be processed in parallel. This latter class is called OR-parallelism.
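
For illustration, a minimal Python contrast between the two (threads stand in for processors; the task bodies and data are invented):

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def subgoal(x):
        # a deterministic, independent subtask; every result is needed
        return x * x

    def alternative(x):
        # one branch at a nondeterministic choice point; it may fail (None)
        return x if x % 2 == 0 else None

    with ThreadPoolExecutor() as pool:
        # AND-parallelism: all independent subtasks must complete, and
        # the answer combines all of their results
        and_results = list(pool.map(subgoal, [1, 2, 3]))

        # OR-parallelism: alternatives run concurrently, and the first
        # branch that succeeds answers the query
        futures = [pool.submit(alternative, x) for x in [1, 3, 4]]
        or_result = next(r for f in as_completed(futures)
                         if (r := f.result()) is not None)

    print(and_results, or_result)   # [1, 4, 9] 4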

Knowledge management. Knowledge is an important component in reducing the complexity of solving a given problem: more useful knowledge means less exhaustive searching. However, many AI problems may have a very high degree of inherent complexity; hence, the amount of useful knowledge may also be exceedingly large. Further, the knowledge acquired may be fuzzy, heuristic, and uncertain in nature. The management, representation, manipulation, and learning of knowledge are, therefore, important problems to be addressed.

Open systems. In many AI applications, the knowledge needed to solve the problem may be incomplete, because the source of the knowledge is unknown at the time the solution is devised or because the environment may change in ways that cannot be anticipated at design time. AI systems should be designed with an open concept and allow continuous refinement and acquisition of new knowledge.

Design issues

The essential issues in designing a computer system to support a given AI application can be classified into the representation level, the control level, and the processor level (see Table 1).

* The representation level deals with the knowledge and methods to solve the problem, and the means to represent them.

* The control level is concerned with the detection of dependencies and parallelism in the algorithmic and program representations of the problem.

* At the processor level, the hardware and architectural components needed to evaluate the algorithmic and program representations are developed.

Many current designs start with a given language or knowledge-representation scheme; hence, the representation level is already fixed. Research has focused on automatic methods to detect parallelism, as well as on providing hardware support for time-consuming operations. However, the representation level is an important element in the design process and dictates whether the given problem can be solved in a reasonable amount of time. At this time, little has been done to provide tools (a) to aid users in collecting and organizing knowledge or (b) to aid them in designing efficient algorithms.

Hierarchy of meta-knowledge. Domain knowledge refers to objects, events, and actions per se, while meta-knowledge includes the extent and origin of the domain knowledge of a particular object, the reliability of certain information, and the possibility that an event will occur. In other words, meta-knowledge is knowledge about domain knowledge. Meta-knowledge can be considered as existing in a hierarchy: meta-knowledge is involved in deciding the appropriate domain knowledge to apply, while meta-meta-knowledge is the control knowledge about the meta-knowledge. The highest levels of meta-knowledge amount to the commonsense knowledge known to humans.

The use of meta-knowledge allows one to express a partial specification of program behavior in a declarative language, hence making programs more aesthetic, simpler to build, and easier to modify. Moreover, it facilitates incremental system development; that is, one can start from a search-intensive algorithm and incrementally add control information until one obtains an algorithm that may be search-free. Lastly, many knowledge-representation schemes and programming paradigms, such as logic, frame, semantic-network, and object-oriented languages, can be integrated with the aid of meta-knowledge.
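
As a hedged sketch of this incremental style (the route-finding problem and all data below are invented), the domain knowledge is a graph of moves, and the meta-knowledge is a separate, declarative ordering hint; adding the hint redirects the search without rewriting the domain layer:

    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}   # domain
    meta = {"a": 3, "b": 1, "c": 2, "d": 0}   # meta-knowledge: estimated
                                              # distance to the goal

    def find(node, goal, path=()):
        if node == goal:
            return path + (node,)
        # control knowledge: try the most promising successor first;
        # deleting the sorted() reverts to blind, search-intensive order
        for nxt in sorted(graph[node], key=meta.get):
            if nxt not in path:
                result = find(nxt, goal, path + (node,))
                if result:
                    return result
        return None

    print(find("a", "d"))   # ('a', 'b', 'd')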

There are many open problems related to the use of meta-knowledge: its unambiguous specification, its consistency verification, the learning of new meta-knowledge, and the use of appropriate statistical metrics.

Domain-knowledge representation. Declarative representations specify static knowledge, while procedural representations specify static knowledge as well as the control information that operates on the static knowledge.

Declarative representations are referentially transparent; that is, the meaning of a whole can be derived solely from the meaning of its parts and is independent of their historical behavior. Declarative representations offer a higher potential for parallelism than procedural representations, but are usually associated with a large search space that may partly counteract the gains of parallel processing.

In contrast, procedural schemes allow the specification of facts and heuristic information and their direct interaction, hence eliminating wasteful searching. However, they may over-specify the precedence constraints and restrict the degree of parallel processing. In choosing the appropriate representation scheme, trade-offs must be made among the amount of memory space required to store the knowledge, the time allowed for making inferences, the expected usage of the knowledge, and the underlying computer architecture and technological limitations.
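
A compact, invented illustration of the trade-off: the declarative version states only facts and leaves the search to a generic enumeration (independent probes, more potential parallelism); the procedural version fixes the traversal order itself (less search, fixed precedence):

    parent = {("tom", "ann"), ("ann", "joe"), ("tom", "sue")}   # facts

    def grandparent_declarative(x, z):
        # generic enumeration over all facts; each candidate y is an
        # independent probe that could be tested in parallel
        return any((x, y) in parent and (y, z) in parent
                   for (_, y) in parent)

    def grandparent_procedural(x, z):
        # hand-coded control: children of x first, then their children
        return any((c, z) in parent for (p, c) in parent if p == x)

    print(grandparent_declarative("tom", "joe"),
          grandparent_procedural("tom", "joe"))   # True True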

AI languages and programming. Conventional imperative languages are inadequate for AI programming owing to their inefficiency in symbolic and pattern processing and their unacceptable programming complexity. New AI languages feature large declarative power, symbolic processing constructs, representation of information by lists, and the use of recursion as the only control mechanism. Function-, logic-, and object-oriented languages are the major programming paradigms used for AI today, and hybrids of these paradigms have been developed. These languages differ in their expressive power, their ease of implementation, their ability to specify parallelism, and their ability to include heuristic knowledge. A language-oriented AI computer will inherit all the features and limitations of the language it implements.

Truth maintenance. Many AI applications are characterized by a lack of consistent and complete knowledge at the representation level. Hence, it may be necessary to modify the existing knowledge base continually and to maintain its consistency as new knowledge is acquired. Truth maintenance consists of recognizing an inconsistency, modifying the state to remove it, and verifying that all inconsistencies are detected and corrected properly. The process of removing inconsistencies may itself be inconsistent, and may introduce further inconsistencies into the knowledge base.
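
A toy sketch of this cycle, with an invented belief base and an invented notion of conflict; note the loop, since one retraction may expose further inconsistencies:

    beliefs = {"bird(tweety)", "flies(tweety)", "penguin(tweety)"}
    # each entry: a set of beliefs that clash, and the one to retract
    conflicts = [({"penguin(tweety)", "flies(tweety)"}, "flies(tweety)")]

    def maintain(beliefs):
        changed = True
        while changed:                    # repeat: removals may cascade
            changed = False
            for clash, retract in conflicts:
                if clash <= beliefs:      # inconsistency recognized
                    beliefs.discard(retract)   # modify state to remove it
                    changed = True
        return beliefs

    print(maintain(beliefs))   # flies(tweety) has been withdrawn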

Partitioning and restructuring. These refer to the reorganization and decomposition of the knowledge base and the AI program to achieve more efficient processing. The issues that need to be considered are similar to those for conventional multiprocessing and parallel-processing systems, namely, granularity, static and dynamic detection of parallelism, and restructuring. However, the methods to resolve these issues are different. Owing to the nondeterminism encountered in AI processing, AI tasks may be decomposed into a large number of smaller tasks, which will influence the design of a special-purpose computer system to support AI processing. Many of the proposed AI systems have a collection of simple processing elements to execute tasks of small granularity and another collection of more complex processing elements to execute tasks of larger granularity.

The detection of parallelism is also complicated by the nondirectionality of the modes of variables, the dynamic creation and destruction of data, and the nondeterminism. In many languages designed for AI processing, the input/output modes of variables and the necessary data structures are defined at run time, so static analysis, allocation, and scheduling are impossible. Dynamic detection and scheduling do not give satisfactory performance because of their relatively high overheads for tasks of small granularity. One popular solution is to require users to supply additional information in order to allow compile-time analysis. The amount of speedup that parallel processing of nondeterministic tasks will provide is not clear, although the potential for processing these tasks in parallel is great. Without appropriately guiding the search, restructuring the program, and detecting redundant computations, much of the power of parallel processing may be directed toward fruitless searches.

Synchronization. There are two levels of synchronization: control-level synchronization and data-level synchronization.

In procedural languages, if one statement precedes another in the program, the implication is that the first must execute before the second whenever the two share common variables; that is, control-level synchronization is implicit wherever data-level synchronization is needed. This implicit execution order may over-specify the control-level synchronization the problem actually requires.

On the other hand, if the tasks are specified as a set, as in a number of declarative languages, then control-level synchronization is absent, and the set of tasks can be processed in parallel if the tasks do not share common variables.

If the tasks have common variables but are semantically independent, then they have to be processed sequentially, in an arbitrary order, to maintain data-level synchronization. The difficulty of specifying control-level synchronization when tasks are semantically dependent is a major problem in declarative languages such as Prolog. For example, the decomposition of a set into two subsets in Quicksort must be performed before the subsets are sorted; hence, the tasks for decomposition and for sorting the subsets are both semantically dependent and data dependent. To overcome this problem, programmers are provided with additional primitives, such as the input/output modes of variables in a Prolog program, to specify the necessary control-level synchronization. These primitives may have side effects and may not be able to specify all control-level synchronization completely in all situations. These problems may have to be deferred to run time, until sufficient information is available to solve them.
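
The Quicksort example in concrete form (ordinary Python, written to expose the dependence): the partition must complete before either recursive sort can start, but the two recursive sorts share no data and are candidates for parallel execution:

    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        low  = [x for x in rest if x <  pivot]   # partitioning: must be
        high = [x for x in rest if x >= pivot]   # finished first
        # low and high are independent, so the two recursive calls
        # could be scheduled in parallel once the partition is done
        return quicksort(low) + [pivot] + quicksort(high)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]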

In short, there is a trade-off between the expressive power of the language and the implementation difficulty in designing a special-purpose computer system to support an AI language. New languages that combine the ability of functional languages to specify parallel tasks and that of logic languages to specify nondeterminism are evolving.

Synchronization is needed when the system is message-based, but may not be needed in systems that are value-based or marker-based. In a value-based system, multiple values arriving at a processor simultaneously are combined into a single value; hence contention cannot occur and synchronization is not necessary. Neural networks and the Boltzmann machine are examples of this class. In systems supporting marker-passing, such as NETL and the Connection Machine, the markers in a set represent entities with a common property and are identified in a single broadcast; hence synchronization is not necessary.
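
A rough sketch of the marker-passing idea (the small isa network below is invented, and real marker-passing hardware propagates set-at-a-time rather than looping):

    isa = {"canary": "bird", "ostrich": "bird", "bird": "animal"}

    def mark(property_root):
        # one conceptual "broadcast": mark every node whose isa chain
        # reaches the root, with no per-message synchronization
        marked = set()
        for node in isa:
            n = node
            while n is not None:
                if n == property_root:
                    marked.add(node)
                n = isa.get(n)        # chain ends at a node with no link
        return marked

    print(mark("bird"))   # {'canary', 'ostrich', 'bird'}, in some order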

Scheduling. Scheduling is the selection of ready tasks to assign to available processors. It is especially important when there is nondeterminism in the algorithm. Scheduling can be static or dynamic: static scheduling is performed before the tasks are executed, while dynamic scheduling is carried out while they execute.

The difficulty in designing a good scheduler lies in the heuristic metrics that guide the nondeterministic search. The metrics used must be efficient and accurate. Trade-offs must be made among the dynamic overhead incurred in communicating the heuristic-guiding and pruning information, the benefits that would be gained if this information led the search in the right direction, and the granularity of tasks.
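
A bare-bones dynamic scheduler in the sense used here: ready tasks sit in a shared queue and idle workers pull the next one, paying a per-task overhead that dominates when task granularity is small (the tasks and counts are placeholders):

    import queue
    import threading

    tasks = queue.Queue()
    for t in range(8):
        tasks.put(t)                     # tasks become ready over time

    def worker():
        while True:
            try:
                t = tasks.get_nowait()   # dynamic assignment at run time
            except queue.Empty:
                return
            _ = t * t                    # placeholder for a small task

    workers = [threading.Thread(target=worker) for _ in range(3)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()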

In practice, the merits of heuristic guiding are not clear, since the heuristic information may be fallible. As a result, some AI architects do not schedule nondeterministic tasks in parallel. The excessive overhead, coupled with the fallibility of heuristic information, also leads some designers to apply only static scheduling to AI programs.

Micro-level, macro-level, and system-level architectures. The VLSI technology that has flourished in the past 10 years has resulted in the development of many special-purpose computers.

* Micro-level architectures to support AI processing consist of architectural designs that are fundamental to applications in AI. Examples of basic computational problems that are solved efficiently in VLSI are set intersection, transitive closure, contexts and partitions, best-match recognition, recognition under transformation, sorting, string and pattern matching, dynamic programming, selection, and proximity searches. Special features in AI languages that are overhead-intensive can also be supported by hardware. Examples of these architectures include unification hardware (sketched in software after this list), tag bits for dynamic data-type checking, and hardware stacks.

* The macro-level is an intermediate level between the micro-level and the system level. Macro-level architectures can be made up of a variety of micro-level architectures and can perform more complex operations. Examples include dictionary and database machines, architectures for searching, and architectures for managing dynamic data structures (such as garbage-collection hardware).

* The system-level architectures available today are generally oriented toward one or a few of the languages and knowledge-representation schemes and are designed to provide architectural support for overhead-intensive features in these languages and schemes. Examples include systems to support functional programming languages, logic languages, object-oriented languages, production rules, semantic networks, and special applications, such as robotics and natural-language understanding.
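
As promised under the micro-level item, here is a compact software sketch of unification, the operation that unification hardware accelerates; variables are strings beginning with "?", the terms are invented, and the occurs check is omitted for brevity:

    def unify(a, b, subst=None):
        subst = dict(subst or {})
        if isinstance(a, str) and a.startswith("?"):
            a = subst.get(a, a)          # follow an existing binding
        if isinstance(b, str) and b.startswith("?"):
            b = subst.get(b, b)
        if a == b:
            return subst
        if isinstance(a, str) and a.startswith("?"):
            subst[a] = b                 # bind an unbound variable
            return subst
        if isinstance(b, str) and b.startswith("?"):
            subst[b] = a
            return subst
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):       # unify structures element-wise
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None                      # clash: the terms cannot match

    print(unify(("likes", "?x", "mary"), ("likes", "john", "?y")))
    # {'?x': 'john', '?y': 'mary'}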

Design methodology

The issues classified in Table 1 provide a view of the design methodology for special-purpose computers that support AI processing.

About the cover

The explosion of form and color here and on the cover symbolizes the complexity of AI processing, specifically its combinatorial complexity. The enumerative nature of many algorithms used in AI applications can lead to endless searching for the correct combinations. The key to harnessing this unbridled combinatorial stampede is to establish good heuristics and efficient computers.

The graphics were created by a graphics program as an iterative process in which the output of the polynomial y = z² + c provides the input for the same equation, whose output in turn becomes input, and so on (a brief code sketch of this iteration follows the references below). The images, in an infinite regress, are made up of smaller and smaller clones of the parent image. The colorless areas of the images locate a set of numbers in the complex plane known in the field of fractal geometry as the Mandelbrot set, named after Benoit Mandelbrot of the IBM T. J. Watson Research Center (see references 1-4).

The images were generated at the Cornell National Supercomputer Facility in conjunction with the university's Mathematics Department and Theory Center. They were computed on Floating Point Systems' Model 164 or 264 Array Processors attached to an IBM 3090/400 Quad Processor mainframe running VM/370 Extended Architecture with 999 million bytes of virtual address space. They were displayed on an Advanced Electronics Design model 512 graphics tube. The AED 512 was attached to the IBM 3081 via a 9600-baud RS-232 port and a Device Attachment Control Unit 7170 high-speed communications line.

They were photographed with a Canon A-1 camera equipped with a Vivitar 70-150 zoom lens, with Fujicolor or Fujichrome 100 film, at F9.5 and 1 second. The photos were provided courtesy of Homer Wilson Smith of Art Matrix, Ithaca, New York, whose stock of photos of the Mandelbrot set is remarkable for its complex beauty.

1. H. W. Smith, Mandelbrot Sets and Julia Sets, Art Matrix Corp., PO Box 880, Ithaca, NY 14851-0880.

2. A. K. Dewdney, "Computer Recreations," Scientific American, August 1985, p. 16.

3. B. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Co., New York, 1983.

4. François Robert, Discrete Iterations, translated by Jon Rokne, Springer-Verlag, Berlin, in press.
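
As mentioned in the sidebar, the iteration behind the cover images is easy to sketch; here is a rough Python rendering (the escape bound of 2 and the iteration limit are the standard choices, and the plotting is a coarse text stand-in for the real graphics):

    def in_mandelbrot(c, limit=100):
        z = 0j
        for _ in range(limit):
            z = z * z + c        # the output becomes the next input
            if abs(z) > 2:       # the orbit escaped: c is outside the set
                return False
        return True              # still bounded after `limit` rounds

    for im in range(10, -11, -2):          # a coarse text rendering
        print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
                      for re in range(-40, 21)))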



The various approaches can be classified as top-down, bottom-up, and middle-out.

Top-down design methodology. This approach starts by defining, specifying, refining, and validating the requirements of the application; devising methods to collect the necessary knowledge and meta-knowledge; choosing an appropriate representation for the knowledge and meta-knowledge; studying problems with the given representation scheme that are related to the control of correct and efficient execution; identifying functional requirements of components; and mapping these components, subject to technological and cost constraints, into software and micro-level, macro-level, and system-level architectures.

The process is iterative. For example, the representation of knowledge and the language features may be changed or restricted if it is discovered that the functional requirements cannot be mapped into a desirable and realizable system with the given technology and within the set cost. In some projects, the requirements may be very loose and span many different applications. As a result, the languages and knowledge-representation schemes used may be oriented toward general-purpose usage. The Japanese Fifth-Generation Computer System project is an attempt to use a top-down methodology to design an integrated, user-oriented, intelligent system for a number of applications.

Bottom-up design methodology. In this approach, the designers first design the computer system; the design is based on a computational model (such as dataflow, reduction, or control-flow) and on technological and cost limitations. Possible extensions of existing knowledge-representation schemes, as well as languages developed for AI applications, are then implemented on the given system. Finally, AI applications are coded by means of the representation schemes and languages provided. This is probably the most popular approach to applying a general-purpose or existing system to AI processing. However, it may result in inefficient processing, and the available representation schemes and languages may not satisfy the application requirements.

Middle-out design methodology. This approach is a short-cut to the top-down design methodology. It starts from a proven and well-established knowledge-representation scheme or AI language (most likely a scheme or language developed for sequential processing) and develops both the architecture and the modifications to the language or representation scheme that are necessary to adapt to the application requirements.

This is the approach taken by many designers in designing special-purpose computers for AI processing. It may be subdivided into top-first and bottom-first approaches, although both may be iterative.

In a top-first middle-out methodology, studies are first performed to modify the language and representation scheme to make them more adaptable to the architecture and computational model. Primitives may be added to the language to facilitate parallel processing. Useful features from several languages may be combined. The design of the architecture follows.

In the bottom-first middle-out methodology, hardware support for the overhead-intensive operations enables the chosen language or representation scheme to be mapped directly into architecture and hardware. Applications are implemented by means of the language and representation scheme provided. Lisp computers are examples of computers designed with the bottom-first middle-out methodology.

The future

To support efficient processing of AI applications, research must be done in developing better AI algorithms, better AI software-management methods, and better AI architectures.

The development of better algorithms could lead to significant improvement in performance. Many AI algorithms are heuristic in nature, and upper bounds on the performance attainable in solving AI problems have not been established as they have been for traditional combinatorial problems. As a consequence, the use of better heuristic information, based on commonsense or high-level meta-knowledge and on better representation of the knowledge, could lead to far greater improvement in performance than an improved computer architecture could provide. Automatic learning methods that help designers acquire and manage new knowledge in a systematic manner are very important.

Better AI software-management methods are essential in developing more efficient and reliable software for AI processing. AI systems are usually open systems and cannot be defined on the basis of a closed-world model. The language must be able to support the acquisition of new knowledge and the validation of existing knowledge. Probabilistic reasoning and fuzzy knowledge may have to be supported. The verification of the correctness of AI software is especially difficult owing to the imprecise knowledge involved and the random way knowledge is managed in a number of declarative languages and representation schemes. Traditional software-engineering design methodologies must be extended to accommodate the characteristics of AI applications.

The role of parallel processing and innovative computer architectures lies in improving the processing time needed to solve a given AI problem. It is important to realize that parallel processing and better computer architectures cannot overcome the exponential complexity of exhaustive enumeration (unless an exponential amount of hardware is used) and are not very useful in extending the solvable problem space. It is unlikely that a problem too large to be solved today by a single computer in a reasonable amount of time can be solved by parallel processing alone, even if a linear speedup can be achieved. The decision to implement a given algorithm in hardware depends on the complexity of the problem the algorithm solves and on the frequency of the problem's occurrence. Problems of low complexity can be solved by sequential processing, or in hardware if they are frequently encountered; problems of moderate complexity should be solved by parallel processing; and problems of high complexity should be solved by a combination of heuristics and parallel processing.
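
A back-of-the-envelope check of this claim, with invented numbers: for an exhaustive search over a tree of branching factor 2, even a thousand-fold linear speedup extends the searchable depth by only log2(1000), about 10 levels:

    import math

    nodes_per_second = 10**6             # assumed speed of one processor
    seconds = 3600                       # an assumed one-hour time budget

    def solvable_depth(processors):
        # deepest complete binary tree searchable exhaustively in budget
        return math.log2(processors * nodes_per_second * seconds)

    print(round(solvable_depth(1)))      # about 32 levels
    print(round(solvable_depth(1000)))   # about 42 levels: 1000x the
                                         # hardware buys ~10 more levels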

In many AI systems being developed today, the tasks and operations implemented in hardware are those that are frequently executed and have polynomial complexity. These tasks and operations are identified by means of the languages or the knowledge-representation schemes supported. The architectural concepts and parallel-processing schemes applied may be either well-known conventional concepts or new concepts for nondeterministic and dynamic processing. The role of the computer architect lies in choosing a good representation, recognizing tasks for maintaining and learning meta-knowledge that are overhead-intensive, identifying primitive operations in the languages and knowledge-representation schemes, and supporting these tasks and operations in hardware and software.



In this issue

This special issue of Computer is a collection of articles describing a number of projects in this active area called AI computers.

The first article, "Computer Architectures for Artificial Intelligence Processing," by K. Hwang, J. Ghosh, and R. Chowkwanyun, is a survey of computers for AI processing. Existing efforts are classified as multiprocessors supporting MIMD operations, multicomputers supporting multiple SISD processing, and multipurpose computers operating in an SIMD, multiple-SIMD, or MIMD fashion. The architecture, languages, execution paradigms, and principal applications of various AI computers are summarized.

The second article, "Software Development Support for AI Programs," by C. V. Ramamoorthy, S. Shekhar, and V. Garg, presents the problems faced in designing a software-development environment that supports all phases of the software-development cycle of an AI program: requirement specification, design, implementation, testing, and maintenance. The evolution of support for the development of AI programs is described with respect to discrete tools, toolboxes, life-cycle support, knowledge-based tools, and intelligent life-cycle support environments.

The third article, "Symbolics Architecture," by D. A. Moon, details the design philosophy of and trade-offs in the Symbolics Lisp computers. Three levels of the architecture are discussed: system architecture, instruction architecture, and processor architecture.

The next two articles discuss systems for the support of object-oriented programming.

The fourth article, "The Architecture of FAIM-1," by J. M. Anderson, W. S. Coates, A. L. Davis, R. W. Hon, I. N. Robinson, S. V. Robison, and K. S. Stevens, presents the design of FAIM-1, a concurrent, general-purpose accelerator for parallel AI symbolic computations. The OIL language supported by FAIM-1 has object-oriented, logic-programming, and procedural-programming features.

The fifth article, "What Price Smalltalk?" by D. Ungar and D. Patterson, discusses the design of a Reduced Instruction Set Computer for Smalltalk-80. It presents the requirements of the Smalltalk-80 programming environment and the valuable lessons learned when clever ideas implemented in hardware did not significantly improve overall performance.

The sixth article, "Initial Performance of the DADO2 Prototype," by S. J. Stolfo, presents the design trade-offs of, improvements achieved by, and measured performance of DADO2, a parallel-processing system for evaluating production rules and other almost-decomposable search problems.

The last two articles are concerned with architectures for the support of knowledge-representation schemes.

The seventh article, "Applications of the Connection Machine," by D. L. Waltz, discusses the architecture and applications of the Connection Machine, a system with massive parallelism. A number of applications, including document retrieval, memory-based reasoning, and natural-language processing, are presented.

The eighth article, "Connectionist Architectures for Artificial Intelligence," by S. E. Fahlman and G. E. Hinton, presents the designs of both NETL, a marker-passing system implementing semantic networks, and value-passing systems (which the authors exemplify by the Hopfield and Boltzmann iterative networks) for constraint satisfaction.

Owing to page limitations, we were unable to include two other articles originally accepted for this special issue. One, "The Architecture of Lisp Machines" by A. R. Pleszkun and M. J. Thazhuthaveetil, enumerates the runtime requirements of a Lisp system and identifies architectural requirements that must be met for good machine performance. The solutions to these requirements in a number of Lisp machines are also discussed. The other, "Computer Architecture for the Processing of a Surrogate File to a Very Large Data/Knowledge Base," by P. B. Berra, S. M. Chung, and N. I. Hachem, shows the design and performance of a proposed back-end system to support the use of surrogate files in data/knowledge bases. These articles will appear in an upcoming issue.

Despite the large number of articles in this special issue, it was not possible to cover many major projects in this area. I realize that there are many researchers, too numerous to mention individually, who have made notable contributions to the development of this area of research, and I apologize for any inadvertent omissions. Two collections of articles1,2 that I have compiled also provide reference sources for some of the published articles in this exciting area.

Acknowledgments

I would like to thank the authors and reviewers for helping to make this special issue a reality. Without them, there would be no special issue. The editor-in-chief of Computer, Mike Mulder, and Computer Editorial Board member Franklin Kuo helped me through the formalities of publication. I am also grateful to G. J. Li for his comments and to K. Lindquist for her secretarial support. I would like to acknowledge the support of National Science Foundation Grant DMC 85-19649 for this project.

References

1. B. W. Wah and G. J. Li, Computers for Artificial Intelligence Applications, IEEE Computer Society Press, Washington, DC, 1986.

2. B. W. Wah and G. J. Li, "A Survey of Special-Purpose Computer Architectures for AI," ACM SIGART Newsletter, No. 66, Apr. 1986, pp. 28-46.

Benjamin W. Wah is an associate professor in the Dept. of Electrical and Computer Engineering and in the Coordinated Science Laboratory of the University of Illinois at Urbana-Champaign. He was on the faculty of the School of Electrical Engineering at Purdue University between 1979 and 1985.

His current research interests include parallel computer architectures, artificial intelligence, distributed databases, computer networks, and theory of algorithms.

Wah was an IEEE-CS Distinguished Visitor between 1983 and 1986. He is an editor of the IEEE Transactions on Software Engineering and the Journal of Parallel and Distributed Computing. He is the program chairman of the 1987 IEEE International Conference on Data Engineering.

He has authored Data Management in Distributed Systems (University Microfilm International Research Press, 1982) and has coedited a Tutorial on Computers for AI Applications (IEEE-CS Press, 1986). He received a PhD in computer science from the University of California at Berkeley in 1979.

Readers may write to Benjamin Wah about this special issue at the University of Illinois at Urbana-Champaign, Coordinated Science Laboratory, 1101 W. Springfield Ave., Urbana, IL 61801.
