Syntactic Processing
Martin Kay
Xerox Palo Alto Research Center

In computational linguistics, which began in the 1950's with machine translation, systems that are based mainly on the lexicon have a longer tradition than anything else---for these purposes, twenty-five years must be allowed to count as a tradition. The bulk of many of the early translation systems was made up by a dictionary whose entries consisted of arbitrary instructions in machine language. In the early 60's, computational linguists---at least those with theoretical pretensions---abandoned this way of doing business for at least three related reasons. First, systems containing large amounts of unrestricted machine code fly in the face of all principles of good programming practice. The syntax of the language in which linguistic facts are stated is so remote from their semantics that the opportunities for error are very great, and no assumptions can be made about the effects on the system of invoking the code associated with any given word. The systems became virtually unmaintainable and eventually fell under their own weight. Furthermore, these failings were magnified as soon as the attempt was made to impose more structure on the overall system. A general backtracking scheme, for example, could all too easily be thrown into complete disarray by an instruction in a single dictionary entry that affected the control stack. Second, the power of general, and particularly nondeterministic, algorithms in syntactic analysis came to be appreciated, if not overappreciated. Suddenly, it was no longer necessary to seek local criteria by which to ensure the correctness of individual decisions made by the program, provided they were covered by more global criteria. Separation of program and linguistic data became an overriding principle and, since it was most readily applied to syntactic rules, these became the main focus of attention. The third, and doubtless the most important, reason for the change was that syntactic theories in which a grammar was seen as consisting of a set of rules, preferably including transformational rules, captured the imagination of the most influential noncomputational linguists, and computational linguists followed suit if only to maintain theoretical respectability. In short, systems with small sets of rules in a constrained formalism and simple lexical entries apparently made for simpler, cleaner, and more powerful programs while setting the whole enterprise on a sounder theoretical footing.

The trend is now in the opposite direction. There has been a shift of emphasis away from highly structured systems of complex rules as the principal repository of information about the syntax of a language towards a view in which the responsibility is distributed among the lexicon, semantic parts of the linguistic description, and a cognitive or strategic component. Concomitantly, interest has shifted from algorithms for syntactic analysis and generation, in which the control structure and the exact sequence of events are paramount, to systems in which a heavier burden is carried by the data structure and in which the order of events is a matter of strategy. This new trend is a common thread running through several of the papers in this section. Various techniques for syntactic analysis, notably those based on some form of Augmented Transition Network (ATN), represent grammatical facts in terms of executable machine code.
The dangers to which this exposed the earlier systems are avoided by insisting that this code be compiled from statements in a formalism that allows only for linguistically motivated operations on carefully controlled parts of certain data structures. The value of nondeterministic procedures is undiminished, but it has become clear that it does not rest on complex control structures and a rigidly determined sequence of events. In discussing the syntactic processors that we have developed, for example, Ron Kaplan and I no longer find it useful to talk in terms of a parsing algorithm. There are two central data structures, a chart and an agenda. When additions to the chart give rise to certain kinds of configurations in which some element contains executable code, a task is created and placed on the agenda. Tasks are removed from the agenda and executed in an order determined by strategic considerations which constitute part of the linguistic theory. Strategy can determine only the order in which alternative analyses are produced. Many traditional distinctions, such as that between top-down and bottom-up processing, no longer apply to the procedure as a whole but only to particular strategies or their parts.

This looser organization of programs for syntactic processing came, at least in part, from a generally felt need to break down the boundaries that had traditionally separated morphological, syntactic, and semantic processes. Research directed towards speech understanding systems was quite unable to respect these boundaries because, in the face of uncertain data, local moves in the analysis on one level required confirmation from other levels, so that a common data structure for all levels of analysis and a schedule that could change continually were of the essence. Furthermore, there was a movement from within the artificial-intelligence community to eliminate the boundaries because, from that perspective, they lacked sufficient theoretical justification. In speech research in particular, and artificial intelligence in general, the lexicon took on an important position if only because it is there that the units of meaning reside. Recent proposals in linguistic theory involve a larger role for the lexicon. Bresnan (1978) has argued persuasively that the full mechanism of transformational rules can, and should, be dispensed with except in cases of unbounded movement such as relativization and topicalization. The remaining members of the familiar list of transformations can be handled by weaker devices in the lexicon and, since they all turn out to be lexically governed, this is the appropriate place to state the information. Against this background, the papers that follow, different though they are in many ways, constitute a fairly coherent set.
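Before turning to the individual papers, the chart-and-agenda organization described above can be pictured with a small sketch. Everything here---the edge representation, the trigger protocol, and the priority-based scheduling---is an illustrative assumption rather than a description of Kay and Kaplan's processors; the point is only that the data structures carry the burden of the computation and that the order of events is a separable strategic choice.

```python
import heapq
import itertools
from collections import namedtuple

# A chart edge spans part of the input; a task is created whenever a new
# edge gives rise to a configuration that some rule or lexical entry can
# act on.  Edge format, trigger protocol, and priority function are all
# hypothetical stand-ins, not Kay and Kaplan's actual definitions.
Edge = namedtuple("Edge", "start end label")

class ChartProcessor:
    def __init__(self, strategy, triggers):
        self.chart = set()        # the chart: a growing set of edges
        self.agenda = []          # the agenda: a priority queue of tasks
        self.strategy = strategy  # strategic component: orders the tasks
        self.triggers = triggers  # rules/lexical entries that create tasks
        self._tie = itertools.count()

    def add_edge(self, edge):
        """Record an edge and schedule one task per triggered action."""
        if edge in self.chart:
            return
        self.chart.add(edge)
        for action in self.triggers(edge, self.chart):
            heapq.heappush(self.agenda,
                           (self.strategy(edge, action), next(self._tie),
                            action, edge))

    def run(self):
        """Run tasks in strategy order until the agenda is empty.
        The strategy changes the order in which analyses appear,
        never which analyses are found."""
        while self.agenda:
            _, _, action, edge = heapq.heappop(self.agenda)
            for new_edge in action(edge, self.chart):
                self.add_edge(new_edge)
        return self.chart
```

A top-down or a bottom-up flavour then corresponds to nothing more than a different strategy function, which is the sense in which such distinctions apply to particular strategies rather than to the procedure as a whole.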
Carbonell comes from an artificial-intelligence tradition and is generally concerned with the meanings of words and the ways in which they are combined to give the meanings of phrases. He explores ways in which this process can be made to reflect back on itself to fill gaps in the lexicon by appropriate analysis of the context. At its base, the method is not far from similar work in syntax: a missing element is treated as though it had whatever properties allow a coherent analysis of the larger unit---say a sentence, or paragraph---in which it is embedded. These properties are then entered against it in the lexicon for future use. The problem, which is faced in this paper, is that the possibility that the lexicon is deficient must be faced in respect of all words because, even when there is an entry in the lexicon, it may not supply the reading required in the case at hand. Small, like Carbonell, is concerned with the meanings of words, and he is led to a view of words as active agents. The main role of the rest of the system is to act as moderator. Kwasny and Sondheimer have a concern close to Carbonell's: when problems arise in analysis, they look for deficiencies in the text rather than in the lexicon and the rules. It is no indictment of either paper that they provide no way of distinguishing the cases, for this is clearly a separate enterprise. Kwasny and Sondheimer propose progressively weakening the requirements that their analysis system makes of a segment of text so that, if it does not accord with the best principles of composition, an analysis can still be found by taking a less demanding view of it. Such a technique clearly rests on a regime in which the scheduling of events is relatively free and the control structure relatively weak. Shapiro shows how a strong data structure and a weak control structure make it possible to extend the ATN beyond the analysis of one-dimensional strings to semantic networks. The result is a total system with remarkable consistency in the methods applied at all levels and, presumably, corresponding simplicity and clarity in the architecture of the system as a whole. Allen is one of the foremost contributors to research on speech understanding, and speech processing in general. He stresses the need for strongly interacting components at different levels of analysis and, to that extent, argues for the kind of data-directed methods I have tried to characterize. At first reading, the remaining paper appears least willing to lie in my Procrustean bed, for it appears to be concerned with the finer points of algorithmic design and, to an extent, this is true. But the two approaches to syntactic analysis that are compared turn out to be, in my terms, algorithmically weak. The most fundamental issues that are being discussed therefore turn out to concern what I have called the strategic component of linguistic theory, that is, the rules according to which atomic tasks in the analysis process are scheduled.

Reference

Bresnan, Joan (1978) "A Realistic Transformational Grammar" in Halle, Bresnan and Miller (eds.), Linguistic Theory and Psychological Reality, The MIT Press.
Semantics of Conceptual Graphs
John F. Sowa
IBM Systems Research Institute
205 East 42nd Street, New York, NY 10017

ABSTRACT: Conceptual graphs are both a language for representing knowledge and patterns for constructing models. They form models in the AI sense of structures that approximate some actual or possible system in the real world. They also form models in the logical sense of structures for which some set of axioms are true. When combined with recent developments in nonstandard logic and semantics, conceptual graphs can form a bridge between heuristic techniques of AI and formal techniques of model theory.

1. Surface Models

Semantic networks are often used in AI for representing meaning. But as Woods (1975) and McDermott (1976) observed, the semantic networks themselves have no well-defined semantics. Standard predicate calculus does have a precisely defined, model-theoretic semantics; it is adequate for describing mathematical theories with a closed set of axioms. But the real world is messy, incompletely explored, and full of unexpected surprises. Furthermore, the infinite sets commonly used in logic are intractable both for computers and for the human brain. To develop a more realistic semantics, Hintikka (1973) proposed surface models as incomplete, but extendible, finite constructions:

Usually, models are thought of as being given through a specification of a number of properties and relations defined on the domain. If the domain is infinite, this specification (as well as many operations with such entities) may require non-trivial set-theoretical assumptions. The process is thus often non-finitistic. It is doubtful whether we can realistically expect such structures to be somehow actually involved in our understanding of a sentence or in our contemplation of its meaning, notwithstanding the fact that this meaning is too often thought of as being determined by the class of possible worlds in which the sentence in question is true. It seems to me much likelier that what is involved in one's actual understanding of a sentence S is a mental anticipation of what can happen in one's step-by-step investigation of a world in which S is true. (p. 129)

The first stage of constructing a surface model begins with the entities occurring in a sentence or story. During the construction, new facts may be asserted that block certain extensions or facilitate others. A standard model is the limit of a surface model that has been extended infinitely deep, but such infinite processes are not a normal part of understanding.

This paper adapts Hintikka's surface models to the formalism of conceptual graphs (Sowa 1976, 1978). Conceptual graphs serve two purposes: like other forms of semantic networks, they can be used as a canonical representation of meaning in natural language; but they can also be used as building blocks for constructing abstract structures that serve as models in the model-theoretic sense.
• Understanding a sentence begins with a translation of that sentence into a conceptual graph.
• During the translation, that graph may be joined to frame-like (Minsky 1975) or script-like (Schank & Abelson 1977) graphs that help resolve ambiguities and incorporate background information.
• The resulting graph is a nucleus for constructing models of possible worlds in which the sentence is true.
• Laws of the world behave like demons or triggers that monitor the models and block illegal extensions.
• If a surface model could be extended infinitely deep, the result would be a complete standard model.

This approach leads to an infinite sequence of algorithms ranging from plausible inference to exact deduction; they are analogous to the varying levels of search in game playing programs. Level 0 would simply translate a sentence into a conceptual graph, but do no inference. Level 1 would do frame-like plausible inferences in joining other background graphs. Level 2 would check constraints by testing the model against the laws. Level 3 would join more background graphs. Level 4 would check further constraints, and so on. If the constraints at level n+1 are violated, the system would have to backtrack and undo joins at level n. If at some level, all possible extensions are blocked by violations of the laws, then that means the original sentence (or story) was inconsistent with the laws. If the surface model is infinitely extendible, then the original sentence or story was consistent.

Exact inference techniques may let the surface models grow indefinitely; but for many applications, they are as impractical as letting a chess playing program search the entire game tree. Plausible inferences with varying degrees of confidence are possible by stopping the surface models at different levels of extension. For story understanding, the initial surface model would be derived completely from the input story. For consistency checks in updating a data base, the initial model would be derived by joining new information to the pre-existing data base. For question-answering, a query graph would be joined to the data base; the depth of search permitted in extending the join would determine the limits of complexity of the questions that are answerable. As a result of this theory, algorithms for plausible and exact inference can be compared within the same framework; it is then possible to make informed trade-offs of speed vs. consistency in data base updates or speed vs. completeness in question answering.

2. Conceptual Graphs

The following conceptual graph shows the concepts and relationships in the sentence "Mary hit the piggy bank with a hammer." The boxes are concepts and the circles are conceptual relations. Inside each box or circle is a type label that designates the type of concept or relation. The conceptual relations labeled AGNT, INST, and PTNT represent the linguistic cases agent, instrument, and patient of case grammar.

[Figure: conceptual graph for the sentence, with concept boxes PERSON: Mary, HIT, HAMMER, and PIGGY-BANK: i22103 linked through the relations AGNT, INST, and PTNT.]

Conceptual graphs are a kind of semantic network. See Findler (1979) for surveys of a variety of such networks that have been used in AI. The diagram above illustrates some features of the conceptual graph notation:
• Some concepts are generic. They have only a type label inside the box, e.g. HIT or HAMMER.
• Other concepts are individual. They have a colon after the type label, followed by a name (Mary) or a unique identifier called an individual marker (i22103).
To keep the diagram from looking overly busy, the hierarchy of types and subtypes is not drawn explicitly, but is determined by a separate partial ordering of type labels. The type labels are used by the formation rules to enforce selection constraints and to support the inheritance of properties from a supertype to a subtype. For convenience, the diagram could be linearized by using square brackets for concepts and parentheses for conceptual relations:
[PERSON: Mary]->(AGNT)->[HIT: c1]<-(INST)<-[HAMMER]
[HIT: c1]<-(PTNT)<-[PIGGY-BANK: i22103]
Linearizing the diagram requires a coreference index, c1, on the generic concept HIT. The index shows that the two occurrences designate the same act of hitting. If HIT had been an individual concept, its name or individual marker would be sufficient to indicate the same act. Besides the features illustrated in the diagram, the theory of conceptual graphs includes the following:
• For any particular domain of discourse, a specially designated set of conceptual graphs called the canon,
• Four canonical formation rules for deriving new canonical graphs from any given canon,
• A method for defining new concept types: some canonical graph is specified as the differentia and a concept in that graph is designated the genus of the new type,
• A method for defining new types of conceptual relations: some canonical graph is specified as the relator and one or more concepts in that graph are specified as parameters,
• A method for defining composite entities as structures having other entities as parts,
• Optional quantifiers on generic concepts,
• Scope of quantifiers specified either by embedding them inside type definitions or by linking them with functional dependency arcs,
• Procedural attachments associated with the functional dependency arcs,
• Control marks that determine when attached procedures should be invoked.
These features have been described in the earlier papers; for completeness, the appendix recapitulates the axioms and definitions that are explicitly used in this paper.

Heidorn's (1972, 1975) Natural Language Processor (NLP) is being used to implement the theory of conceptual graphs. The NLP system processes two kinds of Augmented Phrase Structure rules: decoding rules parse language inputs and create graphs that represent their meaning, and encoding rules scan the graphs to generate language output. Since the NLP structures are very similar to conceptual graphs, much of the implementation amounts to identifying some feature or combination of features in NLP for each construct in conceptual graphs. Constructs that would be difficult or inefficient to implement directly in NLP rules can be supported by LISP functions. The inference algorithms in this paper, however, have not yet been implemented.

3. Logical Connectives

Canonical formation rules enforce the selection constraints in linguistics: they do not guarantee that all derived graphs are true, but they rule out semantic anomalies. In terms of graph grammars, the canonical formation rules are context-free. This section defines logical operations that are context-sensitive. They enforce tighter constraints on graph derivations, but they require more complex pattern matching. Formation rules and logical operations are complementary mechanisms for building models of possible worlds and checking their consistency.

Sowa (1976) discussed two ways of handling logical operators in conceptual graphs: the abstract approach, which treats them as functions of truth values, and the direct approach, which treats implications, conjunctions, disjunctions, and negations as operations for building, splitting, and discarding conceptual graphs. That paper, however, merely mentioned the approach; this paper develops a notation adapted from Gentzen's sequents (1934), but with an interpretation based on Belnap's conditional assertions (1973) and with computational techniques similar to Hendrix's partitioned semantic networks (1975, 1979).
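The graph notation summarized above can be mirrored in a small data structure. The class names below and the way referents are stored are assumptions made only for illustration; what comes from the paper is the generic/individual distinction, the numbered arcs of a relation, and the example sentence.

```python
from dataclasses import dataclass, field
from typing import List

GENERIC = "@"   # Sowa's symbol for an unspecified ("any") referent

@dataclass
class Concept:
    type_label: str                   # e.g. "PERSON", "HIT", "HAMMER"
    referent: str = GENERIC           # a name, an individual marker, or "@"

    def is_individual(self) -> bool:
        return self.referent != GENERIC

@dataclass
class Relation:
    type_label: str                                     # e.g. "AGNT", "INST"
    arcs: List[Concept] = field(default_factory=list)   # arc 1, arc 2, ...

# "Mary hit the piggy bank with a hammer" (hypothetical construction):
mary   = Concept("PERSON", "Mary")
hit    = Concept("HIT")                     # generic: a particular but unnamed act
hammer = Concept("HAMMER")
bank   = Concept("PIGGY-BANK", "i22103")    # individual marker as referent

# Arc numbering follows the linearized example above (arrow into the
# relation = arc 1, arrow out = arc 2); the ordering here is illustrative.
graph = [
    Relation("AGNT", [mary, hit]),
    Relation("INST", [hammer, hit]),
    Relation("PTNT", [bank, hit]),
]
```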
Deliyanni and Kowalski (1979) used a similar notation for logic in semantic networks, but with the arrows reversed. Definition: A seq~nt is a collection of conceptual graphs divided into two sets, called the conditions ut ..... Un and the anergons vt,...,v,,, It is written Ul,...,Un "* vl,...,Vm. Sever- al special cases are distinguished: • A simple assertion has no conditions and only one assertion: -.. v. • A disjunction has no conditions and two or more assertions: ..m. PI,...,Vm. • A simple denial has only one condition and no assertions: u -.... • A compound denial has two or more conditions and no assertions: ut,...,un -... • A conditianal assertion has one or more conditions and one or more assertions: ut,...,un .... Vl....,v~ • An empty clause has no conditions or assertions: --.,. • A Horn clo,ue has at most one assertion; i.e. it is el- ther an empty clause, a denial, a simple assertion, or a conditional assertion of the form ut ..... ,% --4, v. For any concept a in an assertion vi, there may be a con- cept b in a condition u/ that is declared to be coreferent with a. Informally, a sequent states that if all of the conditions are true, then at least one of the assertions must be true. A se. quent with no conditions is an unconditional assertion; if there 40 are two or more assertions, it states that one must be true, hut it doesn't say which. Multiple asserth)ns are necessary for generality, but in deductions, they may cause a model to split into models of multiple altei'native worlds. A sequent with no assertions denies that the combination of conditions can ever occur. The empty clause is an unconditional denial; it is self- contradictory. Horn clauses are special cases for which deduc- tions are simplified: they have no disjunctions that cause models of the world to split into multiple alternatives. Definition: Let C be a collection of canonical graphs, and let s be the sequent ut ..... Un -', vl ..... vm. • If every condition graph is covered by some graph in C, then the conditions are said to be salisfied. • If some condition graph is not covered by any graph in C, then the sequent s is said to be inapplicable to C. If n---0 (there are no conditions), then the conditions are trivially satisfied. A sequent is like a conditional assertion in Belnap's sense: When its conditions are not satisfied, it asserts nothing. But when they are satisfied, the assertions must be added to the current context. The next axiom states how they are added. Axiom: Let C be a collection of canonical graphs, and let s be the sequent ul ..... u, -,- v~ ..... v,,,. If the conditions of s are satisfied by C, then s may be applied to C as follows: • If m,=l) (a denial or the empty clause), the collection C is said to be blocked. • If m=l (a Horn clause), a copy of each graph ui is joined to some graph in C by a covering join. Then the assertion v is added to the resulting collection C'. • If m>2, a copy of each graph ui is joined to some graph in C by a covering join. Then all graphs in the resulting collection C' are copied to make m disjoint c~)llections identical to C'. Finally, for each j from I to rn, whe assertion v I is added to the j-th copy of C'. After an assertion v is added to one of the collections C', each concept in v that was declared to be coreferent with some concept b in one of the conditions ui is joined to that concept to which b was joined. When a collection of graphs is inconsistent with a sequent, they are blocked by it. 
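Stated procedurally, the axiom above distinguishes three cases: a denial blocks the context, a Horn clause extends it, and a multi-assertion sequent splits it. The sketch below is only a paraphrase of those cases; the `covers` and `join` helpers are assumed stand-ins for Sowa's covering joins, not implementations of them.

```python
def apply_sequent(conditions, assertions, context, covers, join):
    """Apply the sequent  conditions -> assertions  to one context.

    Returns a list of successor contexts:
      []            if the context is blocked (a denial applied),
      [context']    for a Horn clause (at most one assertion),
      [c1, ..., cm] if the context splits on a disjunction.
    covers(graph, context) and join(graph, context) are assumed helpers.
    """
    # An inapplicable sequent asserts nothing: leave the context unchanged.
    if not all(covers(u, context) for u in conditions):
        return [context]

    # m = 0: a denial (or the empty clause) blocks the context outright.
    if not assertions:
        return []

    # Join a copy of each condition graph into the context by a covering join.
    base = context
    for u in conditions:
        base = join(u, base)

    # m = 1: extend the single context; m >= 2: split into m alternatives.
    return [join(v, base) for v in assertions]
```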
If the sequent represents a fundamen- tal law about the world, then the collection represents an impossible situation. When there is only one assertion in an applicable sequent, the collection is extended. But when there are two or more assertions, the collection splits into as many successors as there are assertions; this splitting is typical of algorithms for dealing with disjunctions. The rules for apply- ing sequents are based on Beth's semantic tableaux f1955), but the computational techniques are similar to typical AI methods of production rules, demons, triggers, and monitors. Deliyanni and Kowalski (1979) relate their algorithms for logic in semantic networks to the resolution principle. This relationship is natural because a sequent whose conditions and assertions are all atoms is equivalent to the standard clause form for resolution. But since the sequents defined in this paper may be arbitrary conceptual graphs, they can package a much larger amount of information in each graph than the low level atoms of ordinary resolution. As a result, many fewer steps may be needed to answer a question or do plausible inferences. 4. Laws, Facts, and Possible Worlds Infinite families of p~ssible worlds are computationally intractable, hut Dunn (1973) showed that they are not needed for the semantics of modal logic. He considered each possible world w to be characterized by two sets of propositions: laws L and facts F. Every law is also a fact, but some facts are merely contingently true and are not considered laws. A prop- osition p is necessarily true in w if it follows from the laws of w, and it is possible in w if it is consistent with the laws of w. Dunn proved that semantics in terms of laws and facts is equivalent to the possible worlds semantics. Dunn's approach to modal logic can be combined with Hintikka's surface models and AI methods for handling de- faults. Instead of dealing with an infinite set of possible worlds, the system can construct finite, but extendible surface models. The basis for the surface models is a canon that contains the blueprints for assembling models and a set of laws that must be true for each model. The laws impose obligatory constraints on the models, and the canon contains common background information that serves as a heuristic for extending the models. An initial surface model would start as a canonical graph or collection of graphs that represent a given set of facts in a sentence or story. Consider the story, Mary hit the piggy bank with a hammer. She wanted to go to the movies with Janet. but she wouldn't get her allowance until Thursday. And today was only Tuesday. The first sentence would be translated to a conceptual graph like the one in Section 2. Each of the following sentences would be translated into other conceptual graphs and joined to the original graph. But the story as stated is not understanda- ble without a lot of background information: piggy banks normally contain money; piggy banks are usually made of pottery that is easily broken; going to the movies requires money; an allowance is money; and Tuesday precedes Thurs- day. Charniak (1972) handled such stories with demons that encapsulate knowledge: demons normally lie dormant, but when their associated patterns occur in a story, they wake up and apply their piece of knowledge to the process of under- standing. Similar techniques are embodied in production sys- tems, languages like PLANNER (Hewitt 1972), and knowl- edge representation systems like KRL (Bobrow & Winograd 1977). 
But the trouble with demons is that they are uncon- strained: anything can happen when a demon wakes up, no theorems are possible about what a collection of demons can or cannot do, and there is no way of relating plausible reason- ing with demons to any of 'the techniques of standard or non- standard logic. With conceptual graphs, the computational overhead is about the same as with related AI techniques, but the advan- tage is that the methods can be analyzed by the vast body of techniques that have been developed in logic. The graph for "Mary hit the piggy-bank with a hammer" is a nucleus around which an infinite number of possible worlds can be built. Two individuals, Mary and rlcc~Y-a^NK:iZzloL are fixed, but the particular act of hitting, the hammer Mary used, and all other circumstances are undetermined. As the story continues, some other individuals may be named, graphs from the canon may be joined to add default information, and laws of the world in 41 the form of sequents may be triggered (like demons) to en- force constraints. The next definition introduces the notion of a world bas~ that provides the building material (a canon) and the laws (sequents) for such a family of possible worlds. Definition: A world basis has three components: a canon C, a finite set of sequents L called laws, and one or more finite collections of canonical graphs {Ct ..... Co} called contexts. No context C~ may be blocked by any law in L. A world basis is a collection of nuclei from which complete possible worlds may evolve. The contexts are like Hintikka's surface models: they are finite, but extendible. The graphs in the canon provide default or plausible information that can be joined to extend the contexts, and the laws are constraints on the kinds of extensions that are possible. When a law is violated, it blocks a context as a candidate for a possible world. A default, however, is optional; if con- tradicted, a default must be undone, and the context restored to the state before the default was applied. In the sample story, the next sentence might continue: "The piggy bank was made of bronze, and when Mary hit it, a genie appeared and gave her two tickets to Animal House." This continuation violates all the default assumptions; it would be unreasonable to assume it in advance, but once given, it forces the system to back up to a context before the defaults were applied and join the new information to it. Several practical issues arise: how much backtracking is necessary, how is the world basis used to develop possible worlds, and what criteria are used to decide when to stop the (possibly infinite) extensions. The next sec- tion suggests an answer. 5. Game T h ~ Se~md~ The distinction between optional defaults and obligatory laws is reminiscent of the AND-OR trees that often arise in AI, especially in game playing programs. In fact, Hintikka (1973, 1974) proposed a game theoretic semantics for testing the truth of a formula in terms of a model and for elaborating a surface model in which that formula is true. Hintikka's approach can be adapted to elaborating a world basis in much the same way that a chess playing program explores the game tree: • Each context represents a position in the game. • The canon defines [Sossible moves by the current player, • Conditional assertions are moves by the opponent. • Denials are checkmating moves by the opponent. • A given context is consistent with the laws if there exists a strategy for avoiding checkmate. 
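Before the formal game definition that follows, the analogy can be read procedurally as an alternating loop over contexts with a bounded look-ahead. The sketch below is deliberately simplified: only denials are checked on the Opponent's side, the move generator and the covering-join machinery are assumed helpers, and none of the names come from the paper.

```python
def play(contexts, canon_moves, laws, covers, depth):
    """Elaborate a world basis to a fixed look-ahead depth.

    contexts    : list of finite collections of canonical graphs
    canon_moves : assumed function from a context to candidate extended
                  contexts (Player's joins licensed by the canon)
    laws        : sequents as (conditions, assertions) pairs (Opponent)
    covers      : assumed helper testing whether a graph is covered
    Returns the surviving contexts; an empty list means Player has lost,
    i.e. the initial information is inconsistent with the laws.
    """
    for _ in range(depth):
        next_round = []
        for ctx in contexts:
            # Player's move: extend the context using the canon (defaults);
            # if no join is possible, Player passes and keeps the context.
            for extended in canon_moves(ctx) or [ctx]:
                # Opponent's move: a satisfied denial checkmates the context.
                blocked = any(
                    not assertions and all(covers(u, extended) for u in conds)
                    for conds, assertions in laws)
                if not blocked:
                    next_round.append(extended)
        contexts = next_round
        if not contexts:
            break
    return contexts
```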
By following this suggestion, one can adapt the techniques developed for game playing programs to other kinds of reason- ing in AI. Definition: A game over a world basis W is defined by the following rules: • There are two participants named Player and Oppo- m~nt. • For each context in W, Player has the first move. • Player moves in context C either by joining two graphs in C or by selecting any graph in the canon of W that is joinable to some graph u in C and joining it maxi- really to u. If no joins are possible, Player passes. Then Opponent has the right to move in context C. • Opponent moves by checking whether any denials in W are satisfied by C. If so, context C is blocked and is deleted from W. If no denials are satisfied, Oppo- nent may apply any other sequent that is satisfied in C. If no sequent is satisfied, Opponent passes. Then Player has the right to move in context C. • If no contexts are left in W, Player loses. • If both Player and Opponent pass in succession, Player wins. Player wins this game by building a complete model that is consistent with the laws and with the initial information in the problem. But like playing a perfect game of chess, the cost of elaborating a complete model is prohibitive. Yet a computer can play chess as well as most people do by using heuristics to choose moves and terminating the search after a few levels. To develop systematic heuristics for choosing which graphs to join, Sown (1976) stated rules similar to Wilks' preference semantics ( 1975). The amount of computation required to play this game might be compared to chess: a typical middle game in chess has about 30 or 40 moves on each side, and chess playing programs can consistently beat beginners by searching only 3 levels deep; they can play good games by searching 5 levels. The number of moves in a world basis depends on the number of graphs in the canon, the number of laws in L, and the num- ber of ~aphs in each context. But for many common applica- tions, 30 or 40 moves is a reasonable estimate at any given level, and useful inferences are possible with just a shallow search. The scripts applied by Schank and Abelson (1977), for example, correspond to a game with only one level of look-ahead; a game with two levels would provide the plausible information of scripts together with a round of consistency checks to eliminate obvious blunders. By deciding how far to search the game tree, one can derive algorithm for plausible inference with varying levels of confidence. Rigorous deduction similar to model elimination (Loveland 1972) can be performed by starting with laws and a context that correspond to the negation of what is to be proved and showing that Opponent has a winning strategy. By similar transformations, methods of plausible and exact inference can be related as variations on a general method of reasoning. 6. Appendix: Summary of the Formalism This section summarizes axioms, definitions, and theorems about conCeptual graphs that are used in this paper. For a more complete discus- sion and for other features of the theory that are not used here, see the eartier articles by Sown (1976, 1978). Definition 1: A comcepm~ gmmp& is a finite, connected, bipartite graph with nodes of the first kind called concepu and nodes of the second kind called conceptual relatWn$. Definition 2: Every conceptual relation has one or more arc~, each of which must be attached to a concept. If the relation has n arcs. it is said to be n-adic, and its arcs are labeled I, 2 ..... n. 
The most common conceptual relations are dyadic (2-adic), but the definition mechanisms can create ones with any number of arcs. Although the formal defin/tion says that the arcs are numbered, for dyadic relations. arc I is drawn as an arrow pointin8 towards the circle, and arc 2 as an arrow point/aS away from the circle. 42 Axiom I: There is a set T of type labeLv and a function type. which maps concepts and conceptual relations into T. • If rypefa)=type(b), then a and b are said to be of the same tXpe. • Type labels are partially ordered: if (vpe(a)<_typefhL then a is said to be a subtype of b. • Type labels of concepts and conceptual relations arc disjoint, noncomparable subsets nf T: if a is a concept and • is a concep- tual relation, then a and r may never he of the same type, nor may one be a subtype of the other. Axiom 2: There is a set I=[il, i2, i3 .... } whose elements are called individual markers. The function referent applies to concepts: If a is a concept, then referentla) is either an individual marker in I or the symbol @, which may be read any. • When referentla) ~" l, then a is said to be an individual concept. • When referent(a)=@, then a is said to be a genertc concept. In diagrams, the referent is written after the type label, ~parated by a colon. A concept of a particular cat could be written as ICAT:=41331. A genetic concept, which would refer to any cat, could be written ICA'r:tiiH or simply [CATI. In data base systems, individual markers correspond to the surrogates (Codd 1979). which serve as unique internal identifiers for external entities. The symbol @ is Codd's notation for null or unknown values in a data base. Externally printable or speakable names are related to the internal surrogates by the next axiom. Axiom 3: There is a dyadic conceptual relation with type label NAME. If a relation of type NAME occurs in a conceptual graph, then the con- cept attached to arc I must be a subtype of WORD, and the concept attached to arc 2 must be a subtype of ENTITY. If the second concept is individual, then the first concept is called a name of that individual. The following graph states that the word "Mary" is the name of a particular person: ["Mary"]-.=.tNAME)-=.lPERSON:i30741. if there is only one person named Mary in the context, the graph could be abbreviated to just [PERSON:Mary], Axiom 4: The conformity •elation :: relates type labels in T to individual markers in I. If teT, tel. and t::i. then i is said to conform to t. • If t~gs and t::i. then s::i. • For any type t, t::@. • For any concept c. type(c)::referentfc). The conformity relation says that the individual for which the marker i is a surrogate is of type t. In previous papers, the terms permissible or applicable were used instead of conforms to. but the present term and the symbol :: have been adopted from ALGOL-68. Suppose the individual marker i273 is a surrogate for a beagle named Snoopy. Then BEAGLE::i273 is true. By extension, one may also write the name instead of the marker, as BEAGLE=Snoopy. By axiom 4, Snoopy also conforms to at] supertypes of BEAGLE. such as DOG::Snoopy, ANIMAL=Snoopy. or ENTITY::Snoopy. Definition 3: A star graph is a conceptual graph consisting of a single conceptual relation and the concepts attached to each of its arcs. (Two or more arcs of the conceptual relation may be attached to the same concept. ) Definition 4: Two concepts a and b are said to be joinable if both of the following properties are true: • They are of the same type: type(a)-typefb). 
• Either referent(a)=referent(b), referent(a)=.@, or referent(b)=.@. Two star graphs with conceptual relations r and s are said to be joinable if • and s have the same number of arcs, type(r),=rype(s), and for each i. the concept attached to arc i of r is joinable to the concept attached to arc i of s. Not all combinations of concepts and conceptual relations are mean- ingful. Yet to say that some graphs are meaningful and others are not is begging the question, because the purpose of conceptual graphs is to form the basis of a theory of meaning, To avoid prejudging the issue, the term canonical is used for those graphs derivable from a designated set called the canon. For any given domain of discourse, a canon is dcl'incd that rules out anomalous combinations. Definition 5: A canon has thrcc components: • A partially ordered ~et T of type labels. • A set I of individual marker~, with a conformily relation ::. • A finite set of conceptual graphs with type or c~Jnccl)lS and conceptual relations in T and wilh referents either let *~r markers in I. The number of possible canonical graphs may be infinite, but the canon contains a finite number from which all the others can be derived. With an appropriate canon, many undesirable graphs are ruled out as noncanonical, but the canonical graphs are not necessari!y true. T~) ensure that only truc graphs are derived from true graphs, the laws discussed in Section 4 eliminate incnnsistcnt combinations. Axiom 5: A conceptual graph is called canontrol eithcr if it is in the c:tnq)n or if it is derivable from canonical graphs by ()ne of the following canonic'a/formation •ules. I,et u and v be canonical graphs (u and v may be the same graph). • Copy: An exact copy of u is canonical. • Restrict: Let a be a concept in u, and let t be a type label where t<_typela) and t::referenrfa). Then the graph obtained by changing the type label of a to t and leaving •eferent(a) unchanged is can- onical. • Join on aconcept: Let a be aconcept in u, and baconcept in v If a and b are joinable, then the graph derived by the followin~ steps is canonical: First delete b from v; then attach to a all arcs of conceptual relations that had been attached to b. If re/'eremfa) e I, then referent(a) is unchanged; otherwise, referent(a) is re- placed by referent(b). • Join on a star: Let r be a conceptual relation in u. and x a con- ceptual relation in v. If the star graphs of r and s are joinable. then the graph derived by the following steps is canonical: First delete s and its arcs from v; then for each i. join the concept attached to arc i of • to the concept that had been attached to arc i of s. Restriction replaces a type label in a graph by the label of a subtype: this rule lets subtypes inherit the structures that apply to more general types. Join on a concept combines graphs that have concepts of the same type: one graph is overlaid on the other so that two concepts of the same type merge into a single concept; as a result, all the arcs that had been connected to either concept arc connected to the single merged concept. Join on a star merges a conceptual relation and all of its attached concepts in a single operation. Definition 6: Let v be a conceptual graph, let v, be a subgraph of v in which every conceptual relation has exactly the same arcs as in v. and let u be a copy of v, in which zero or more concepts may be restricted to subtypes. Then u is called a projection of v. and ¢, is called a projective ortgin of u in v. 
The main purpose of projections is to define the rule of join on a common projection, which is a generalization of the rules for joining on a concept or a star. Definition 7: If a conceptual graph u is a projection of both v and w. it is called a common projection of v and w, Theorem l: If u is a common projection of canonical graphs t, and w, then v and w may be joined on the common projection u to form a canonical graph by the following steps: • Let v' be a projective origin of u in v. and let w, be a projective origin of u in w. • Restrict each concept of v, and ~ to the type label of the corre- sponding concept in u. • Join each concept of v, to the corresponding concept of w,. • Join each star graph of ¢ to the corresponding star of ~ 43 The concepts and conceptual relations in the resulting graph consist of those in v-t~, w-~, and a copy of u. Definition 8: If v and w are joined on a common projection u. then all concepts and conceptual relations in the projective origin of u in v and the projective origin of u in ~v are said to be covered by the join. in particular, if the projective origin of u in v includes all of v. then the entire graph v is covered by the join. and the join is called a covering join of v by w, Definition 9: Let v and w be joined on a common projection u. The join is called extendible if there exist some concepts a in v and b in w with the following properties: • The concepts a and b were joined to each other. • a is attached to a conceptual relation • that was not covered by the join. • b is attached to a conceptual relation s that was not covered by the join. • The star graphs of r and s are joinable. If a join is not extendible, it is called mn.ximal. The definition of maximal join given here is simpler than the one given in Sown (1976), but it has the same result. Maximal joins have the effect of Wilks' preference rules (1975) in forcing a maximum connectivity of the graphs. Covering joins are used in Section 3 in the rules for apply- ing sequeots. Theorem 2: Every covering join is maximal. Sown (1976) continued with further material on quantifiers and procedural attachments, and Sown (1978) continued with mechanisms for defining new types of concepts, conceptual relations, and composite entities that have other entities as parts. Note that the terms sort, aubaort, and well-formed in Sown (1976) have now been replaced by the terms type, subtype, and canonical. 7. Acknowledgment I would like to thank Charles Bontempo, Jon Handel, and George Heidorn for helpful comments on earlier versions of this paper. 8. References Belnap, Nuei D., Jr. (1973) "Restricted QuanUfication and Conditional Assertion." in Leblanc (1973) pp. 48-75. Beth. E. W. (1955) "Semantic Entailment and Formal Derivability," reprinted in J. Hintikka, ed., The Philoaapky of Mathematk~s, Oxford University Press, 1969. pp. 9-41. Bobrow. D. G.. & T. Winograd (1977) "An Overview of K]RL-O, a Knowl- edge Representation Language," Cognitive $cicnca, voL 1, pp. 3-46. Charniak, Eugene (1972) Toward~ a Model of Chiid~n's Story Coml~ehen- rion. AI Memo No. 266, MIT Project MAC, Cambridge, Mall. Codd. E. F. (1979) "Extending the Data Base Relational Model to Cap- ture More Meaning," to appear in Transactions on Dataha~ $yst#ma. Dellyanni. Amaryllis. & Robert A. Kowalski (1979) "Logic and Semantic Networks." Communications of the ACM, voL 22, no. 3, pp. 184--192. Dunn. J. Michael (1973) "A Truth Value Semantics for Modal Logic," in Leblanc (1973) pp. 87-100. Findler, Nicholas V., ed. 
(1979) Associative Networks, Academic Press, New York. Gentzen. Gerhard (1934) "Investigations into Logical Deduction," reprint- ed in M. E. Szabo, ed., The Collected Papers of Gerhard Gentxon. North-Holland. Amsterdam, 1969. pp. 68-131. Heidorn. George E. (1972) Natural LangUage [nput~ to a Simulation Programming System. Technical Report NPS-55HD72101A, Naval Postgraduate School. Monterey. Heidorn, George E. (1975) "Augmented Phrase Structure Grammar." in R. Schank & B. L, Nash-Webber. eds.. Theoretical Issues in Natural Lunguage Processing, pp. 1-5. Hendrix, Gary G. (1975) "Expanding the Utility of Semantic Networks through Partitioning," in proc. of the Fourth IJCAi, Tbilisi, Georgia, USSR, pp. 115-121. Hendrix. Gary G. (1979) "Encoding Knowledge in Partitioned Networks," in Findler (1979) pp. 51-92. Hewitt, Carl (1972) Description and Theoretical Analys~ (Using Schemata) o[ PLANNER. AI Memo No. 251, MIT Project MAC, Cambridge. Mass. Hintiid~a. Jaakko (1973) "Surface Semantics: Definition and its Motiva- tion," in Leblanc (1973) pp. 128-147. Hintikka, Jaakko (1974) "Quantifiers vs. Quantification Theory," Lingu/a- tic Inq,,~ry, vol. 5, no. 2. pp. 153-177. Hintikka, Jaakko. & Esa Saarinen (1975) "Semantical Games and the Bach-Peters Paradox." Theoretical Linguistics. vol. 2, pp. 1-20. Leblanc. Hughes, ed. (1973) Truth. Syntax. and Modaliry, North-Holland Publishing Co.. Amsterdam. Loveland. D. W. (1972) "A Unifying View of Some Linear Herbrand Procedures," Journal of the ACM, voi. 19, no. 2, pp. 366-384. McDermott, Drew V. (I 976) "Artificial Intelligence Meets Natural Stupid- ity," SIGART Newalerler. No. 57, pp. 4-9. Minsky, Marvin (1975) "A Framework for Representing Knowledge." in Winston, P. H., ed.. The Psychology of Computer Vision. McGraw-Hill, New York. pp. 211-280. Schank, Roger, & Robert Abelson (1977) Scripts. Pla~, Goals and Under- standing, Lawrence Eribeum Associates, Hillsdale. N. J. Sown, John F. (1976) "Conceptual Graphs for a Data Base Interface," [BM Jaurnal of Research & Development, vol. 20, pp. 336-357. Sown, John F. (1978) "Definitional Mechanisms for Conceptual Graphs," presented at the International Workshop on Graph Grammars, Bad Hormef, Germany, Nov. 1978. Wilks, Yorick (1975) "Preference Semantics," in E. L. Keenan, ed., Formal Semantics of Nazurol Language. Cambridge University Press, pp. 329-348. Woods, William A. (1975) "What's in a Link: Foundations for Semantic Networks," in D. G. Bobrow & A. Collins. eds., Rapraenmtion and Unabnmnding, Academic PresS. New York. 44
ON THE AUTOMATIC TRANSFORMATION OF CLASS MEMBERSHIP CRITERIA
Barbara C. Sangster
Rutgers University

This paper addresses a problem that may arise in classification tasks: the design of procedures for matching an instance with a set of criteria for class membership in such a way as to permit the intelligent handling of inexact, as well as exact, matches. An inexact match is a comparison between an instance and a set of criteria (or a second instance) which has the result that some, but not all, of the criteria described (or exemplified) in the second are found to be satisfied in the first. An exact match is such a comparison for which all of the criteria of the second are found to be satisfied in the first. The approach presented in this paper is to transform the set of criteria for class membership into an exemplary instance of a member of the class, which exhibits a set of characteristics whose presence is necessary and sufficient for membership in that class. Use of this exemplary instance during the matching process appears to permit important functions associated with inexact matching to be easily performed, and also to have a beneficial effect on the overall efficiency of the matching process.

1. INTRODUCTION

An important common element of many projects in Artificial Intelligence is the determination of whether a particular instance satisfies the criteria for membership in a particular class. Frequently, this task is a component of a larger one involving a set of instances, or a set of classes, or both. This determination need not necessarily call for an exact match between an instance and a set of criteria, but only for the "best," or "closest," match, by some definition of goodness or closeness. One important specification for such tasks is the capability for efficient matching procedures; another is the ability to perform inexact, as well as exact, matches.

One step towards achieving efficient matching procedures is to represent criteria for class membership in the same way as descriptions of instances. This may be done by transforming the set of criteria, through a process of symbolic instantiation, into a kind of prototypical instance, or exemplary member of the class. This permits the use of a simple matching algorithm, such as one that merely checks whether required components of the definition of the class are also present in the description of the instance. This also permits easy representation of modifications to the definition, whenever the capability of inexact matching is desired. Other ways of representing definitions of classes might be needed for other purposes, however. For example, the knowledge-representation language AIMDS would normally be expected to represent definitions in a more complex manner, involving the use of pattern-directed inference rules. These rules may be used, e.g., to identify inconsistencies and fill in unknown values. A representation of a definition derived through symbolic instantiation does not have this wide a range of capabilities, but it does appear to offer advantages over the other representation for efficient matching and for easy handling of inexact matches. We might, therefore, like to be able to translate back and forth between the two forms of representation as our needs require.

The research reported in this paper was partially supported by the National Science Foundation under Grant #SOC-7811408 and by the Research Foundation of the State University of New York under Grant #150-2197-A.
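The "simple matching algorithm" referred to above can be pictured as a containment test over propositions, with the unmatched residue retained so that an inexact match can later be scored. The triple format, the helper names, and the example below are hypothetical illustrations, not the AIMDS frame representation used in the paper, and binding constraints between components (which the paper handles through the KERNEL's node labels) are ignored here.

```python
from typing import FrozenSet, Tuple

# A proposition is a (relation, arg1, arg2) triple; the exemplary instance
# (the symbolically instantiated definition) and the case description are
# both plain sets of such triples.
Proposition = Tuple[str, str, str]

def match(definition: FrozenSet[Proposition],
          instance: FrozenSet[Proposition]):
    """Return (exact, missing): exact is True when every required
    proposition of the definition appears in the instance; missing lists
    the respects in which an attempted exact match fails, which is the
    information an inexact-matching step would work from."""
    missing = definition - instance
    return (not missing, sorted(missing))

# Hypothetical example: one required relation is absent from the case.
definition = frozenset({("agent-of", "TRANS1", "TRANSFEROR1"),
                        ("object-of", "TRANS1", "PROPERTY1")})
case = frozenset({("agent-of", "TRANS1", "TRANSFEROR1")})
print(match(definition, case))
# -> (False, [('object-of', 'TRANS1', 'PROPERTY1')])
```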
An algorithm has been devised for automatically translating a definition in one of the two directions -- from the form using the pattern-directed inference rules into a simpler, symbolically instantiated form [11]. This algorithm has been shown to work correctly for any well-formed definition in a clearly-defined syntactic class [10]. The use of the symbolically instantiated form for both exact and inexact matches is outlined here; using a hand-created symbolic instantiation, a run demonstrating an exact match is presented. The paper concludes with a discussion of some implications of this approach.

2. INEXACT MATCHING

The research project presented in this paper was motivated by the need for determining automatically whether a set of facts comprising the description of a legal case satisfies the conditions expressed in a legal definition, and, if not, in what respects it fails to satisfy those conditions [8], [9], [10], [11], [13]. The need to perform this task is central to a larger project whose purpose is the representation of the definitions of certain legal concepts, and of decisions based on those concepts.

Inexact matching arises in the legal/judicial domain when a legal class must be assigned to the facts of the case at hand, but when an exact match cannot be found between those facts and any of the definitions of possible legal classes. In that situation, a reasonable first-order approximation to the way real decisions are made may be to say that the class whose definition offers the "best" or "closest" match to the facts of the case at hand is the class that should be assigned to the facts in question. That is the approach taken in the current project.

In addition to the application discussed here (the assignment of an instance of a knowledge structure to one of a set of classes), inexact matching and close relatives thereof are also found in several other domains within computational linguistics. Inexact matching to a knowledge structure may also come into play in updating a knowledge base, or in responding to queries over a knowledge base [5], [6]. In the domain of syntax, an inexact matching capability makes possible the correct interpretation of utterances that are not fully grammatical with respect to the grammar being used [7]. In the domains of speech understanding and character recognition, the ability to perform inexact matching makes it possible to disregard errors caused by such factors as noise or carelessness of the speaker or writer.

When an inexact match of an instance has been identified, the first step is to attempt to deal with any criteria which were not found to be satisfied in the instance, but were not found not to be satisfied either -- i.e., the unknowns. At that point, if an exact match still has not been achieved, two modes of action are possible: the modification of the instance whose characterization is being sought, or the modification of the criteria by means of which a characterization is found. The choice between these two responses (or of the way in which they are combined) appears to be a function of the domain and sometimes also of the particular item in question. In general, in the legal/judicial domain, the facts of the case, once determined, are fixed (unless new evidence is introduced), but the criteria for assigning a legal characterization to those facts may be modified.

3. INEXACT MATCHING: A Methodology
Because of the importance of inexact matching in the legal/judicial domain, it is desirable to utilize a matching procedure that permits useful functions related to inexact matching to be performed conveniently. Such functions include a way of easily determining all the respects in which attempted exact matches to a particular definition might fail, a way of easily determining what changes to a definition would be sufficient for an exact match with a particular case to be permitted, and a way of ensuring that a contemplated modification to a definition will not introduce inconsistencies.

Two features of a representational scheme that would appear to help in performing these functions conveniently are SPEC1) that the scheme permit a distinction to be made between those propositions that must be found to be true of any instance satisfying the definition and any other propositions that might also be true of the instance, and SPEC2) that the scheme permit the former set of propositions to be expressed in a simple, unified way, so as to reduce or even eliminate the need for inferencing and other processing activities when the functions outlined above are performed. By satisfying SPEC1, we permit the propositions which are central to the matching process to be distinguished from any others; by satisfying SPEC2, we permit those propositions to be accessed and manipulated (e.g., for the inexact matching functions listed above) in an efficient and straightforward manner. Thus, the fulfillment of SPEC1 and SPEC2 significantly strengthens our ability to perform functions central to the inexact matching process.

A representational scheme that meets these specifications has been designed, and an experimental implementation performed. The approach used is to precede the matching activity proper with a one-time preprocessing phase, during which the definition is automatically transformed from the form in which it is originally expressed into a representational scheme which appears to be more suitable to the matching task at hand. The transformation algorithm makes use of a distinction between those components of the definition which must be found to be true and those whose truth either may be inferred or else is irrelevant to the matching process. The transformation is performed by means of a process of symbolic instantiation of the definition -- the translation of the definition from a set of criteria for satisfying the definition into an exemplary instance of the concept itself. The transformed definition resulting from this process appears to meet the specifications given above.

The input to the transformation process is a definition expressed in two parts: COMPONENT1) a set of propositions consisting of relations between typed variables organized in frame form, and COMPONENT2) a set of pattern-directed inference rules expressing constraints on how the propositions in COMPONENT1 may be instantiated. The propositions in COMPONENT1 include propositions that must be found to be true of any instance satisfying the definition, as well as other propositions that do not have this quality.

[Figure 1: COMPONENT1 for a sample definition.]

The output from the transformation process that is used for matching with an instance is a symbolically instantiated form of the definition called the KERNEL of the definition. It consists solely of a set of propositions expressing relations between instances.
These are precisely those propositions whose truth must be observed in any instance satisfying the definition. Constraints on instantiation (COMPONENT2 above) are reflected in the choice of values for the instances in these propositions. Thus the KERNEL structure has the properties set forth in SPEC1 and SPEC2 above, and its use during the matching process may consequently be expected to help in working with inexact matches. For similar reasons, use of the KERNEL structure appears also to permit a significant improvement in efficiency of the overall matching process [10], [11].

The propositions input to the transformation process (i.e., COMPONENT1) are illustrated, for the definition of a kind of corporate reorganization called a B-REORGANIZATION, in Figure 1; the arcs represent relations, and the nodes represent the types of the instances between which the relations may hold. Several of the pattern-directed inference rules input to the transformation process (COMPONENT2) for part of the same definition are illustrated in Figure 2. The KERNEL structure for that definition output by the transformation process is illustrated in Figure 3. The propositions shown there are the ones whose truth is necessary and sufficient for the definition to have been met. Bindings constraints between nodes are reflected in the labels of the nodes; the nodes in Figure 3 represent instances. Thus, the two components represented in Figures 1 and 2 are transformed, for the purposes of matching, into the structure represented in Figure 3. The transformation process is described in more detail in [10] and [11]; [10] also contains an informal proof that the transformation algorithm will work correctly for all definitions in a well-defined syntactic class.

Figure 2: A portion of COMPONENT2 for a sample definition.

4. EXECUTION OF THE MATCHING PROCESS

Once the transformation of a definition has been performed, it need never again be repeated (unless the definition itself should change), and the compiled KERNEL structure may be used directly whenever a set of facts comprising a description of a legal case is presented for comparison with the definition. In order to control possible combinatoric difficulties, the KERNEL structure is decomposed into a set of small networks, against each of which all substructures of the same type in the case description are tested for a structural match (STAGE1). DMATCH [15], a function written by D. Touretzky, performed structural matching in the experimental implementation. The hope is that the "small networks" can be selected from the KERNEL in such a way that matching to any single small network will involve a minimal degree of combinatoric complexity. For an exact match, the substructures that survive STAGE1 (and no others) are then combined in all possible valid ways into larger networks of some degree of increase in complexity.
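As a rough illustration of this staged regime (STAGE1 filtering followed by recombination of survivors), the sketch below simplifies a "network" to a set of (node, relation, node) triples; the real system used DMATCH over AIMDS structures, which is not reproduced, and all names are invented.

```python
# Illustrative sketch of the staged structural matching described above.
# A substructure "matches" a small kernel network if some one-to-one node
# mapping carries every kernel triple onto a triple of the substructure.

from itertools import permutations

def nodes(net):
    return sorted({n for (a, _, b) in net for n in (a, b)})

def structural_match(kernel_net, case_net):
    kn, cn = nodes(kernel_net), nodes(case_net)
    if len(kn) != len(cn):
        return False
    for perm in permutations(cn):
        mapping = dict(zip(kn, perm))
        if all((mapping[a], r, mapping[b]) in case_net for (a, r, b) in kernel_net):
            return True
    return False

def stage1(small_kernel_nets, case_substructures):
    # For each small kernel network, keep the case substructures that match it.
    return {i: [s for s in case_substructures if structural_match(net, s)]
            for i, net in enumerate(small_kernel_nets)}

def combine(survivor_sets):
    # Later stages would merge survivors of different small networks and re-test
    # the combined structure (so that bindings across the pieces get checked);
    # shown here only as pairwise unions of surviving substructures.
    keys = sorted(survivor_sets)
    return [a | b for i in keys for j in keys if i < j
            for a in survivor_sets[i] for b in survivor_sets[j]]

kernel_piece = {("TRANS", "AGENT", "ACTOR")}                       # one small network
case_subs = [{("TRANS-7", "AGENT", "ACTOR-3")},
             {("CONTROL-1", "OWNER", "ACTOR-3")}]
print(stage1([kernel_piece], case_subs))   # only the first substructure survives
```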
A structural match of each of these structures with the corresponding substructure of the KERNEL is then attempted, and bindings constraints between formerly separate components of the new network are thereby tested. This process is repeated with surviving substructures until the structural match is conducted against the KERNEL structure itself. When the criterion for matching at each stage is an exact match, as described above, the survivors of the final stage of structural matching represent all and only the subcases in the case description that meet the conditions expressed in the definition.

The execution of the matcher in the manner described above is illustrated in Figure 4. For this example, five instances of the type TRANS (T1, T2, T3, T4, T5), two instances of the type CONTROL (C1, C2), and two instances of PROPERTY (O6, O9) were used. The value of MAKEFULLLIST shows the survivors of STAGE1. The value of BGO shows the single valid instance of a B-REORGANIZATION that can be created from these components.

Figure 4: Sample execution of the matching process.

An inexact matching capability, not currently implemented, would determine, when at any stage a match failed, 1) why it had failed, and 2) how close it had come to being an exact match. At the next stage, a combination of substructures would be submitted for consideration by the matcher only if it had met some criterion of proximity to an exact match -- either on an absolute scale, or relative to the other candidates for matching. When the final stage of the matching process had been completed, that candidate (or those candidates) that permitted the most nearly exact match could then be selected. In order to perform the inexact matching function outlined in the preceding paragraph, an algorithm for computing distance from an exact match must be formulated. For the reasons given above, we anticipate that 1) the transformation of definitions into the corresponding KERNEL structures will make that task easier, and that 2) once a distance algorithm has been formulated, the use of the KERNEL structure will contribute to performing the inexact matching function with efficiency and conceptual clarity.

5. CONCLUSIONS

The capability for the intelligent handling of inexact matches has been shown to be an important requirement for the representation of certain classification tasks. A procedure has been outlined whereby a set of criteria for membership in a particular class may be transformed into an exemplary instance of a member of that class.

Figure 3: The KERNEL structure for a sample definition.

As we have seen, use of that exemplary instance during the matching process appears to permit important functions associated with inexact matching to be easily performed, and also to have a beneficial effect on the overall efficiency of the matching process.

ACKNOWLEDGEMENTS

The author is grateful to the following for comments and suggestions on the work reported on in this paper: S. Amarel, V. Ciesielski, L. T. McCarty, T. Mitchell, N. S. Sridharan, and D. Touretzky.

BIBLIOGRAPHY
[1] Freuder, E. C. 1978. "Synthesizing Constraint Expressions". CACM, vol. 21, pp. 958-966.

[2] Haralick, R. M. and L. G. Shapiro. 1979. "The Consistent Labelling Problem: Part I". IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, pp. 173-184.

[3] Hayes-Roth, F. 1978. "The Role of Partial and Best Matches in Knowledge Systems". Pattern-Directed Inference Systems, ed. by D. Waterman and F. Hayes-Roth. Academic Press.

[4] Hayes-Roth, F. and D. J. Mostow. 1975. "An Automatically Compilable Recognition Network for Structured Patterns". Proceedings of IJCAI-75, vol. 1, pp. 246-251.

[5] Joshi, A. K. 1978a. "Some Extensions of a System for Inference on Partial Information". Pattern-Directed Inference Systems, ed. by D. Waterman and F. Hayes-Roth. Academic Press.

[6] Joshi, A. K. 1978b. "A Note on Partial Match of Descriptions: Can One Simultaneously Question (Retrieve) and Inform (Update)?". TINLAP-2: Theoretical Issues in Natural Language Processing.

[7] Kwasny, S. and N. K. Sondheimer. 1979. "Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems". This volume.

[8] McCarty, L. T. 1977. "Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning". Harvard Law Review, vol. 90, pp. 837-893.

[9] McCarty, L. T., N. S. Sridharan, and B. C. Sangster. 1979. "The Implementation of TAXMAN II: An Experiment in Artificial Intelligence and Legal Reasoning". Rutgers University Report #LCSR-TR-3.

[10] Sangster, B. C. 1979a. "An Automatically Compilable Hierarchical Definition Matcher". Rutgers University Report #LRP-TR-3.

[11] Sangster, B. C. 1979b. "An Overview of an Automatically Compilable Hierarchical Definition Matcher". Proceedings of the IJCAI-79.

[12] Sridharan, N. S. 1978a. (Ed.) "AIMDS User Manual, Version 2". Rutgers University Report #CBM-TR-89.

[13] Sridharan, N. S. 1978b. "Some Relationships between BELIEVER and TAXMAN". Rutgers University Report #LCSR-TR-2.

[14] Srinivasan, C. V. 1976. "The Architecture of Coherent Information System: A General Problem Solving System". IEEE Transactions on Computers, vol. 25, pp. 390-402.

[15] Touretzky, D. 1978. "Learning from Examples in a Frame-Based System". Rutgers University Report #CBM-TR-87.

[16] Woods, W. A. 1975. "What's in a Link: Foundations for Semantic Networks". In Representation and Understanding, ed. by D. G. Bobrow and A. Collins. Academic Press.
A SNAPSHOT OF KDS, A KNOWLEDGE DELIVERY SYSTEM

James A. Moore and William C. Mann
USC/Information Sciences Institute
Marina del Rey, CA
June, 1979

SUMMARY

KDS is a computer program which creates multi-paragraph, Natural Language text from a computer representation of knowledge to be delivered. We have addressed a number of issues not previously encountered in the generation of Natural Language at the multi-sentence level, viz: ordering among sentences and the scope of each, quality comparisons between alternative aggregations of sub-sentential units, the coordination of communication with non-linguistic activities by a goal-pursuing planner, and the use of dynamic models of speaker and hearer to shape the text to the task at hand.

STATEMENT OF THE PROBLEM

The task of KDS is to generate English text under the following constraints:

1. The source of information is a semantic net, having no a priori structuring to facilitate the outputting task.

2. The text is produced to satisfy an explicit goal held by the text generating system, which describes a desired cognitive state of the reader.

3. To achieve the desired state of the reader requires more than a single sentence.

RESULTS

This is not the forum for an extensive analysis of our results; for details, see Mann and Moore [1979]. However, to communicate the flavor of what we have accomplished--from the motivating goal:

(WANTS SPEAKER (KNOWS HEARER FIRE-ALARM-SCENE))

and about two pages of formal propositions describing the "fire-alarm scene", KDS generated the following:

Whenever there is a fire, the alarm system is started, which sounds the alarm bell and starts the timer. When it is ninety seconds after the timer is started, unless the alarm system is cancelled it calls Wells Fargo. When Wells Fargo is called, it calls the Fire Dept.

When you hear the alarm bell or you smell smoke, stop everything, determine whether there is a fire and decide to permit the alarm system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system; otherwise cancel it. When you permit the alarm system, call the Fire Dept. if possible and follow the evacuation procedure. When you cancel the alarm system, if it is more than ninety seconds since the timer is started, the alarm system calls Wells Fargo; otherwise continue everything.

This represents the most elaborate performance of KDS to date.

SYSTEM DESIGN

The KDS organization reflects our novel paradigm: FRAGMENT-AND-COMPOSE. KDS decomposes the original network into fragments, then orders and aggregates these according to the dictates of the text-producing task, not according to the needs for which the internal representation was originally conceived. KDS has shown the feasibility of this approach.

The KDS organization is a simple pipeline: FRAGMENT, PLAN, FILTER, HILL-CLIMB, and OUTPUT.

FRAGMENT transforms the selected portion of the semantic net into an unordered set of propositions which correspond, roughly, to minimal sentences.

PLAN uses goal-sensitive rules to impose an ordering on this set of fragments. A typical planning rule is: "When conveying a scene in which the hearer is to identify himself with one of the actors, express all propositions involving that actor AFTER those which do not, and separate these two partitions by a paragraph break".

FILTER deletes from the set all propositions currently represented as known by the hearer.

HILL-CLIMB coordinates two sub-activities: AGGREGATOR applies rules to combine two or three fragments into a single one.
A typical aggregation rule is: "The two fragments 'x does A' and 'x does B' can be combined into a single fragment: 'x does A and B'".

PREFERENCER evaluates each proposed new fragment, producing a numerical measure of its "goodness". A typical preference rule is: "When instructing the hearer, increase the accumulating measure by 10 for each occurrence of the symbol 'YOU'".

HILL-CLIMB uses AGGREGATOR to generate new candidate sets of fragments, and PREFERENCER to determine which new set presents the best one-step improvement over the current set. The objective function of HILL-CLIMB has been enlarged to also take into account the COST OF FOREGONE OPPORTUNITIES. This has drastically improved the initial performance, since the topology abounds with local maxima.

KDS has used, at one time or another, on the order of 10 planning rules, 30 aggregation rules and 7 preference rules. The aggregation and preference rules are directly analogous to the capabilities of linguistic competence and performance, respectively.

OUTPUT is a simple (two pages of LISP) text generator driven by a context free grammar.

ACKNOWLEDGMENTS

The work reported here was supported by NSF Grant MCS-76-07332.

REFERENCES

Levin, J. A., and Goldman, N. M., Process models of reference in context, ISI/RR-78-72, Information Sciences Institute, Marina del Rey, CA, 1978.

Levin, J. A., and Moore, J. A., Dialogue Games: meta-communication structures for natural language interaction, Cognitive Science, 1, 4, 1978.

Mann, W. C., Moore, J. A., and Levin, J. A., A comprehension model for human dialogue, in Proc. IJCAI-V, Cambridge, MA, 1977.

Mann, W. C., and Moore, J. A., Computer generation of multi-paragraph English text, in preparation.

Moore, J. A., Levin, J. A., and Mann, W. C., A goal-oriented model of human dialogue, AJCL microfiche 67, 1977.

Moore, J. A., Communication as a problem-solving activity, in preparation.
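To recap the FRAGMENT-AND-COMPOSE organization described in the SYSTEM DESIGN section above, the following is a minimal, illustrative rendering of the pipeline and its hill-climbing loop. It is not KDS's actual LISP code: the fragment format, the single aggregation rule, and the preference scoring are invented stand-ins for the rule sets the paper describes.

```python
# Toy rendering of the KDS pipeline (FRAGMENT / PLAN / FILTER / HILL-CLIMB /
# OUTPUT); data formats and rules are invented stand-ins.

def fragment(semantic_net):
    # One proposition per minimal sentence, e.g. ("you", "hear the alarm bell").
    return list(semantic_net)

def plan(fragments, hearer_actor="you"):
    # Planning rule from the text: hearer-involving propositions come last.
    return sorted(fragments, key=lambda f: f[0] == hearer_actor)

def filter_known(fragments, hearer_knows):
    return [f for f in fragments if f not in hearer_knows]

def aggregate(fragments):
    # Aggregation rule from the text: 'x does A' + 'x does B' -> 'x does A and B'.
    for i, (a1, p1) in enumerate(fragments):
        for j, (a2, p2) in enumerate(fragments):
            if i < j and a1 == a2:
                rest = fragments[:i] + fragments[i + 1:j] + fragments[j + 1:]
                yield rest + [(a1, f"{p1} and {p2}")]

def preference(fragments):
    # Invented scoring: reward mentions of "you" when instructing, prefer fewer fragments.
    return sum(10 for actor, _ in fragments if actor == "you") - len(fragments)

def hill_climb(fragments):
    current = fragments
    while True:
        candidates = list(aggregate(current))
        if not candidates:
            return current
        best = max(candidates, key=preference)
        if preference(best) <= preference(current):
            return current
        current = best

def output(fragments):
    return " ".join(f"{actor} {pred}." for actor, pred in fragments)

net = [("the alarm system", "sounds the alarm bell"),
       ("the alarm system", "starts the timer"),
       ("you", "hear the alarm bell"),
       ("you", "decide whether to cancel the alarm system")]
print(output(hill_climb(filter_known(plan(fragment(net)), hearer_knows=[]))))
```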
The Use of Object-Specific Knowledge in Natural Language Processing

Mark H. Burstein
Department of Computer Science, Yale University

* This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-1111.

1. INTRODUCTION

It is widely recognized that the process of understanding natural language texts cannot be accomplished without accessing mundane knowledge about the world [2, 4, 6, 7]. That is, in order to resolve ambiguities, form expectations, and make causal connections between events, we must make use of all sorts of episodic, stereotypic and factual knowledge. In this paper, we are concerned with the way functional knowledge of objects, and associations between objects, can be exploited in an understanding system.

Consider the sentence

(1) John opened the bottle so he could pour the wine.

Anyone reading this sentence makes assumptions about what happened which go far beyond what is stated. For example, we assume without hesitation that the wine being poured came from inside the bottle. Although this seems quite obvious, there are many other interpretations which are equally valid. John could be filling the bottle rather than emptying the wine out of it. In fact, it need not be true that the wine ever contacted the bottle. There may have been some other reason John had to open the bottle first. Yet, in the absence of a larger context, some causal inference mechanism forces us (as human understanders) to find the common interpretation in the process of connecting these two events causally.

In interpreting this sentence, we also rely on an understanding of what it means for a bottle to be "open". Only by using knowledge of what is possible when a bottle is open are we able to understand why John had to open the bottle to pour the wine out of it. Strong associations are at work here helping us to make these connections. A sentence such as

(2) John closed the bottle and poured the wine.

appears to be self contradictory only because we assume that the wine was in the bottle before applying our knowledge of open and closed bottles to the situation. Only then do we realize that closing the bottle makes it impossible to pour the wine.

Now consider the sentence

(3) John turned on the faucet and filled his glass.

When reading this, we immediately assume that John filled his glass with water from the faucet. Yet, not only is water never mentioned in the sentence, there is nothing there to explicitly relate turning on the faucet and filling the glass. The glass could conceivably be filled with milk from a carton. However, in the absence of some greater context which forces a different interpretation on us, we immediately assume that the glass is being filled with water from the faucet.

Understanding each of these sentences requires that we make use of associations we have in memory between objects and actions commonly involving those objects, as well as relations between several different objects. This paper describes a computer program, OPUS (Object Primitive Understanding System), which constructs a representation of the meanings of sentences such as those above, including assumptions that a human understander would normally make, by accessing these types of associative memory structures. This stereotypic knowledge of physical objects is captured in OPUS using Object Primitives [5].
Object Primitives (OP) were designed to act in conjunction with Schank's conceptual dependency representational system [11]. The processes developed to perform conceptual analysis in OPUS involved the integration of a conceptual analyzer similar to Riesbeck's ELI [9] with demon-like procedures for memory interaction and the introduction of object-related inferences.

2. OBJECT PRIMITIVES

The primary focus in this research has been on the development of processes which utilize information provided by Object Primitives to facilitate the "comprehension" of natural language texts by computer. That is, we were primarily concerned with the introduction of stereotypic knowledge of objects into the conceptual analysis of text. By encoding information in OP descriptions, we were able to increase the interpretive power of the analyzer in order to handle sentences of the sort discussed earlier. What follows is a brief description of the seven Object Primitives; a more thorough discussion can be found in [5]. Discussions of the primitive acts of Schank's conceptual dependency theory, for those unfamiliar with them, can be found in [10, 11].

The Object Primitive CONNECTOR is used to indicate classes of actions (described in terms of Schank's primitive acts) which are normally enabled by the object being described. In particular, a CONNECTOR enables actions between two spatial regions. For example, a window and a door are both CONNECTORs which enable motion (PTRANS) of objects through them when they are open. In addition, a window is a CONNECTOR which enables the action ATTEND eyes (see) or MTRANS (acquisition of information) by the instrumental action ATTEND eyes. These actions are enabled regardless of whether the window is open or closed. That is, one can see through a window, and therefore read or observe things on the other side, even when the window is closed. In the examples discussed above, the open bottle is given a CONNECTOR description. This will be discussed further later.

A SEPARATOR disenables a transfer between two spatial regions. A closed door and a closed window are both SEPARATORs which disenable the motion between the spatial regions they adjoin. In addition, a closed door is a SEPARATOR which disenables the acts MTRANS by ATTEND eyes (unless the door is transparent) or ears. That is, one is normally prevented from seeing or hearing through a closed door. Similarly, a closed window is a SEPARATOR which disenables MTRANS with instrument ATTEND ears, although, as mentioned above, one can still see through a closed window to the other side. A closed bottle is another example of an object with a SEPARATOR description.

It should be clear by now that objects described using Object Primitives are not generally described by a single primitive. In fact, not one but several sets of primitive descriptions may be required. This is illustrated above by the combination of CONNECTOR and SEPARATOR descriptions required for a closed window, while a somewhat different set is required for an open window. These sets of descriptions form a small set of "states" which the object may be in, each state corresponding to a set of inferences and associations appropriate to the object in that condition.

A SOURCE description indicates that a major function of the object described is to provide the user of that object with some other object. Thus a faucet is a SOURCE of water, a wine bottle is a SOURCE of wine, and a lamp is a SOURCE of the phenomenon called light. SOURCEs often require some sort of activation.
Faucets must be turned on, wine bottles must be opened, and lamps are either turned on or lit depending on whether or not they are electric.

The Object Primitive CONSUMER is used to describe objects whose primary function is to consume other objects. A trash can is a CONSUMER of waste paper, a drain is a CONSUMER of liquids, and a mailbox is a CONSUMER of mail. Some objects are both SOURCEs and CONSUMERs. A pipe is a CONSUMER of tobacco and a SOURCE of smoke. An ice cube tray is a CONSUMER of water and a SOURCE of ice cubes.

Many objects can be described in part by relationships that they assume with some other objects. These relations are described using the Object Primitive RELATIONAL. Containers, such as bottles, rooms, cars, etc., have as part of their descriptions a containment relation, which may specify defaults for the type of object contained. Objects, such as tables and chairs, which are commonly used to support other objects, will be described with a support relation.

Objects such as buildings, cars, airplanes, stores, etc., are all things which can contain people. As such, they are often distinguished by the activities which people in those places engage in. One important way of encoding those activities is by referring to the scripts which describe them. The Object Primitive SETTING is used to capture the associations between a place and any script-like activities that normally occur there. It can also be used to indicate other, related SETTINGs which the object may be a part of. For example, a dining car has a SETTING description with a link both to the restaurant script and to the SETTING for passenger train. This information is important for the establishment of relevant contexts, giving access to many domain-specific expectations which will subsequently be available to guide processing, both during conceptual analysis of lexical input and when making inferences at higher levels of cognitive processing.

The final Object Primitive, GESTALT, is used to characterize objects which have recognizable, and separable, subparts. Trains, hi-fi systems, and kitchens all evoke images of objects characterizable by describing their subparts, and the way that those subparts relate to form the whole. The Object Primitive GESTALT is used to capture this type of description.

Using this set of primitives as the foundation for a memory representation, we can construct a more general bi-directional associative memory by introducing some associative links external to Object Primitive decompositions. For example, the conceptual description of a wine bottle will include a SOURCE description for a bottle where the SOURCE output is specified as wine. This amounts to an associative link from the concept of a wine bottle to the concept of wine. But how can we construct an associative link from wine back to wine bottles? Wine does not have an Object Primitive decomposition which involves wine bottles, so we must resort to some construction which is external to Object Primitive decompositions. Four associative links have been proposed [5], each of which points to a particular Object Primitive description. For the problem of wine and wine bottles, an associative OUTPUTFROM link is directed from wine to the SOURCE description of a wine bottle. This external link provides us with an associative link from wine to wine bottles.
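The paper does not give a concrete data layout for these descriptions. As a rough illustration of the idea only, the sketch below shows how state-dependent Object Primitive descriptions and an external OUTPUTFROM link might be encoded; the structures and names are invented and are not OPUS's actual (LISP) representations.

```python
# Invented, minimal encoding of Object Primitive descriptions.  Each object
# prototype lists, per state, the primitive descriptions that apply in that state.

WINE_BOTTLE = {
    "states": {
        "open": [
            {"primitive": "CONNECTOR",
             "enables": ["PTRANS ?OBJ from (INSIDE SELF)",
                         "PTRANS ?OBJ to (INSIDE SELF)",
                         "ATTEND ?SENSE to ?OBJ inside SELF"]},
            {"primitive": "SOURCE", "output": "WINE"},
        ],
        "closed": [
            {"primitive": "SEPARATOR",
             "disenables": ["PTRANS ?OBJ from (INSIDE SELF)"]},
        ],
    },
    "relational": {"link": "INSIDE", "default-contents": ["WINE"]},
}

# External associative links, outside any one object's decomposition:
OUTPUTFROM = {"WINE": ["WINE-BOTTLE"], "WATER": ["FAUCET"]}

def descriptions(prototype, state):
    """The primitive descriptions active for an object in a given state."""
    return prototype["states"].get(state, [])

def known_sources(substance):
    """Follow the OUTPUTFROM link from a substance back to its SOURCE objects."""
    return OUTPUTFROM.get(substance, [])

print([d["primitive"] for d in descriptions(WINE_BOTTLE, "open")])   # ['CONNECTOR', 'SOURCE']
print(known_sources("WINE"))                                         # ['WINE-BOTTLE']
```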
3. THE PROGRAM

I will now describe the processing of two sentences very similar to those discussed earlier. The computer program (OPUS) which performs the following analyses was developed using a conceptual analyzer written by Larry Birnbaum [1]. OPUS was then extended to include a capacity for setting up and firing "demons" or "triggers" as they are called in KRL [3]. The functioning of these demons will be illustrated below.

3.1 THE INITIAL ANALYSIS

In the processing of the sentence "John opened the bottle so he could pour the wine," the phrase "John opened the bottle," is analyzed to produce the following representation:

  *John* <=> *DO*
     | result
  *bottle* (CONNECTOR ENABLES:
     ?HUMO <=> PTRANS <- ?OBJ from (INSIDE SELF) to ?X
     (or)
     ?HUMO <=> PTRANS <- ?OBJ from ?Y to (INSIDE SELF)
     (or)
     ?HUMO <=> ATTEND <- ?SENSE to ?OBJ (where ?OBJ is inside SELF))

Here SELF refers to the object being described (the bottle) and ?--- indicates an unfilled slot. *John* here stands for the internal memory representation for a person with the name John. Memory tokens for John and the bottle are constructed by a general demon which is triggered during conceptual analysis whenever a PP (the internal representation for an object) is introduced. OP descriptions are attached to each object token.

This diagram represents the assertion that John did something which caused the bottle to assume a state where its CONNECTOR description applied. The CONNECTOR description indicates that something can be removed from the bottle, put into the bottle, or its contents can be smelled, looked at, or generally examined by some sense modality. This CONNECTOR description is not part of the definition of the word 'open'. It is specific knowledge that people have about what it means to say that a bottle is open. In arriving at the above representation, the program must retrieve from memory this OP description of what it means for a bottle to be open. This information is stored beneath its prototype for bottles. Presumably, there is also script-like information about the different methods for opening bottles, the different types of caps (corks, twist-off, ...), and which method is appropriate for which cap. However, for the purpose of understanding a text which does not refer to a specific type of bottle, cap, or opening procedure, what is important is the information about how the bottle can then be used once it is opened. This is the kind of knowledge that Object Primitives were designed to capture.

When the analyzer builds the state description of the bottle, a general demon associated with new state descriptions is triggered. This demon is responsible for updating memory by adding the new state information to the token in the ACTOR slot of the state description. Thus the bottle token is updated to include the given CONNECTOR description. For the purposes of this program, the bottle is then considered to be an "open" bottle. A second function of this demon is to set up explicit expectations for future actions based on the new information. In this case, templates for three actions the program might expect to see described can be constructed from the three partially specified conceptualizations shown above in the CONNECTOR description of the open bottle. These templates are attached to the state description as possible consequences of that state, for use when attempting to infer the causal connections between events.
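The demon machinery itself is not spelled out in the paper. The following is a minimal sketch of a test-action demon of the kind just described, triggered when a new state description is built; all data formats and names are invented for the example and continue the toy encoding sketched earlier.

```python
# Invented sketch of a KRL-style demon: when a new state description is
# asserted it updates the affected token and attaches expectation templates
# derived from the state's CONNECTOR description.

def state_demon(state, memory, expectations):
    token = memory[state["actor"]]                       # e.g. the *bottle* token
    token.setdefault("states", []).append(state["description"])
    # Expectation templates: the partially specified acts the new state enables.
    for template in state["description"].get("enables", []):
        expectations.append({"template": template, "consequence-of": state["actor"]})

memory = {"BOTTLE-1": {"type": "BOTTLE"}}
expectations = []
open_state = {
    "actor": "BOTTLE-1",
    "description": {"primitive": "CONNECTOR",
                    "enables": ["?HUMO PTRANS ?OBJ from (INSIDE BOTTLE-1)",
                                "?HUMO PTRANS ?OBJ to (INSIDE BOTTLE-1)",
                                "?HUMO ATTEND ?SENSE to ?OBJ inside BOTTLE-1"]},
}
state_demon(open_state, memory, expectations)
print(len(expectations))   # 3 expectation templates now await later matching
```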
3.2 CONCEPT DRIVEN INFERENCES

The phrase "so he could pour the wine" is analyzed as

  *John* <=> enable:
     ?HUMO <=> PTRANS <- *wine* from (INSIDE ?CONTAINER) to ?X

When this representation is built by the analyzer, we do not know that the wine being poured came from the previously mentioned bottle. This inference is made in the program by a slot-filling demon called the CONTAINER-FINDER, attached to the primitive act PTRANS. The demon, triggered when a PTRANS from inside an unspecified container is built, looks on the list of active tokens (a part of short term memory) for any containers that might be expected to contain the substance moved, in this case wine. This is done by applying two tests to the objects in short term memory. The first, the DEFAULT-CONTAINMENT test, looks for objects described by the RELATIONAL primitive, indicating that they are containers (link = INSIDE) with default object contained being wine. The second, the COMMON-SOURCE test, looks for known SOURCEs of wine by following the associative OUTPUTFROM link from wine. If either of these tests succeeds, then the object found is inferred to be the container poured from.

At different times, either the DEFAULT-CONTAINMENT test or the COMMON-SOURCE test may be necessary in order to establish probable containment. For example, it is reasonable to expect a vase to contain water since the RELATIONAL description of a vase has default containment slots for water and flowers. But we do not always expect water to come from vases, since there is no OUTPUTFROM link from water to a SOURCE description of a vase. If we heard "Water spilled when John bumped the vase," containment would be established by the DEFAULT-CONTAINMENT test. Associative links are not always bi-directional (vase ---> water, but water -/-> vase), and we need separate mechanisms to trace links with different orientations. In our wine example, the COMMON-SOURCE test is responsible for establishing containment, since wine is known to be OUTPUTFROM bottles but bottles are not always assumed to hold wine.

Another inference made during the initial analysis finds the contents of the bottle mentioned in the first clause of the sentence. This expectation was set up by a demon called the CONTENTS-FINDER when the description of the open bottle, a SOURCE with unspecified output, was built. The demon causes a search of STM for an object which could be OUTPUTFROM a bottle, and the token for this particular bottle is then marked as being a SOURCE of that object. The description of this particular bottle as a SOURCE of wine is equivalent, in Object Primitive terms, to saying that the bottle is a wine bottle.
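As a rough illustration of the two containment tests just described (not OPUS's actual code; the data layout continues the invented encoding sketched earlier), the CONTAINER-FINDER slot-filling demon might look like this:

```python
# Invented sketch of the CONTAINER-FINDER demon: when a PTRANS from inside an
# unspecified container is built, search short term memory for a plausible
# container of the moved substance.

def default_containment(token, substance):
    rel = token.get("relational", {})
    return rel.get("link") == "INSIDE" and substance in rel.get("default-contents", [])

def common_source(token, substance, outputfrom):
    # Follow the OUTPUTFROM link from the substance to known SOURCE objects.
    return token.get("type") in outputfrom.get(substance, [])

def container_finder(ptrans, short_term_memory, outputfrom):
    if ptrans.get("from") is not None:          # container already specified
        return ptrans
    substance = ptrans["object"]
    for token in short_term_memory:
        if default_containment(token, substance) or common_source(token, substance, outputfrom):
            ptrans["from"] = ("INSIDE", token["name"])
            break
    return ptrans

stm = [{"name": "BOTTLE-1", "type": "WINE-BOTTLE",
        "relational": {"link": "INSIDE", "default-contents": []}}]
OUTPUTFROM = {"WINE": ["WINE-BOTTLE"]}
ptrans = {"act": "PTRANS", "object": "WINE", "from": None, "to": "?X"}
print(container_finder(ptrans, stm, OUTPUTFROM)["from"])   # ('INSIDE', 'BOTTLE-1')
```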
3.3 CAUSAL VERIFICATION

Once the requests trying to fill slots not filled during the initial analysis have been considered, the process which attempts to find causal connections between conceptualizations is activated. In this particular case, the analyzer has already indicated that the appropriate causal link is enablement. In general, however, the lexical information which caused the analyzer to build this causal link is only an indication that some enabling relation exists between the two actions (opening the bottle and pouring the wine). In fact, a long causal chain may be required to connect the two acts, with an enablement link being only one link in that chain. Furthermore, one cannot always rely on the text to indicate where causal relationships exist. The sentence "John opened the bottle and poured the wine." must ultimately be interpreted as virtually synonymous with (1) above.

The causal verification process first looks for a match between the conceptual representation of the enabled action (pouring the wine) and one of the potentially enabled acts derived earlier from the OP description of the opened bottle. In this example, a match is immediately found between the action of pouring from the bottle and the expected action generated from the CONNECTOR description of the open bottle (PTRANS FROM (INSIDE PART SELF)). Other Object Primitives may also lead to expectations for actions, as we shall see later. When a match is found, further conceptual checks are made on the enabled act to ensure that the action described "makes sense" with the particular objects currently filling the slots in that act's description. When the match is based on expectations derived from the CONNECTOR description of a container, the check is a "container/contents check," which attempts to ensure that the object found in the container may reasonably be expected to be found there. The sentence "John opened the bottle so he could pull out the elephant" is peculiar because no associations exist which would lead us to expect that elephants are ever found in bottles. The strangeness of this sentence can only be explained by the application of stereotypic knowledge about what we expect and don't expect to find inside a bottle. The container/contents check is similar to the test described above in connection with the CONTAINER-FINDER demon. That is, the bottle is checked by both the DEFAULT-CONTAINMENT test and the COMMON-SOURCE test for known links relating wine and bottles.

When this check succeeds, the enable link has been verified by matching an expected action, and by checking restrictions on related objects appearing in the slots of that action. The two CD acts that matched are then merged. The merging process accomplishes several things. First, it completes the linking of the causal chain between the events described in the sentence. Second, it causes the filling of empty slots appearing in either the enabled act or in the enabling act, wherever one left a slot unspecified and the other had that slot filled. These newly filled slots can propagate back along the causal chain, as we shall see in the example of the next section.

3.4 CAUSAL CHAIN CONSTRUCTION

In processing the sentence

(4) John turned on the faucet so he could drink.

the causal chain cannot be built by a direct match with an expected event. Additional inferences must be made to complete the chain between the actions described in the sentence. The representation produced by the conceptual analyzer for "John turned on the faucet," is

  *John* <=> *DO*
     | result
  *faucet* (SOURCE with OUTPUT = *water*)

As with the bottle in the previous example, the description of the faucet as an active SOURCE of water is based on information found beneath the prototype for faucet, describing the "on" state for that object. The principal expectation for SOURCE objects is that the person who "turned on" the SOURCE object wants to take control of (and ultimately make use of) whatever it is that is output from that SOURCE. In CD, this is expressed by a template for an ATRANS (abstract transfer) of the output object, in this case, water. An important side effect of the construction of this expectation is that a token for some water is created, which can be used by a slot-filling inference later. The representation for "he could drink" is partially described by an INGEST with an unspecified liquid in the OBJECT slot.
A special request to look for the missing liquid is set up by a demon on the act INGEST, similar to the one on the PTRANS in the previous example. This request finds the token for water placed in short term memory when the expectation that someone would ATRANS control of some water was generated.

  *faucet* (SOURCE with OUTPUT = *water*)
     | (possible enabled action)
     ?HUMO <=> ATRANS <- *water*

The causal chain completion that occurs for this sentence is somewhat more complicated than it was for the previous case. As we have seen, the only expectation set up by the SOURCE description of the faucet was for an ATRANS of water from the faucet. However, the action that is described here is an INGEST with instrumental PTRANS. When the chain connector fails to find a match between the ATRANS and either the INGEST or its instrumental PTRANS, inference procedures are called to generate any obvious intermediate states that might connect these two acts. The first inference rule that is applied is the resultative inference [8] that an ATRANS of an object TO someone results in a state where the object is possessed by (POSS-BY) that person. Once this state has been generated, it is matched against the INGEST in the same way the ATRANS was. When this match fails, no further forward inferences are generated, since possession of water can lead to a wide range of new actions, no one of which is strongly expected. The backward chaining inferencer is then called to generate any known preconditions for the act INGEST. The primary precondition (causative inference) for drinking is that the person doing the drinking has the liquid which he or she is about to drink. This inferred enabling state is then found to match the state (someone possesses water) inferred from the expected ATRANS. The match completes the causal chain, causing the merging of the matched concepts. In this case, the merging process causes the program to infer that it was probably John who took (ATRANSed) the water from the faucet, in addition to turning it on. Had the sentence read "John turned on the faucet so Mary could drink.", the program would infer that Mary took the water from the faucet.

  *faucet* (SOURCE with OUTPUT = *water*)
     | enable
  ?HUMO <=> ATRANS <- *water* TO ?HUMO
     | result
  *water* (POSS-BY ?HUMO)   -- match? yes ... infer ?HUMO = *John* --> *water* (POSS-BY *John*)
     | enable (backward inference)
  *John* <=> INGEST <- ?LIQUID
     | inst
  *John* <=> PTRANS <- ?LIQUID

One should note here that the additional inferences used to complete the causal chain were very basic. The primary connections came directly from object-specific expectations derived from the Object Primitive descriptions of the objects involved.

4. CONCLUSION

It is important to understand how OPUS differs from previous inference strategies in natural language processing. To emphasize the original contributions of OPUS we will compare it to Rieger's early work on inference and causal chain construction. Since Rieger's research is closely related to OPUS, a comparison of this system to Rieger's program will illustrate which aspects of OPUS are novel, and which aspects have been inherited.

There is a great deal of similarity between the types of inferences used in OPUS and those used by Rieger in his description of MEMORY [8]. The causative and resultative inferences used to complete the causal chain in our last example came directly from that work.
In addition, the demons used by OPUS are similar in flavor to the forward inferences and specification (slot-filling) inferences described by Rieger. Expectations are explicitly represented here as they were there, allowing them to be used in more than one way, as in the case where water is inferred to be the INGESTed liquid solely from its presence in a previous expectation.

There are, however, two ways in which OPUS departs significantly from the inference strategies of MEMORY: (1) on the level of computer implementation, there is a reorganization of process control in OPUS, and (2) on a theoretical level, OPUS exploits an additional representational system which allows inference generation to be more strongly directed and controlled.

In terms of implementation, OPUS integrates the processes of conceptual analysis and memory-based inference processing. By using demons, inferences can be made during conceptual analysis, as the conceptual memory representations are generated. This eliminates much of the need for an inference discrimination procedure acting on completely pre-analyzed conceptualizations produced by a separate program module. In MEMORY, the processes of conceptual analysis and inference generation were sharply modularized for reasons which were more pragmatic than theoretical. Enough is known about the interactions of analysis and inference at this time for us to approach the two as concurrent processes which share control and contribute to each other in a very dynamic manner; ideas from KRL [3] were instrumental in designing an integration of previously separate processing modules.

On a more theoretical level, the inference processes used for causal chain completion in OPUS are more highly constrained than was possible in Rieger's system. In MEMORY, all possible inferences were made for each new conceptualization which was input to the program. Initially, input consisted of concepts coming from the parser. MEMORY then attempted to make inferences from the conceptualizations which it itself had produced, repeating this cycle until no new inferences could be generated. Causal chains were connected when matches were found between inferred concepts and concepts already stored in its memory. However, the inference mechanisms used were in no way directed specifically to the task of making connections between concepts found in the input text. This led to a combinatorial explosion in the number of inferences made from each new input.

In OPUS, forward expectations are based on specific associations from the objects mentioned, and only when the objects in the text are described in a manner that indicates they are being used functionally. In addition, no more than one or two levels of forward or backward inferences are made before the procedure is exhausted; the system stops once a match is made or it runs out of highly probable inferences to make. Thus, there is no chance for the kinds of combinatorial explosion Rieger experienced. By strengthening the representation, and exploiting an integrated processing strategy, the combinatorial explosion problem can be eliminated. OPUS makes use of a well structured set of memory associations for objects, the Object Primitives, to encode information which can be used in a variety of Rieger's general inference classes.
Because this information is directly associated with memory representations for the objects, rather than being embodied in disconnected inference rules elsewhere, appropriate inferences for the objects mentioned can be found directly. By using this extended representational system, we can begin to examine the kinds of associative memory required to produce what appeared from Rieger's model to be the "tremendous amount of 'hidden' computation" necessary for the processing of any natural language text.

REFERENCES

[1] Birnbaum, L., and Selfridge, M. (1978). On Conceptual Analysis. (unpublished) Yale University, New Haven, CT.

[2] Bobrow, D. G., Kaplan, R. M., Kay, M., Norman, D. A., Thompson, H., and Winograd, T. (1977). GUS, a frame driven dialog system. Artificial Intelligence, Vol. 8, No. 1.

[3] Bobrow, D. G., and Winograd, T. (1977). An overview of KRL, a knowledge representation language. Cognitive Science 1, no. 1.

[4] Charniak, E. (1972). Toward a model of children's story comprehension. AI TR-266, Artificial Intelligence Laboratory, MIT, Cambridge, MA.

[5] Lehnert, W. G. (1978). Representing physical objects in memory. Technical Report #111, Dept. of Computer Science, Yale University, New Haven, CT.

[6] Minsky, M. (1975). A framework for representing knowledge. In Winston, P. H., ed., The Psychology of Computer Vision, McGraw-Hill, New York, NY.

[7] Norman, D. A., Rumelhart, D. E., and the LNR Research Group (1975). Explorations in Cognition. W. H. Freeman and Co., San Francisco.

[8] Rieger, C. (1975). Conceptual memory. In R. C. Schank, ed., Conceptual Information Processing. North Holland, Amsterdam.

[9] Riesbeck, C. and Schank, R. C. (1976). Comprehension by computer: expectation-based analysis of sentences in context. Technical Report #78, Dept. of Computer Science, Yale University, New Haven, CT.

[10] Schank, R. C. (1975). Conceptual Dependency Theory. In Schank, R. C. (ed.), Conceptual Information Processing. North Holland, Amsterdam.

[11] Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Press, Hillsdale, NJ.
READING WITH A PURPOSE

Michael Lebowitz
Department of Computer Science, Yale University

* This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by the Office of Naval Research under contract N00014-75-C-1111.

1. INTRODUCTION

A newspaper story about terrorism, war, politics or football is not likely to be read in the same way as a gothic novel, college catalog or physics textbook. Similarly, the process used to understand a casual conversation is unlikely to be the same as the process of understanding a biology lecture or TV situation comedy. One of the primary differences amongst these various types of comprehension is that the reader or listener will have different goals in each case. The reasons a person has for reading, or the goals he has when engaging in conversation, will have a strong effect on what he pays attention to, how deeply the input is processed, and what information is incorporated into memory.

The computer model of understanding described here addresses the problem of using a reader's purpose to assist in natural language understanding. This program, the Integrated Partial Parser (IPP), is designed to model the way people read newspaper stories in a robust, comprehensive manner. IPP has a set of interests, much as a human reader does. At the moment it concentrates on stories about international violence and terrorism.

IPP contrasts sharply with many other techniques which have been used in parsing. Most models of language processing have had no purpose in reading. They pursue all inputs with the same diligence and create the same type of representation for all stories. The key difference in IPP is that it maps lexical input into as high a level representation as possible, thereby performing the complete understanding process. Other approaches have invariably first tried to create a preliminary representation, often a strictly syntactic parse tree, in preparation for real understanding. Since high-level, semantic representations are ultimately necessary for understanding, there is no obvious need for creating a preliminary syntactic representation, which can be a very difficult task. The isolation of the lexical level processing from more complete understanding processes makes it very difficult for high-level predictions to influence low-level processing, which is crucial in IPP.

One very popular technique for creating a low-level representation of sentences has been the Augmented Transition Network (ATN). Parsers of this sort have been discussed by Woods [11] and Kaplan [3]. An ATN-like parser was developed by Winograd [10]. Most ATN parsers have dealt primarily with syntax, occasionally checking a few simple semantic properties of words. A more recent parser which does an isolated syntactic parse was created by Marcus [4]. The important thing to note about all of these parsers is that they view syntactic parsing as a process to be done prior to real understanding. Even though systems of this sort at times make use of semantic information, they are driven by syntax. Their goal of developing a syntactic parse tree is not an explicit part of the purpose of human understanding.

The type of understanding done by IPP is in some sense a compromise between the very detailed understanding of SAM [1] and PAM [9], both of which operated in conjunction with ELI, Riesbeck's parser [5], and the skimming, highly top-down style of FRUMP [2].
ELI was a semantically driven parser which mapped English language sentences into the Conceptual Dependency [6] representations of their meanings. It made extensive use of the semantic properties of the words being processed, but interacted only slightly with the rest of the understanding processes it was a part of. It would pass off a completed Conceptual Dependency representation of each sentence to SAM or PAM, which would try to incorporate it into an overall story representation. Both these programs attempted to understand each sentence fully, SAM in terms of scripts, PAM in terms of plans and goals, before going on to the next sentence. (In [8] Schank and Abelson describe scripts, plans and goals.) SAM and PAM model the way people might read a story if they were expecting a detailed test on it, or the way a textbook might be read. Each program's purpose was to get out of a story every piece of information possible. They treated each piece of every story as being equally important and requiring total understanding. Both of these programs are relatively fragile, requiring complex dictionary entries for every word they might encounter, as well as extensive knowledge of the appropriate scripts and plans.

FRUMP, in contrast to SAM and PAM, is a robust system which attempts to extract the amount of information from a newspaper story which a person gets when he skims rapidly. It does this by selecting a script to represent the story and then trying to fill in the various slots which are important to understand the story. Its purpose is simply to obtain enough information from a story to produce a meaningful summary. FRUMP is strongly top-down, and worries about incoming information from the story only insofar as it helps fill in the details of the script which it selected. So while FRUMP is robust, simply skipping over words it doesn't know, it does miss interesting sections of stories which are not explained by its initial selection of a script.

IPP attempts to model the way people normally read a newspaper story. Unlike SAM and PAM, it does not care if it gets every last piece of information out of a story. Dull, mundane information is gladly ignored. But, in contrast with FRUMP, it does not want to miss interesting parts of stories simply because they do not mesh with initial expectations. It tries to create a representation which captures the important aspects of each story, but also tries to minimize extensive, unnecessary processing which does not contribute to the understanding of the story. Thus IPP's purpose is to decide what parts of a story, if any, are interesting (in IPP's case, that means related to terrorism), and incorporate the appropriate information into its memory. The concepts used to determine what is interesting are an extension of ideas presented by Schank [7].

2. HOW IPP WORKS

The ultimate purpose of reading a newspaper story is to incorporate new information into memory. In order to do this, a number of different kinds of knowledge are needed. The understander must know the meanings of words, linguistic rules about how words combine into sentences, the conventions used in writing newspaper stories, and, crucially, have extensive knowledge about the "real world." It is impossible to properly understand a story without applying already existing knowledge about the functioning of the world. This means the use of long-term memory cannot be fruitfully separated from other aspects of the natural language understanding problem.
The management of all this information by an understander is a critical problem in comprehension, since the application of all potentially relevant knowledge all the time would seriously degrade the understanding process, possibly to the point of halting it altogether. In our model of understanding, the role played by the interests of the understander is to allow detailed processing to occur only on the parts of the story which are important to overall understanding, thereby conserving processing resources.

Central to any understanding system is the type of knowledge structure used to represent stories. At the present time, IPP represents stories in terms of scripts similar to, although simpler than, those used by SAM and FRUMP. Most of the common events in IPP's area of interest, terrorism, such as hijackings, kidnappings, and ambushes, are reasonably stereotyped, although not necessarily with all the temporal sequencing present in the scripts SAM uses. IPP also represents some events directly in Conceptual Dependency. The representations in IPP consist of two types of structures. There are the event structures themselves, generally scripts such as $KIDNAP and $AMBUSH, which form the backbone of the story representations, and tokens which fill the roles in the event structures. These tokens are basically the Picture Producers of [6], and represent the concepts underlying words such as "airliner," "machine-gun" and "kidnapper." The final story representation can also include links between event structures indicating causal, temporal and script-scene relationships.

Due to IPP's limited repertoire of structures with which to represent events, it is currently unable to fully understand some stories which make sense only in terms of goals and plans, or other higher level representations. However, the understanding techniques used in IPP should be applicable to stories which require the use of such knowledge structures. This is a topic of current research. It is worth noting that the form of a story's representation may depend on the purpose behind its being read. If the reader is only mildly interested in the subject of the story, a scriptal representation may well be adequate. On the other hand, for a story of great interest to the reader, additional effort may be expended to allow the goals and plans of the actors in the story to be worked out. This is generally more complex than simply representing a story in terms of stereotypical knowledge, and will only be attempted in cases of great interest.

In order to achieve its purpose, IPP does extensive "top-down" processing. That is, it makes predictions about what it is likely to see. These predictions range from low-level, syntactic predictions ("the next noun phrase will be the person kidnapped," for instance) to quite high-level, global predictions ("expect to see demands made by the terrorists"). Significantly, the program only makes predictions about things it would like to know. It doesn't mind skipping over unimportant parts of the text.

The top-down predictions made by IPP are implemented in terms of requests, similar to those used by Riesbeck [5], which are basically just test-action pairs. While such an implementation in theory allows arbitrary computations to be performed, the actions used in IPP are in fact quite limited. IPP requests can build an event structure, link event structures together, use a token to fill a role in an event structure, activate new requests or de-activate other active requests.
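The paper characterizes requests only abstractly. As a rough sketch of the idea (formats and names invented, not IPP's actual implementation), a request might be held as a test predicate paired with one of the small set of structure-building actions just listed:

```python
# Invented sketch of an IPP-style request: a test-action pair whose action is
# limited to a few structure-building operations.

class Request:
    def __init__(self, test, action):
        self.test = test        # predicate over the current item and story state
        self.action = action    # one of the limited structure-building actions
        self.active = True

def run_requests(requests, item, story):
    for req in [r for r in requests if r.active]:
        if req.test(item, story):
            req.action(item, story, requests)

# Example: after "GUNMEN", predict a verb conceptually equivalent to "shoot"
# and, when it appears, build a $SHOOT event with the gunmen token as actor.
def fill_shoot_event(item, story, requests):
    story["events"].append({"script": "$SHOOT", "actor": "GUNMEN-1", "word": item["word"]})

shooting_will_occur = Request(
    test=lambda item, story: "shoot" in item.get("concept", ""),
    action=fill_shoot_event,
)

story = {"events": []}
run_requests([shooting_will_occur], {"word": "FIRING", "concept": "shoot"}, story)
print(story["events"])   # [{'script': '$SHOOT', 'actor': 'GUNMEN-1', 'word': 'FIRING'}]
```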
The tests in IPP requests are also limited in nature. They can look for certain types of events or tokens, check for words with a specified property in their dictionary entry, or even check for specific lexical items. The tests for lexical items are quite important in keeping IPP's processing efficient. One advantage is that very specific top-down predictions will often allow an otherwise very complex word disambiguation process to be bypassed. For example, in a story about a hijacking, IPP expects the word "carrying" to indicate that the passengers of the hijacked vehicle are to follow. So it never has to consider in any detail the meaning of "carrying." Many function words really have no meaning by themselves, and the type of predictive processing used by IPP is crucial in handling them efficiently.

Despite its top-down orientation, IPP does not ignore unexpected input. Rather, if the new information is interesting in itself, the program will concentrate on it, making new predictions in addition to, or instead of, the original ones. The proper integration of top-down and bottom-up processing allows the program to be efficient, and yet not miss interesting, unexpected information.

The bottom-up processing of IPP is based around a classification of words that is done strictly on the basis of processing considerations. IPP is interested in the traditional syntactic classifications only when they help determine how words should be processed. IPP's criteria for classification involve the type of data structures words build, and when they should be processed. Words can build either of the main data structures used in IPP, events and tokens. The words building events are usually verbs, but many syntactic nouns, such as "kidnapping," "riot," and "demonstration," also indicate events, and are handled in just the same way as traditional verbs. Some words, such as most adjectives and adverbs, do not build structures but rather modify structures built by other words. These words are handled according to the type of structure they modify.

The second criterion for classifying words - when they should be processed - is crucial to IPP's operation. In order to model a rapid, normally paced reader, IPP attempts to avoid doing any processing which will not add to its overall understanding of a story. To do this, it classifies words into three groups - words which must be fully processed immediately, words which should be saved in short-term memory, and then processed later, if necessary, and words which should be skipped entirely.

Words which must be processed immediately include interesting words building either event structures or tokens. "Gunmen," "kidnapped" and "exploded" are typical examples. These words give us the overall framework of a story, indicate how much effort should be devoted to further analysis, and, most importantly, generate the predictions which allow later processing to proceed efficiently.

The save-and-process-later words are those which may become significant later, but are not obviously important when they are read. This class is quite substantial, including many dull nouns and nearly all adjectives and adverbs. In a noun phrase such as "numerous Italian gunmen," there is no point in processing to any depth "numerous" or "Italian" until we know the word they modify is important enough to be included in the final representation. In the cases where further processing is necessary, IPP has the proper information to easily incorporate the saved words into the story representation, and in the many cases where the word is not important, no effort above saving the word is required.
In the cases where further processing is necessary, IPP has the proper information to easily incorporate the saved words into the story representation, and in the many cases where the word is not important, no effort above saving the word is required. The processing strategy for these words is a key to modeling normal reading.

The final class of words comprises those IPP skips altogether. This class includes very uninteresting words which neither contribute processing clues nor add to the story representation. Many function words, adjectives and verbs irrelevant to the domain at hand, and most pronouns fall into this category. These words can still be significant in cases where they are predicted, but otherwise they are ignored by IPP and take no processing effort.

In addition to the processing techniques mentioned so far, IPP makes use of several very pragmatic heuristics. These are particularly important in processing noun groups properly. An example of the type of heuristic used is IPP's assumption that the first actor in a story tends to be important, and is worth extra processing effort. Other heuristics can be seen in the example in section 4. IPP's basic strategy is to make reasonable guesses about the appropriate representation as quickly as possible, facilitating later processing, and to fix things later if its guesses prove to be wrong.

3. A DETAILED EXAMPLE

In order to illustrate how IPP operates, and how its purpose affects its processing, an annotated run of IPP on a typical story, one taken from the Boston Globe, is shown below. The text between the rows of stars has been added to explain the operation of IPP. Items beginning with a dollar sign, such as $TERRORISM, indicate scripts used by IPP to represent events.

[PHOTO: Initiated Sun 24-Jun-79 3:36PM]
@RUN IPP
*(PARSE $1)
Input: $1 (3 I~ 79) IRELAND
(GUNMEN FIRING FROM AMBUSH SERIOUSLY WOUNDED AN 8-YEAR-OLD GIRL AS SHE WAS BEING TAKEN TO SCHOOL YESTERDAY AT STEWARTSTOWN COUNTY TYRONNE)

Processing:
GUNMEN : Interesting token - GUNMEN
Predictions - SHOOTING-WILL-OCCUR ROBBERY-SCRIPT TERRORISM-SCRIPT HIJACKING-SCRIPT

*********************************************************
GUNMEN is marked in the dictionary as inherently interesting. In humans this presumably occurs after a reader has noted that stories involving gunmen tend to be interesting. Since it is interesting, IPP fully processes GUNMEN, knowing that it is important to its purpose of extracting the significant content of the story. It builds a token to represent the GUNMEN and makes several predictions to facilitate later processing. There is a strong possibility that some verb conceptually equivalent to "shoot" will appear. There is also a set of scripts, including $ROBBERY, $TERRORISM and $HIJACK, which are likely to appear, so IPP creates predictions looking for clues indicating that one of these scripts should be activated and used to represent the story.
*********************************************************

FIRING : Word satisfies prediction
Prediction confirmed - SHOOTING-WILL-OCCUR
Instantiated $SHOOT script
Predictions - $SHOOT-ROLE-FINDER REASON-FOR-SHOOTING $SHOOT-SCENES

*********************************************************
FIRING satisfies the prediction for a "shoot" verb. Notice that the prediction immediately disambiguates FIRING. Other senses of the word, such as "terminate employment," are never considered. Once IPP has confirmed an event, it builds a structure to represent it, in this case the $SHOOT script, and the token for GUNMEN is filled in as the actor.
Predictions are made trying to find the unknown roles of the script - the VICTIM in particular - the reason for the shooting, and any scenes of $SHOOT which might be found.
*********************************************************

Instantiated $ATTACK-PERSON script
Predictions - $ATTACK-PERSON-ROLE-FINDER $ATTACK-PERSON-SCENES

*********************************************************
IPP does not consider the $SHOOT script to be a total explanation of a shooting event. It requires a representation which indicates the purpose of the various actors. In the absence of any other information, IPP assumes people who shoot are deliberately attacking someone. So the $ATTACK-PERSON script is inferred, and $SHOOT attached to it as a scene. The $ATTACK-PERSON representation allows IPP to make inferences which are relevant to any case of a person being attacked, not just shootings. IPP is still not able to instantiate any of the high-level scripts predicted by GUNMEN, since the $ATTACK-PERSON script is associated with several of them.
*********************************************************

FROM : Function word
Predictions - FILL-FROM-SLOT

*********************************************************
FROM in a context such as this normally indicates that the location from which the attack was made is to follow, so IPP makes a prediction to that effect. However, since a word building a token does not follow, the prediction is deactivated. The fact that AMBUSH is syntactically a noun is not relevant, since IPP's prediction looks for a word which identifies a place.
*********************************************************

AMBUSH : Scene word
Predictions - $AMBUSH-ROLE-FINDER $AMBUSH-SCENES
Prediction confirmed - TERRORISM-SCRIPT
Instantiated $TERRORISM script
Predictions - TERRORIST-DEMANDS $TERRORISM-ROLE-FINDER $TERRORISM-SCENES COUNTER-MEASURES

*********************************************************
IPP knows the word AMBUSH to indicate an instance of the $AMBUSH script, and that $AMBUSH can be a scene of $TERRORISM (i.e. it is an activity which can be construed as a terrorist act). This causes the prediction made by GUNMEN that $TERRORISM was a possible script to be triggered. Even if AMBUSH had other meanings, or could be associated with other higher-level scripts, the prediction would enable quick, accurate identification and incorporation of the word's meaning into the story representation. IPP's purpose of associating the shooting with a high-level knowledge structure which helps to explain it has been achieved. At this point in the processing an instance of $TERRORISM is constructed to serve as the top-level representation of the story. The $AMBUSH and $ATTACK-PERSON scripts are attached as scenes of $TERRORISM.
*********************************************************

SERIOUSLY : Skip and save
WOUNDED : Word satisfies prediction
Prediction confirmed - $WOUND-SCENE
Predictions - $WOUND-ROLE-FINDER $WOUND-SCENES

*********************************************************
$WOUND is a known scene of $ATTACK-PERSON, representing a common outcome of an attack. It is instantiated and attached to $ATTACK-PERSON. IPP infers that the actor of $WOUND is probably the same as for $ATTACK-PERSON, i.e. the GUNMEN.
*********************************************************

AN : Skip and save
8-YEAR-OLD : Skip and save
GIRL : Normal token - GIRL
Prediction confirmed - $WOUND-ROLE-FINDER-VICTIM

*********************************************************
GIRL builds a token which fills the VICTIM role of the $WOUND script.
Since IPP has inferred that the VICTIM of the $ATTACK-PERSON and $SHOOT scripts is the same as the VICTIM of $WOUND, it also fills in those roles. Identifying these roles is integral to IPP's purpose of understanding the story, since an attack on a person can only be properly understood if the victim is known. As this person is important to the understanding of the story, IPP wants to acquire as much information as possible about her. Therefore, it looks back at the modifiers temporarily saved in short-term memory, 8-YEAR-OLD in this case, and uses them to modify the token built for GIRL. The age of the girl is noted as eight years. This information could easily be crucial to appreciating the interesting nature of the story.
*********************************************************

AS : Skip
SHE : Skip
WAS : Skip and save
BEING : Dull verb - skipped
TAKEN : Skip
TO : Function word
SCHOOL : Normal token - SCHOOL
YESTERDAY : Normal token - YESTERDAY

*********************************************************
Nothing in this phrase is either inherently interesting or fulfills expectations made earlier in the processing of the story. So it is all processed very superficially, adding nothing to the final representation. It is important that IPP makes no attempt to disambiguate words such as TAKEN, an extremely complex process, since it knows none of the possible meanings will add significantly to its understanding.
*********************************************************

AT : Function word
STEWARTSTOWN : Skip and save
COUNTY : Skip and save
TYRONNE : Normal token - TYRONNE
Prediction confirmed - $TERRORISM-ROLE-FINDER-PLACE

*********************************************************
STEWARTSTOWN COUNTY TYRONNE satisfies the prediction for the place where the terrorism took place. IPP has inferred that all the scenes of the event took place at the same location. IPP expends effort in identifying this role, as location is crucial to the understanding of most stories. It is also important in the organization of memories about stories. An incidence of terrorism in Northern Ireland is understood differently from one in New York or Geneva.
*********************************************************

Story Representation:
** MAIN EVENT **
SCRIPT $TERRORISM
  ACTOR   GUNMEN
  PLACE   STEWARTSTOWN COUNTY TYRONNE
  TIME    YESTERDAY
  SCENES
    SCRIPT $AMBUSH
      ACTOR   GUNMEN
    SCRIPT $ATTACK-PERSON
      ACTOR   GUNMEN
      VICTIM  8 YEAR OLD GIRL
      SCENES
        SCRIPT $SHOOT
          ACTOR   GUNMEN
          VICTIM  8 YEAR OLD GIRL
        SCRIPT $WOUND
          ACTOR   GUNMEN
          VICTIM  8 YEAR OLD GIRL
          EXTENT  GREATERTHAN-*NORM*

*********************************************************
IPP's final representation indicates that it has fulfilled its purpose in reading the story. It has extracted roughly the same information as a person reading the story quickly. IPP has recognized an instance of terrorism consisting of an ambush in which an eight-year-old girl was wounded. That seems to be about all a person would normally remember from such a story.
*********************************************************

[PHOTO: Terminated Sun 24-Jun-79 3:38PM]

As it processes a story such as this one, IPP keeps track of how interesting it feels the story is. Novelty and relevance tend to increase interestingness, while redundancy and irrelevance decrease it. For example, in the story shown above, the fact that the victim of the shooting was an 8-year-old increases the interest of the story, and the incident taking place in Northern Ireland, as opposed to a more unusual site for terrorism, decreases the interest.
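A highly simplified sketch of how such an interest score might be maintained follows; the factor names, weights, and threshold are invented for the example and are not taken from IPP.

```python
# Illustrative interest bookkeeping of the kind described above.

def update_interest(score, factors):
    if factors.get("novel"):      score += 2   # e.g. the victim is an 8-year-old
    if factors.get("relevant"):   score += 1   # bears directly on the main event
    if factors.get("redundant"):  score -= 1
    if factors.get("irrelevant"): score -= 1   # e.g. an unsurprising setting
    return score

def worth_continuing(score, threshold=0):
    # If interest falls far enough, stop and look for a better story.
    return score > threshold
```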
The story's interest is used to determine how much effort should be expended in trying to fill in more details of the story. If the level of interestingness decreases far enough, the program can stop processing the story and look for a more interesting one, in the same way a person does when reading through a newspaper.

4. ANOTHER EXAMPLE

The following example further illustrates the capabilities of IPP. In this example only IPP's final story representation is shown. This story was also taken from the Boston Globe.

[PHOTO: Initiated Wed 27-Jun-79 1:00PM]
@RUN IPP
*(PARSE S2)
Input: S2 (6 3 79) GUATEMALA
(THE SON OF FORMER PRESIDENT EUGENIO KJELL LAUGERUD WAS SHOT DEAD BY UNIDENTIFIED ASSAILANTS LAST WEEK AND A BOMB EXPLODED AT THE HOME OF A GOVERNMENT OFFICIAL POLICE SAID)

Story Representation:
** MAIN EVENT **
SCRIPT $TERRORISM
  ACTOR   UNKNOWN ASSAILANTS
  SCENES
    SCRIPT $ATTACK-PERSON
      ACTOR   UNKNOWN ASSAILANTS
      VICTIM  SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
      SCENES
        SCRIPT $SHOOT
          ACTOR   UNKNOWN ASSAILANTS
          VICTIM  SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
        SCRIPT $KILL
          ACTOR   UNKNOWN ASSAILANTS
          VICTIM  SON OF PREVIOUS PRESIDENT EUGENIO KJELL LAUGERUD
    SCRIPT $ATTACK-PLACE
      ACTOR   UNKNOWN ASSAILANTS
      PLACE   HOME OF GOVERNMENT OFFICIAL
      SCENES
        SCRIPT $BOMB
          ACTOR   UNKNOWN ASSAILANTS
          PLACE   HOME OF GOVERNMENT OFFICIAL

[PHOTO: Terminated - Wed 27-Jun-79 1:09PM]

This example makes several interesting points about the way IPP operates. Notice that IPP has jumped to a conclusion about the story which, while plausible, could easily be wrong. It assumes that the actor of the $BOMB and $ATTACK-PLACE scripts is the same as the actor of the $TERRORISM script, which was in turn inferred from the actor of the shooting incident. This is plausible, as normally news stories are about a coherent set of events with logical relations amongst them. So it is reasonable for a story to be about a series of related acts of terrorism, committed by the same person or group, and that is what IPP assumes here, even though it may not be correct. But this kind of inference is exactly the kind which IPP must make in order to do efficient top-down processing, despite the possibility of errors.

The other interesting point about this example is the way some of IPP's quite pragmatic heuristics for processing give positive results. For instance, as mentioned earlier, the first actor mentioned has a strong tendency to be important to the understanding of a story. In this story that means that the modifying prepositional phrase "of former President Eugenio Kjell Laugerud" is analyzed and attached to the token built for "son," usually not an interesting word. Heuristics of this sort give IPP its power and robustness, rather than any single rule about language understanding.

5. CONCLUSION

IPP has been implemented on a DECsystem 20/50 at Yale. It currently has a vocabulary of more than 1~00 words, which is being continually increased in an attempt to make the program an expert understander of newspaper stories about terrorism. It is also planned to add information about higher-level knowledge structures such as goals and plans and to expand IPP's domain of interest. To date, IPP has successfully processed over 50 stories taken directly from various newspapers, many sight unseen.

The difference between the powers of IPP and the syntactically driven parsers mentioned earlier can best be seen in the kinds of sentences they handle. Syntax-based parsers generally deal with relatively simple, syntactically well-formed sentences.
IPP handles such sentences, but also accurately processes stories taken directly from newspapers, which often involve extremely convoluted syntax and in many cases are not grammatical at all. Sentences of this type are difficult, if not impossible, for parsers relying on syntax. IPP is able to process news stories quickly, on the order of 2 CPU seconds, and when done, it has achieved a complete understanding of the story, not just a syntactic parse.

As shown in the examples above, interest can provide a purpose for reading newspaper stories. In other situations, other factors might provide the purpose. But the purpose is never simply to create a representation - especially a representation with no semantic content, such as a syntax tree. This is not to say syntax is not important; obviously in many circumstances it provides crucial information, but it should not drive the understanding process. Preliminary representations are needed only if they assist in the reader's ultimate purpose - building an appropriate, high-level representation which can be incorporated with already existing knowledge. The results achieved by IPP indicate that parsing directly into high-level knowledge structures is possible, and in many situations may well be more practical than first doing a low-level parse. Its integrated approach allows IPP to make use of all the various kinds of knowledge which people use when understanding a story.

References

[1] Cullingford, R. (1978) Script application: Computer understanding of newspaper stories. Research Report 116, Department of Computer Science, Yale University.
[2] DeJong, G. F. (1979) Skimming stories in real time: An experiment in integrated understanding. Research Report 158, Department of Computer Science, Yale University.
[3] Kaplan, R. M. (1975) On process models for sentence analysis. In D. A. Norman and D. E. Rumelhart, eds., Explorations in Cognition. W. H. Freeman and Company, San Francisco.
[4] Marcus, M. P. (1979) A Theory of Syntactic Recognition for Natural Language. In P. H. Winston and R. H. Brown (eds.), Artificial Intelligence: An MIT Perspective. MIT Press, Cambridge, Massachusetts.
[5] Riesbeck, C. K. (1975) Conceptual analysis. In R. C. Schank (ed.), Conceptual Information Processing. North Holland, Amsterdam.
[6] Schank, R. C. (1975) Conceptual Information Processing. North Holland, Amsterdam.
[7] Schank, R. C. (1978) Interestingness: Controlling inferences. Research Report 145, Department of Computer Science, Yale University.
[8] Schank, R. C. and Abelson, R. P. (1977) Scripts, Plans, Goals and Understanding. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
[9] Wilensky, R. (1978) Understanding goal-based stories. Research Report 140, Department of Computer Science, Yale University.
[10] Winograd, T. (1972) Understanding Natural Language. Academic Press, New York.
[11] Woods, W. A. (1970) Transition network grammars for natural language analysis. Communications of the ACM, Vol. 13, p. 591.
DISCOURSE: CODES AND CLUES IN CONTEXTS

Jane J. Robinson
Artificial Intelligence Center
SRI International, Menlo Park, California

Some of the meaning of a discourse is encoded in its linguistic forms. This is the truth-conditional meaning of the propositions those forms express and entail. Some of the meaning is suggested (or 'implicated', as Grice would say) by the fact that the encoder expresses just those propositions in just those linguistic forms in just the given contexts [2]. The first kind of meaning is usually labeled 'semantics'; it is decoded. The second is usually labeled 'pragmatics'; it is inferred from clues provided by code and context. Both kinds of meaning are related to syntax in ways that we are coming to understand better as work continues in analyzing language and constructing processing models for communication. We are also coming to a better understanding of the relationship between the perceptual and conceptual structures that organize human experience and make it encodable in words. (Cf. [1], [4].)

I see this progress in understanding not as the result of a revolution in the paradigm of computational linguistics in which one approach to natural language processing is abandoned for another, but rather as an expansion of our ideas of what both language and computers can do. We have been able to incorporate what we learned earlier in the game in a broader approach to more significant tasks. Certainly within the last twenty years, the discipline of computational linguistics has expanded its view of its object of concern. Twenty years ago, that view was focused on a central aspect of language, language as code [3]. The paradigmatic task of our discipline then was to transform a message encoded in one language into the same message encoded in another, using dictionaries and syntactic rules. (Originally, the task was not to translate but to transform the input as an aid to human translators.) Coincidentally, those were the days of batch processing, and the typical inputs were scientific texts -- written monologues that existed as completed, static discourses before processing began.

Then came interactive processing, bringing with it the opportunity for what is now called 'dialogue' between user and machine. At the same time, and perhaps not wholly coincidentally, another aspect of language became salient for computational linguistics -- the aspect of language as behavior, with two or more people using the code to engage in purposeful communication. The inputs now include discourse in which the amount of code to be interpreted continues to grow as participants in dialogue interact, and their interactions become part of the contexts for on-going, dynamic interpretation. The paradigmatic task now is to simulate in non-trivial ways the procedures by which people reach conclusions about what is in each other's minds. Performing this task still requires processing language as code, but it also requires analyzing the code in a context, to identify clues to the pragmatic meaning of its use.
One way of representing this enlarged task is to conceive of it as requiring three concentric kinds of knowledge:

• intralinguistic knowledge, or knowledge of the code
• interlinguistic knowledge, or knowledge of linguistic behavior
• extralinguistic knowledge, or knowledge of the perceptual and conceptual structures that language users have, the things they attend to and the goals they pursue

The papers we will hear today range over techniques for identifying, representing and applying the various kinds of knowledge for the processing of discourse. McKeown exploits intralinguistic knowledge for extralinguistic purposes. When the goal of a request for new information is not uniquely identifiable, she proposes to use syntactic transformations of the code of the request to clarify its ambiguities and ensure that its goal is subsequently understood. Shanon is also concerned with appropriateness of answers, and reports an investigation of the extralinguistic conceptual structuring of space that affects the pragmatic rules people follow in furnishing appropriate information in response to questions about where things are. Sidner identifies various kinds of intralinguistic clues a discourse provides that indicate what entities occupy the focus of attention of discourse participants as discourse proceeds, and the use of focusing (an extralinguistic process) to control the inferences made in identifying the referents of pronominal anaphora. Levin and Hutchinson analyze the clues in reports of spatial reasoning that lead to identification of the point of view of the speaker towards the entities talked about. Like Sidner, they use syntactic clues, and like Shanon, they seek to identify the conceptual structures that underlie behavior.

Code and behavior interact with intentions in ways that are still mysterious but clearly important. The last two papers stress the fact that using language is intentional behavior and that understanding the purposes a discourse serves is a necessary part of understanding the discourse itself. Mann claims that dialogues are comprehensible only because participants provide clues to each other that make available knowledge of the goals being pursued. Allen and Perrault note that intention pervades all three layers of discourse, pointing out that, in order to be successful, a speaker must intend that the hearer recognize his intentions and infer his goals, but that these intentions are not signaled in any simple way in the code.

In all of these papers, language is viewed as providing both codes for and clues to meaning, so that when it is used in discourse, its forms can be decoded and their import can be grasped. As language users, we know that we can know, to a surprising extent, what someone else means for us to know. We also sometimes know that we don't know what someone else means for us to know. As computational linguists, we are trying to figure out precisely how we know such things.

REFERENCES

[1] Chafe, W. L. 1977. Creativity in Verbalization and Its Implications for the Nature of Stored Knowledge. In: Freedle, R. O. (ed.), Discourse Production and Comprehension, Vol. 1, pp. 41-55. Ablex: Norwood, New Jersey.
[2] Grice, P. H. 1975. Logic and Conversation. In: Davidson, D. and Harman, G. (eds.), The Logic of Grammar. Dickenson: Encino, California.
[3] Halliday, M. A. K. 1977. Language as Code and Language as Behaviour. In: Lamb, S. and Makkai, A. (eds.), Semiotics of Culture and Language.
[4] Miller, G. A. and Johnson-Laird, P. N. 1976. Language and Perception. Harvard University Press: Cambridge, Massachusetts.
Paraphrasing Using Given and New Information in a Question-Answer System

Kathleen R. McKeown
Department of Computer and Information Science
The Moore School
University of Pennsylvania, Philadelphia, Pa. 19104

ABSTRACT: The design and implementation of a paraphrase component for a natural language question-answer system (CO-OP) is presented. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar used by the paraphraser to generate questions.

I. INTRODUCTION

In a natural language interface to a database query system, a paraphraser can be used to ensure that the system has correctly understood the user. Such a paraphraser has been developed as part of the CO-OP system [KAPLAN 79]. In CO-OP, an internal representation of the user's question is passed to the paraphraser, which then generates a new version of the question for the user. Upon seeing the paraphrase, the user has the option of rephrasing her/his question before the system attempts to answer it. Thus, if the question was not interpreted correctly, the error can be caught before a possibly lengthy search of the database is initiated. Furthermore, the user is assured that the answer s/he receives is an answer to the question asked and not to a deviant version of it.

The idea of using a paraphraser in the above way is not new. To date, other systems have used canned templates to form paraphrases, filling in empty slots in the pattern with information from the user's question [WALTZ 78; CODD 78]. In CO-OP, a transformational grammar is used to generate the paraphrase from an internal representation of the question. Moreover, the CO-OP paraphraser generates a question that differs in a meaningful way from the original question. It makes use of a distinction between given and new information to indicate to the user the existential presuppositions made in her/his question.

II. OVERVIEW OF THE CO-OP SYSTEM

The CO-OP system is aimed at infrequent users of database query systems. These casual users are likely to be unfamiliar with computer systems and unwilling to invest the time needed to learn a formal query language. Being able to converse naturally in English enables such persons to tap the information available in a database. In order to allow the question-answer process to proceed naturally, CO-OP follows some of the "co-operative principles" of conversation [GRICE 75]. In particular, the system attempts to find meaningful answers to failed questions by addressing any incorrect assumptions the questioner may have made in her/his question. When the direct response to a question would be simply "no" or "none", CO-OP gives a more informative response by correcting the questioner's mistaken assumptions.

The false assumptions that CO-OP corrects are the existential presuppositions of the question.* Since these presuppositions can be computed from the surface structure of the question, a large store of semantic knowledge for inferencing purposes is not needed. In fact, a lexicon and database schema are the only items which contain domain-specific information. Consequently, the CO-OP system is a portable one; a change of database requires that only these two knowledge sources be modified.

* For example, in the question "Which users work on projects sponsored by NASA?", the speaker makes the existential presupposition that there are projects sponsored by NASA.
III. THE CO-OP PARAPHRASER

CO-OP's paraphraser provides the only means of error-checking for the casual user. If the user is familiar with the system, s/he can ask to have the intermediate results printed, in which case the parser's output and the formal database query will be shown. The naive user, however, is unlikely to understand these results. It is for this reason that the paraphraser was designed to respond in English.

The use of English to paraphrase queries creates several problems. The first is that natural language is inherently ambiguous. A paraphrase must clarify the system's interpretation of possibly ambiguous phrases in the question without introducing additional ambiguity. One particular type of ambiguity that a paraphraser must address is caused by the linear nature of sentences. A modifying relative clause, for example, frequently cannot be placed directly after the noun phrase it modifies. In such cases, the semantics of the sentence may indicate the correct choice of modified noun phrase, but occasionally the sentence may be genuinely ambiguous. For example, question (A) below has two interpretations, both equally plausible. The speaker could be referring to books dating from the '60s or to computers dating from the '60s.

(A) Which students read books on computers dating from the '60s?

A second problem in paraphrasing English queries is the possibility of generating the exact question that was originally asked. If a grammar were developed to simply generate English from an underlying representation of the question, this possibility could be realized. Instead, a method must be devised which can determine how the phrasing should differ from the original.

The CO-OP paraphraser addresses both the problem of ambiguity and the rephrasing of the question. It makes the system's interpretation of the question explicit by breaking down the clauses of the question and reordering them dependent upon their function in the sentence. Thus, question (A) above will result in either paraphrase (B) or (C), reflecting the interpretation the system has chosen.

(B) Assuming that there are books on computers (those computers date from the '60s), which students read those books?
(C) Assuming that there are books on computers (those books date from the '60s), which students read those books?

The method adopted guarantees that the paraphrase will differ from the original except in cases where no relative clauses or prepositional phrases were used. It was formulated on the basis of a distinction between given and new information, and indicates to the user the presuppositions s/he has made in the question (in the "assuming that" clause), while focusing her/his attention on the attributes of the class s/he is interested in.

IV. LINGUISTIC BACKGROUND

As mentioned earlier, the lexicon and the database are the sole sources of world knowledge for CO-OP. While this design increases CO-OP's portability, it means that little semantic information is available for the paraphraser's use. Contextual information is also limited, since no running history or context is maintained for a user session in the current version. The input the paraphraser receives from the parser is basically a syntactic parse tree of the question. Using this information, the paraphraser must reconstruct the question to obtain a phrasing different from the original. The following question must therefore be addressed: What reasons are there for choosing one syntactic form of expression over another?
Some linguists maintain that word order is affected by the functional roles elements play within the sentence.* Terminology used to describe the types of roles that can occur varies widely. Some of the distinctions that have been described include given/new, topic/comment, theme/rheme, and presupposition/focus. Definitions of these terms, however, are not consistent (for example, see [PRINCE 79] for a discussion of various usages of "given/new"). Nevertheless, one influence on expression does appear to be the interaction of sentence content and the beliefs of the speaker concerning the knowledge of the listener. Some elements in the sentence function in conveying information which the speaker assumes is present in the "consciousness" of the listener [CHAFE 77]. This information is said to be contextually dependent, either by virtue of its presence in the preceding discourse or because it is part of the shared world knowledge of the dialog participants. In a question-answer system, shared world knowledge refers to information which the speaker assumes is present in the database. Information functioning in the role just described has been termed "given".

"New" labels all information in the sentence which is presented as not retrievable from context. In the declarative, elements functioning in asserting information that the listener is presumed not to know are called new. In the question, elements functioning in conveying what the speaker wants to know (i.e., what s/he doesn't know) represent information which the speaker presumes the listener is not already aware of. Firbas identifies additional functions in the question. Of these, (ii) is used here to augment the interpretation of new information. He says:

"(i) it indicates the want of knowledge on the part of the inquirer and appeals to the informant to satisfy this want. (ii) [a] it imparts knowledge to the informant in that it informs him what the inquirer is interested in (what is on her/his mind) and [b] from what particular angle the intimated want of knowledge is to be satisfied." [FIRBAS 74; p. 31]

Although word order vis-a-vis these and related distinctions has been discussed in light of the declarative sentence, less has been said about the interrogative form. Halliday [HALLIDAY 67] and Krizkova** are among the few to have analyzed the question. Despite the fact that they arrive at different conclusions***, the two follow similar lines of reasoning. Krizkova argues that both the wh-item of the wh-question and the finite verb (e.g., "do" or "be") of the yes/no question point to the new information to be disclosed in the response. These elements, she claims, are the only unknowns to the questioner. Halliday, in discussing the yes/no question, also argues that the finite verb is the only unknown. The polarity of the text is in question and the finite element indicates this. In this paper the interpretation of the unknown elements in the question as defined by Krizkova and Halliday is followed. The wh-items, in defining the questioner's lack of knowledge, act as new information.

* Some other influences on syntactic expression are discussed in [MORGAN and GREEN 77]. They suggest that stylistic reasons, in addition to some of the functions discussed here, determine when different syntactic constructions are to be used. They point out, for example, that the passive is often used in academic prose to avoid identification of the agent and to lend a scientific flavor to the text.

** Summary by [FIRBAS 74] of the untranslated article "The Interrogative Sentence and Some Problems of the So-called Functional Sentence Perspective (Contextual Organization of the Sentence)", Nase rec 4, 1968.

*** It should be noted that Halliday and Krizkova discuss unknowns in the question in order to define the theme and rheme of a question. Although they agree on the unknowns for the questioner, they disagree about which elements function as theme and which function as rheme. A full discussion of their analysis and conclusions is given in [MCKEOWN 79].
Firhas' analysis of the functions in questions is used to further elucidate the role of new information in questions. The re~aining elements are given information. They represent information assumed by the questioner to be true of the database domain. This lapeling of information within the question will allow the construction of a natural paraphrase, avoiding ambiquity. V. ~ ~ Following the analysis described above, the CO-OP paraphrassr breaks down questions into given and new information. ~tore s~ectfically, an input question is divided into three parts, of which (2) and (3) form the new information. (1) given information (2) Function ii (a] from Firhas above (3) Function il (b] from Firhas above In terms of the question components, (2) comprises the question with no subclauses as it defines the lack of knowledge for the hearer. Part (3) comprises the direct and indirect modifiers of the interrogative words as they indicate the angle from which the question Was asked. They define the attributes of the missing information for the hearer. Part (1) is fomed from the remaining clauses. As an exile, consider question (D): (D) which division of the computing facility works on projects using oceanography research? Following the outline above, part (2) of the paraI~rase will be the question minus subclauses: ~ich division works on proj~-te?', part (3), the modifiers of the interrogative words, will be "of the computing facility" which modifies =which division'. The remaining clause , Summary by (FZRB~ 74] of the untranslated article =The Interrogative Sentence and Some Problems of the So-called Functional Sentence Perspective (Contextual O~anizatlon of the Sentence], ~ass rec 4, IS,;8. ** It ~ould be noted that Halllda 7 and Krizkova discuss unknowns in the question in order to define the theme end them of a question. Although they agree the unkno~ for the questioner, they disagree about whlch elements functlon as ~ and whlch function as theme. A full discussion of their analysis and conclusions is given in [~XEO~ 79]. 68 "projects using oceanography research" is considered given information. The three parts can then be assembled into a natural sequence: (E) Assuming that there are projects using oceanography research, which division works on those projects? Look for a division of the computing facility.* In question (D), information belonging to each of the three categories occurred in the question. If one of these types of information is missing, the question will be presented minus the initial or concluding clauses. Only part (2) of the paraphrase will invariably occur. If more than one clause occurs in a particular category, the question will be furthered splintered. Additional given informat ion is parenthesized following the "assuming that ..." clause. Example (F) below illustrates the paraphrase for a question containing several clauses of given information and no clauses defining specific attributes of the missing information. Clauses containing information characterized by category (3) will be presented as separate sentences following the stripped-down question. (G) below demonstrates a paraphrase containing more than one clause of this type of information. (F) Q: Which users work on projects in oceanography that are sponsored by NASA? P: Asst~mlng that there are projects in oceanography (those projects are sponsored by NASA), which users work on those projects? (G) Q: Which programmers in superdlvislon 5000 from the ASD group are advised by Thomas Wlrth? 
The remaining clause, "projects using oceanography research", is considered given information. The three parts can then be assembled into a natural sequence:

(E) Assuming that there are projects using oceanography research, which division works on those projects? Look for a division of the computing facility.*

* This example, as well as all sample questions and paraphrases that follow, were taken from actual sessions with the paraphraser. Question (A) and its possible paraphrases (B) and (C) are the only examples that were not run on the paraphraser.

In question (D), information belonging to each of the three categories occurred in the question. If one of these types of information is missing, the question will be presented minus the initial or concluding clauses. Only part (2) of the paraphrase will invariably occur. If more than one clause occurs in a particular category, the question will be further splintered. Additional given information is parenthesized following the "assuming that ..." clause. Example (F) below illustrates the paraphrase for a question containing several clauses of given information and no clauses defining specific attributes of the missing information. Clauses containing information characterized by category (3) will be presented as separate sentences following the stripped-down question. (G) below demonstrates a paraphrase containing more than one clause of this type of information.

(F) Q: Which users work on projects in oceanography that are sponsored by NASA?
    P: Assuming that there are projects in oceanography (those projects are sponsored by NASA), which users work on those projects?

(G) Q: Which programmers in superdivision 5000 from the ASD group are advised by Thomas Wirth?
    P: Which programmers are advised by Thomas Wirth? Look for programmers in superdivision 5000. The programmers must be from the ASD group.

VI. IMPLEMENTATION OVERVIEW

The paraphraser's first step in processing is to build a tree structure from the representation it is given. The tree is then divided into three separate trees reflecting the division of given and new information in the question. The design of the tree allows for a simple set of rules which flatten the tree. The final stage of processing in the paraphraser is translation. In the translation phase, labels in the parser's representation are translated into their corresponding words. During this process, necessary transformations of the grammar are performed upon the string.

Several aspects of the implementation will not be discussed here, but a description can be found in [MCKEOWN 79]. The method used by the paraphraser to handle conjunction, disjunction, and limited quantification is one of these. A second function of the paraphraser is also described in [MCKEOWN 79]: the set of procedures used to paraphrase the user's query can also be used to generate an English version of the parser's output. If the tree is not divided into given and new information, the flattening and transformational rules can be applied to produce a question that is not in the three-part form. In CO-OP, generation is used to produce corrections of the user's mistaken presuppositions.

A. THE PHRASE STRUCTURE TREE

In its initial processing, the paraphraser transforms the parser's representation into one that is more convenient for generation purposes. The resultant structure is a tree that highlights certain syntactic features of the question. This initial processing gives the paraphraser some independence from the CO-OP system. Were the parser's representation changed or the component moved to a new system, only the initial processing phase need be modified.

The paraphraser's phrase structure tree uses the main verb of the question as the root node of the tree. The subject of the main verb is the root node of the left subtree, the object (if there is one) the root node of the right subtree. In the current system, the use of binary relations in the parser's representation (see [KAPLAN 79] for a description of Meta Query Language) creates the illusion that every verb or preposition has a subject and object. The paraphraser's tree does allow for the representation of other constructions should the incoming language use them. Each of the subtrees represents other clauses in the question. Both the subject and the object of the main verb will have a subtree for each other clause it participates in. If a noun in one of these clauses also participates in another clause in the sentence, it will have subtrees too.

As an example, consider the question: "Which active users advised by Thomas Wirth work on projects in area 3?". The phrase structure tree used in the paraphraser is shown in Figure 1. Since "work" is the main verb, it will be the root node of the tree. "Users" is the root of the left subtree, "projects" of the right. Each noun participates in one other clause and therefore has one subtree. Note that the adjective "active" does not appear as part of the tree structure. Instead, it is closely bound to the noun it modifies and is treated as a property of the noun.
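A minimal sketch of such a phrase structure tree, built for the example question above, is shown below; the Node class and its field names are hypothetical and merely stand in for CO-OP's internal (MQL-derived) structures.

```python
# Sketch of the paraphraser's phrase structure tree; all names are invented.

class Node:
    def __init__(self, arc_label=None, set_label=None, role=None, adjectives=()):
        self.arc_label = arc_label          # verb/preposition label, e.g. "advised by"
        self.set_label = set_label          # noun-phrase label, e.g. "users"
        self.role = role                    # "subject" or "object" within its clause
        self.adjectives = list(adjectives)  # bound to the noun, e.g. ["active"]
        self.children = []                  # one subtree per additional clause

# "Which active users advised by Thomas Wirth work on projects in area 3?"
users = Node(set_label="users", role="subject", adjectives=["active"])
users.children.append(Node("advised by", "Thomas Wirth", role="object"))

projects = Node(set_label="projects", role="object")
projects.children.append(Node("in", "area 3", role="object"))

root = Node(arc_label="work on")      # main verb at the root
root.children = [users, projects]     # left: subject subtree, right: object subtree
```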
[Figure 1. Phrase structure tree for the example question: "work" is the root node; "users" (subject) and "projects" (object) are the roots of the left and right subtrees, with "advised by Thomas Wirth" and "in area 3" as their respective subtrees.]

B. DIVIDING THE TREE

The constructed tree is computationally suited for the three-part paraphrase. The tree is flattened after it has been divided into subtrees containing given information and the two types of new information. The splitting of the tree is accomplished by first extracting the topmost smallest portion of the tree containing the wh-item. At the very least, this will include the root node plus the left and right subtree root nodes. This portion of the tree is the stripped-down question. The clauses which define the particular aspect from which the question is asked are found by searching the left and right subtrees for the wh-item or questioned noun. The subtree whose root node is the wh-item contains these clauses. Note that this may be the entire left or right subtree or may only be a subtree of one of these. The remainder of the tree represents given information. Figure 2 illustrates this division for the previous example.

[Figure 2. Division of the tree into given and new information for the example:
Q: Which active users advised by Thomas Wirth work on projects in area 3?
P: Assuming that there are projects in area 3, which active users work on those projects? Look for users advised by Thomas Wirth.]

C. FLATTENING

If the structure of the phrase structure tree is as in Figure 3, with A the left subtree and B the right, then the following rules define the flattening process:

TREE    -> A R B
SUBTREE -> R' A' B'

In other words, each of the subtrees will be linearized by doing a pre-order traversal of that subtree. As a node in a subtree has three pieces of information associated with it, one more rule is required to expand a node. A node consists of:

(1) arc-label
(2) set-label
(3) subject/object

where arc-label is the label of the verb or preposition used in the parse tree and set-label the label of a noun phrase. Subject/object indicates whether the sub-node noun phrase functions as subject or object in the clause; it is used by the subject-aux transformation and does not apply to the expansion rule. The following rule expands a node:

NODE -> ARC-LABEL SET-LABEL

Two transformations are applied during the flattening process. They are wh-fronting and subject-aux inversion. They are further described in the section on transformations.

[Figure 3. Schematic tree with root R, left subtree A, and right subtree B, referred to by the flattening rules.]

The tree of given information is flattened first. It is part of the left or right subtree of the phrase structure tree and therefore is flattened by a pre-order traversal. It is during the flattening stage that the words "Assuming that there [be] ..." are inserted to introduce the clause of given information. "Be" will agree with the subject of the clause. If there is more than one clause, parentheses are inserted around the additional ones. The tree representing the stripped-down question is flattened next. It is followed by the modifiers of the questioned noun. The phrase "Look for" is inserted before the first clause of modifiers.
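The flattening rules can be read as a pre-order traversal; the sketch below illustrates them using the hypothetical Node class from the previous sketch (again, not CO-OP's actual code).

```python
# Sketch of TREE -> A R B, SUBTREE -> R' A' B', and NODE -> ARC-LABEL SET-LABEL.

def expand_node(node):
    # NODE -> ARC-LABEL SET-LABEL
    return [w for w in (node.arc_label, node.set_label) if w]

def flatten_subtree(node):
    # SUBTREE -> R' A' B': the node itself, then its subtrees, pre-order.
    words = expand_node(node)
    for child in node.children:
        words += flatten_subtree(child)
    return words

def flatten_tree(root):
    # TREE -> A R B: left (subject) subtree, root verb, right (object) subtree.
    left, right = root.children
    return flatten_subtree(left) + [root.arc_label] + flatten_subtree(right)

# flatten_tree(root) for the Figure 1 tree yields roughly
# ['users', 'advised by', 'Thomas Wirth', 'work on', 'projects', 'in', 'area 3'],
# before the "Assuming that ..." / "Look for ..." insertions and the transformations.
```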
D. TRANSFORMATIONS

The grammar used in the paraphraser is a transformational one. In addition to the basic flattening rules described above, the following transformations are used (curved lines in the original figure indicate the ordering restrictions among them): wh-fronting, negation, do-support, subject-aux inversion, affix-hopping, contraction, and has-deletion. There are two connected groups of transformations. If wh-fronting applies, then so will do-support, subject-aux inversion, and affix-hopping. The second group of transformations is invoked through the application of negation. It includes do-support, contraction, and affix-hopping. Has-deletion is not affected by the absence or presence of other transformations.

A description of the transformation rules follows. The rules used here are based on analyses described by [AKMAJIAN and HENY 75] and analyses described by [CULLICOVER 76]. The rule for wh-fronting is specified as follows, where SD abbreviates structural description and SC, structural change:

SD: X - NP - Y
    1    2    3
SC: 2+1  0    3
condition: 2 dominates wh

The first step in the implementation of wh-fronting is a search of the tree for the wh-item. A slightly different approach is used for paraphrasing than is used for generation. The difference occurs because in the original question, the NP to be fronted may be the head noun of some relative clauses or prepositional phrases. When generating, these clauses must be fronted along with the head noun. Since the clauses of the original question are broken down for the paraphrase, it will never be the case when paraphrasing that the NP to be fronted also dominates relative clauses or prepositional phrases. For this reason, when paraphrase mode is used, the applicability of wh-fronting is tested for and applied in the flattening process of the stripped-down question. If it applies, only one word need be moved to the initial position. When generation is being done, the applicability of wh-fronting is tested for immediately before flattening. If the transformation applies, the tree is split: the subtree of which the wh-item is the root is flattened separately from the remainder of the tree and is attached in fronted position to the string resulting from flattening the other part.

After wh-fronting has been applied, do-support is invoked. In CO-OP, the underlying representation of the question does not contain modals or auxiliary verbs. Thus, fronting the wh-item necessitates supplying an auxiliary. The following rule is used for do-support:

SD: NP - NP - tense - V - X
    1    2    3       4
SC: 1    do+2 3       4
condition: 1 dominates wh

Subject-aux inversion is activated immediately afterwards. Again, if wh-fronting applied, subject-aux inversion will apply also. The rule is:

SD: NP - NP - AUX - X
    1    2    3     4
SC: 1    3+2  0     4
condition: 1 dominates wh

Affix-hopping follows subject-aux inversion. In the paraphraser it is a combination of what is commonly thought of as affix-hopping and number agreement. Tense and number are attributes of all verbs in the parser's representation. When an auxiliary is generated, the tense and number are "hopped" from the verb to the auxiliary. Formally:

SD: X - AUX - Y - tense-num - V - Z
    1   2     3   4           5   6
SC: 1   2+4   3   0           5   6

Some transformational analyses propose that wh-fronting and subject-aux inversion apply to the relative clause as well as the question. In the CO-OP paraphraser, the head noun is properly positioned by the flattening process and wh-fronting need not be used. Subject-aux inversion, however, may be applicable. In cases where the head noun of the clause is not its subject, subject-aux inversion results in the proper order.
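Before turning to how negation is expressed, the following toy sketch illustrates the question-forming chain just described (wh-fronting, do-support, subject-aux inversion, affix-hopping) on a flat list of constituents. It simplifies the SD/SC statements above, and all field and function names are invented; it is not CO-OP's rule interpreter.

```python
# Toy transformation chain over a list of constituent dictionaries.

def wh_fronting(s):
    # Move the constituent that dominates wh to the front.
    i = next(i for i, c in enumerate(s) if c.get("wh"))
    return [s[i]] + s[:i] + s[i + 1:]

def do_support(s):
    # The underlying form has no auxiliaries, so insert a bare "do"
    # directly before the tensed main verb.
    i = next(i for i, c in enumerate(s) if c["cat"] == "V")
    return s[:i] + [{"cat": "AUX", "word": "do"}] + s[i:]

def subject_aux_inversion(s):
    # Move the auxiliary to the position right after the fronted wh NP.
    i = next(i for i, c in enumerate(s) if c["cat"] == "AUX")
    aux = s.pop(i)
    return [s[0], aux] + s[1:]

def affix_hopping(s):
    # Hop tense and number from the main verb onto the auxiliary.
    aux = next(c for c in s if c["cat"] == "AUX")
    verb = next(c for c in s if c["cat"] == "V")
    aux["tense"], aux["number"] = verb.pop("tense"), verb.pop("number")
    return s

# "the division works on which projects" -> "which projects does the division work on"
q = [{"cat": "NP", "word": "the division"},
     {"cat": "V", "word": "work on", "tense": "present", "number": "singular"},
     {"cat": "NP", "word": "which projects", "wh": True}]
q = affix_hopping(subject_aux_inversion(do_support(wh_fronting(q))))
```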
The rule for negation is tested during the translation phase of execution. It has been formalized as:

SD: X - tense-V - NP - Y
    1   2         3    4
SC: 1   2+no      3    4
condition: 3 marked as negative

In the CO-OP representation, an indication of negation is carried on the object of a binary relation (see [KAPLAN 79]). When generating an English representation of the question, it is possible in some cases to express negation as modification of the noun (see question (H) below). In all cases, however, negation can be indicated as part of the verb (see version (I) of question (H)). Therefore, when the object is marked as negative, the paraphraser moves the negation to become part of the verbal element.

(H) Which students have no advisors?
(I) Which students don't have advisors?

In English, the negative marker is attached to the auxiliary of the verbal element and therefore, as was the case for questions, an auxiliary must be generated. Do-support is used. The rule used for do-support after negation differs from the one used after wh-fronting. They are presented this way for clarity, but could have been combined into one rule.

SD: X - tense-V-no - Y
    1   2            3
SC: 1   do+2         3

Affix-hopping, as described above, hops the tense, number, and negation from the verb to the auxiliary verb. The cycle of transformations invoked through application of negation is completed with the contraction transformation. The statement of the contraction transformation is:

SD: X - do+tense - no - Y
    1   2          3    4
SC: 1   #2+n't#    0    4

where # indicates that the result must be treated as a unit for further transformations.

VII. CONCLUSIONS

The paraphraser described here is a syntactic one. While this work has examined the reasons for different forms of expression, additions must be made in the area of semantics. The substitution of synonyms, phrases, or idioms for portions or all of the question requires an examination of the effect of context on word meaning and of the intentions of the speaker on word or phrase choice. The lack of a rich semantic base and contextual information dictated the syntactic approach used here, but the paraphraser can be extended once a wider range of information becomes available.

The CO-OP paraphraser has been designed to be domain-independent, and thus a change of the database requires no changes in the paraphraser. Paraphrasers which use the template form, however, will require such changes. This is because the templates or patterns, which constitute the type of question that can be asked, are necessarily dependent on the domain. For different databases, a different set of templates must be used.

The CO-OP paraphraser also differs from other systems in that it generates the question using a transformational grammar of questions. It addresses two specific problems involved in generating paraphrases:

1. ambiguity in determining which noun phrase a relative clause modifies
2. the production of a question that differs from the user's

These goals have been achieved for questions using relative clauses through the application of a theory of given and new information to the generation process.

ACKNOWLEDGEMENTS

This work was partially supported by an IBM fellowship and NSF grant MCS78-08401. I would like to thank Dr. Aravind K. Joshi and Dr. Bonnie Webber for their invaluable comments on the style and content of this paper.

REFERENCES

1. [AKMAJIAN and HENY 75]. Akmajian, A. and Heny, F., An Introduction to the Principles of Transformational Syntax, MIT Press, 1975.
2. [CHAFE 77]. Chafe, W.L., "Givenness, Contrastiveness, Definiteness, Subjects, Topics, and Points of View", Subject and Topic (ed. C.N. Li), Academic Press, 1977.
3. [CODD 78]. Codd, E.F., et al., Rendezvous Version 1: An Experimental English-Language Query Formulation System for Casual Users of Relational Data Bases, IBM Research Report RJ2144, IBM Research Laboratory, San Jose, Ca., 1978.
4. [CULLICOVER 76]. Culicover, P.W., Syntax, Academic Press, N.Y., 1976.
5. [DANES 74]. Danes, F. (ed.), Papers on Functional Sentence Perspective, Academia, Prague, 1974.
6. [FIRBAS 66]. Firbas, Jan, "On Defining the Theme in Functional Sentence Analysis", Travaux Linguistiques de Prague 1, Univ. of Alabama Press.
7. [FIRBAS 74]. Firbas, Jan, "Some Aspects of the Czechoslovak Approach to Problems of Functional Sentence Perspective", Papers on Functional Sentence Perspective, Academia, Prague, 1974.
8. [GOLDMAN 75]. Goldman, N., "Conceptual Generation", Conceptual Information Processing (R.C. Schank), North-Holland Publishing Co., Amsterdam, 1975.
9. [GRICE 75]. Grice, H.P., "Logic and Conversation", in Syntax and Semantics, Vol. 3: Speech Acts (P. Cole and J.L. Morgan, eds.), Academic Press, N.Y., 1975.
10. [HALLIDAY 67]. Halliday, M.A.K., "Notes on Transitivity and Theme in English", Journal of Linguistics 3, 1967.
11. [HEIDORN 75]. Heidorn, G., "Augmented Phrase Structure Grammars", TINLAP-1 Proceedings, June 1975.
12. [JOSHI 79]. Joshi, A.K., "Centered Logic: The Role of Entity Centered Sentence Representation in Natural Language Inferencing", to appear in IJCAI Proceedings 79.
13. [KAPLAN 79]. Kaplan, S.J., "Cooperative Responses from a Portable Natural Language Data Base Query System", Ph.D. Dissertation, Univ. of Pennsylvania, Philadelphia, Pa., 1979.
14. [MCDONALD 78]. McDonald, D.D., "Subsequent Reference: Syntactic and Rhetorical Constraints", TINLAP-2 Proceedings, 1978.
15. [MCKEOWN 79]. McKeown, K., "Paraphrasing Using Given and New Information in a Question-Answer System", forthcoming Master's Thesis, Univ. of Pennsylvania, Philadelphia, Pa., 1979.
16. [MORGAN and GREEN 77]. Morgan, J.L. and Green, G.M., "Pragmatics and Reading Comprehension", University of Illinois, 1977.
17. [PRINCE 79]. Prince, E., "On the Given/New Distinction", to appear in CLS 15, 1979.
18. [SIMMONS and SLOCUM 72]. Simmons, R. and Slocum, J., "Generating English Discourse from Semantic Networks", Univ. of Texas at Austin, CACM, Vol. 15, #10, October 1972.
19. [WALTZ 78]. Waltz, D.L., "An English Language Question Answering System for a Large Relational Database", CACM, Vol. 21, #7, July 1978.
WHERE QUESTIONS

Benny Shanon
The Hebrew University of Jerusalem

Consider question (1), and the answers to it, (2)-(4):

(1) Where is the Empire State Building?
(2) In New York.
(3) In the U.S.A.
(4) On 34th Street and 3rd Avenue.

When (1) is posed in California, (2) is the appropriate answer to it. This is the case even though (3) and (4) are also true characterizations of the location of the Empire State Building. The pattern of appropriateness alters, however, when the locale where the question is presented changes. Thus, when (1) is asked in Israel, (3) is the appropriate answer, whereas when it is asked in Manhattan, (4) is the answer that should be given.

The foregoing observations, originally made by Rumelhart (1974) and by Norman (1973), suggest the following. First, it is not enough for answers to questions to be (semantically) true; they have to be (pragmatically) appropriate as well. Second, appropriateness is not solely determined by the content of the particular propositions in question, but also by the identity of the participants in the particular conversational situation and their locale. In other words, for a person--or for a machine, for that matter--to answer questions, it is not enough to survey one's memory and retrieve information pertaining to the query posed; rather, a selection algorithm has to be used so that an appropriate response would be given. The specification of such a selection algorithm is the topic of the present investigation.

The following discussion is based on what is known as the Room Theory: the original, albeit preliminary, model proposed by Rumelhart (1974) in order to account for his insightful observations. I try to examine the psychological validity of this model, and to propose amendments and extensions to it on the basis of empirical data.

The Room Theory "posits the existence of a psychological room relative to which distances are reckoned. The room corresponds to the smallest geographical region that encompasses both the reference location of the conversants and the location of the places in question". When answering where-questions "the rule is to find the smallest room which just includes the reference location and the answer location. The appropriate answer is the next smallest geographical unit which contains the location in question, but excludes the reference location" (Rumelhart, 1975). The answers generated by this algorithm, note, constitute the placing of the item questioned in a room which is larger than it; henceforth answers of this type will be called vertical.

In order to examine the Room Theory, questions regarding places in the world as well as objects in a (concrete) room were presented to several subject populations: college students in Israel and the U.S., American children of three age groups, and aphasic patients. The present report concentrates on the adult data, and only cursory remarks will be made on the answers furnished by the other populations. First, I will discuss answers solicited by an open questionnaire, in which subjects were asked to give one answer to the questions posed to them; later, answers solicited by closed questionnaires will be discussed.

First, it should be noted that by and large the answers given by subjects were the ones predicted by the Room Theory.
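As a concrete illustration, here is a toy sketch of the vertical Room Algorithm quoted above; the place hierarchy and function names are invented for illustration and do not come from Rumelhart's or the present author's materials.

```python
# Toy sketch of the vertical Room Algorithm.

HIERARCHY = {                          # place -> the room immediately containing it
    "Empire State Building": "34th Street", "34th Street": "Manhattan",
    "Manhattan": "New York", "New York": "U.S.A.",
    "Los Angeles": "California", "California": "U.S.A.", "U.S.A.": "world",
    "Tel Aviv": "Israel", "Israel": "Mid-East", "Mid-East": "world",
}

def rooms(place):
    chain = []
    while place in HIERARCHY:
        place = HIERARCHY[place]
        chain.append(place)
    return chain                       # smallest containing room first

def where_answer(target, reference_location):
    t, r = rooms(target), rooms(reference_location)
    common = next(room for room in t if room in r)   # smallest room holding both
    i = t.index(common)
    # The next smallest unit containing the target but excluding the speaker:
    return t[i - 1] if i > 0 else common

# where_answer("Empire State Building", "Los Angeles")  ->  "New York"
# where_answer("Empire State Building", "Tel Aviv")     ->  "U.S.A."
```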
The term "by and large" is, however, of qualit- ative significance. It indicates that unless other factors or reasons are operative, answers to where ques- tions do, indeed, follow the Room A/gorithm. The detec- tion of these "other factors and reasons", their class- ification and the characterization of the answer types that correspond to them is the main theme of this dis- cussion. Following, then, are the answer patterns which do not conform with the Room Theory. The Room Theory "posits the existence of a psychologic- al room relative to which distances are reckoned. The room corresponds to the smallest geographical region that encompasses both the reference location of the conversants and the location of the places in question". When answering where-questions "the rule is to find the smallest room which Just includes the reference location and the answer location. The appropriate answer is the next smallest geographical unit which contains the loc- ation in question, but axcludes the reference location". (Rumelhart, 1975). The answers generated by this al- gorithm, note, constitute the placing of the item ques- tioned in a room which is larger than it; henceforth answers of this type will be called vertical. In order to examine the Room Theory, questions regard- ing places in the world as well as objects in a (con- crete) room were presented to several subject popula- tions: college students in Israel and the U.S., Ameri- can children of three age groups, and aphasic patients. The present report concentrates on the adult data, and only cursory remarks will be made on the answers furn- ished by the other populations. First, I will discuss answers solicited by an open questionnaire, in which subjects were asked to give one answer to the questions posel to them; later, answers solicited by closed ques- tionnaires will be discussed. First, it should be noted that by and large the answers given by subjects were the ones predicted by the Room First, consider questions about landm-~ks in the towns in which the conversation took place. Most of the ans- wers which involved vertical placement were given on the level of the town itself, i.e. on a level which is high- er than the one predicted by the Room Theory. The other answers were not vertical, but rather horizontal:the object questioned was related to another object similar to it. In other words, either the level specified by the Room Theory was changed, or the type of answer (i.e. the generation algorithm itself) was altered. These deviant answers are viewed as two 81ternative solutions to the problem of the floor effect. Specifically, as one goes down the place hierarchy, the specification of rooms between the target and the least common room is cumbersome; indeed, there might not be simple names by which reference to these rooms may be made. Subjects solve this problem either by staying on the level of the least common room or by shifting to the horizontal strate~,. The same problem is noted with the ceilin~ effect, name- ly, with questions regarding objects which are very high on the place hierarchy: continents for adults, countries for children and aphasic patients. The answers in these cases were varied, a feature which attests the algOr-- ithm/c difficulty associated with them. Only a minor- ity of the answers conformed with the Room Theory and most answers were horizontal. Other answer types were: vacuous, in which a vertical answer was given on too high a level (e.g. 
"in the world"), featural, in which 73 a description, rather than a specification of the locale, was given (e.g. "it is a continent"), or tautological (e.g. "Japan is in Japan"). The di~'ferent answer types, we shall say, are the products o~ different alternative answer generation algorithms. The numerical distribu- tion of these answers suggest that the order of prefer- ence for the application of the algorithms as the one noted above. There were also cases in which subjects gave answers on a level lower than the one predicted by the Room Theory. Thus, half the Israelis placed the Empire State Building Tin New York", and not "in the U.S." Similarly, all the Americans asked about the Eiffel Tower answered "in Paris", and not "in France". These patterns are attrib- uted to p romlnence. Prominent objects are ones which gain a higher ra-~ in the place h/erarchy than would be attributed to them on semantic classificatory grounds alone. As a consequence, these objects are placed in a room which is more specific then the one predicted by the Room Theory. For instance, New York City is not conceived of by non-Americans as Just another American city; it gains an autonomy of its own and is conceived of as independent of the country in which it is located. The prominence effect suggests that rather than inter- preting the room-hierarchy in a concrete fashion (i.e. as isomorphic to the spatial relations which hold in the physical world), one should view it as an abstract conceptual representation. In this representation, ob- Jects ere associated with ta~s: usually, objects which are actually contained in objects of order n are assign- ed a tag of order n÷l, but prominent objects are assign- ed tags of the same order as the objects which actually contain them. Thus, if the Empire State Building is tagged n÷l both New York and the U.S. are tngged n, for the Israelis the least com-~n room (order n-l) is the northern hemisphere, and the answer is given on the lev- el of the t~o rooms of order n. Thus, the seemingly unexpected answers associated with prominent objects are due to the modified abstract representations, not to a change in the(vertical) algorithm proper. The salience effect is similar, but distinct. Objects which are close to ones which stand ia a particular rel- ation to the respondent (i.e. physically close, emotion- ally dear, or belonging to the subject) are not placed in a room but receive horizontal answers instead. For example, all the Israelis answered that Lebanon was "north of Israel", and not that it was "in the Mid- East". Similarly, all the Americans (and half of the Israells) placed Canada in relation to the U.S. Unlike the prominence effect, the salience effect does affect the answer generation algorithm itself, and it bears on individual or cultural differences, not on general sem- antic com-lderatlons. 4 Specifically, items which ere special to the speaker are tagged in the representation as marked, and this triggers a shift from the vertical to the horizontal algorlthm. All questions considered so far involved one config- uration: the two conversants and the target were phys- ically distinct, and together they could he contained in one COmmOn room. This, however, is not the only possi- ble con£iguratlon. Other confi&~aratious, are possible as well: (a) The conversants and the target may coin- cide in place, as in the question '~here are we now?". (b) The conversants ~ay be contained in the target, as in the question "Where is Israel?" when Posed in Jerus- alem. 
All questions considered so far involved one configuration: the two conversants and the target were physically distinct, and together they could be contained in one common room. This, however, is not the only possible configuration. Other configurations are possible as well: (a) The conversants and the target may coincide in place, as in the question "Where are we now?". (b) The conversants may be contained in the target, as in the question "Where is Israel?" when posed in Jerusalem. (c) The conversants may be in different places, as in phone conversations. Strictly speaking, the Room Algorithm does not apply to these configurations. Thus, in (b) the least common room is one level above that of the target, but on what level would the answer be? The Room Algorithm would either return the respondent to the place queried or else require detailed and perhaps cumbersome classifications; neither option is taken. All the answers to the questions noted were given on the room immediately above the target. In (a) a least common room may not be circumscribed in the manner outlined by the Room Algorithm, whereas in (c) a distinction between the speaker and the hearer has to be introduced. All these cases suggest that the different configurations do invoke different generation algorithms. Hence, an appraisal of the configuration is necessary prior to the application of the answer-generation algorithm proper.

So far, the discussion was topological, considering only the spatial configuration holding between the conversants and the object questioned. The respondent's knowledge of the world was not taken into account. In order to prove the psychological validity of an answer generation algorithm it is crucial to demonstrate that the answer given is chosen from a class of several feasible answers, and is not the only one possible due to a limited data base. This was the purpose of the closed questionnaires. Two such questionnaires were administered: first, subjects were asked to choose the best of several answers given to them; then they were asked to mark all the answers they deemed true. Three points were of interest. First, the answers given in the first two conditions were not necessarily the most specified ones marked in the third. Second, there were answers in the multiple option condition which were evidently true and commonly known but which were nonetheless not marked by subjects. These answers included reversed prominence (i.e. the relation of a prominent object to a less prominent one), featural answers and ones which were too high on the place hierarchy. Third, an "I don't know" answer on the open questionnaire did not necessarily imply a no-answer in the other conditions. In other words, this answer does not signify complete ignorance, but rather an appreciation on the part of the subject that he cannot furnish the answer he deems appropriate. Together, the three points indicate that there is indeed a psychological process of answer-generation which does not amount to the specification of the most detailed information one has regarding the object in question.

Still another aspect which has to be considered is the speaker's intention when he poses a question. A study of this aspect is just on its way now and at this point, I have to limit myself only to a methodological discussion. Evidently, the process of question-answering requires an appraisal of intention (cf. Lehnert, 1978), one which involves the evaluation of various contextual, personal and sociological factors. In order to make research feasible, as well as constructive, a factorization of the domain of question-answering, I believe, is needed. In this regard the topological, knowledge and intention aspects were noted. The original Room Theory is an attempt to define the topological aspect. The present study shows that even for this aspect this Theory is not sufficient. The present discussion suggests that an extended topological theory should consist of the following components:

1. Semantic and episodic representations, which are not isomorphic to the physically (logically) defined room-hierarchy.
2. Determinants of configurations and problematic cases (floor, ceiling).
3. A set of ordered answer-generation algorithms: vertical placement (the algorithm proposed by the Room Theory), horizontal relation, featural description and non-informative (vacuous, tautological).
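The ordered set of algorithms in component 3 suggests a simple control regime: try the preferred algorithm first and fall back when it produces nothing. The Python fragment below is only a schematic rendering of that ordering; the toy knowledge structure and function are invented for illustration and are not part of the proposed theory.

    # Hypothetical ordered fallback among answer-generation algorithms.
    def answer_where(target, knowledge):
        strategies = [
            ("vertical",     lambda: knowledge.get(target, {}).get("container")),
            ("horizontal",   lambda: knowledge.get(target, {}).get("neighbor")),
            ("featural",     lambda: knowledge.get(target, {}).get("kind")),
            ("tautological", lambda: f"{target} is in {target}"),
        ]
        for name, generate in strategies:
            answer = generate()
            if answer:
                return name, answer
        return "none", "I don't know"

    knowledge = {"Asia": {"kind": "it is a continent"}}  # ceiling case: no room above worth naming
    print(answer_where("Asia", knowledge))               # ('featural', 'it is a continent')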
Definitely, the topological consideration is not sufficient for the characterization of how people answer where questions. Future investigations should extend the research and also include considerations of knowledge and intention. At this juncture, however, we can note that it is not possible to reduce question answering to knowledge alone, and that some formal selection algorithms have to be postulated. The formal study of such algorithms is of relevance to the study of both natural and artificial intelligence.

References

Lehnert, W.G. The process of question answering. Lawrence Erlbaum and Associates, Hillsdale, N.J., 1978.
Norman, D. Memory, knowledge and the answering of questions. In R. Solso (Ed.), Contemporary Issues in Cognitive Psychology, Washington, D.C.: E. Winston, 1973.
Rumelhart, D. The Room Theory. Unpublished manuscript, The University of California at San Diego, 1974.
The Role Of Focussing in Interpretation of Pronouns

Candace L. Sidner
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
and
Bolt, Beranek and Newman, Inc.
50 Moulton Street
Cambridge, MA 02138

In this paper [1] I discuss the formal relationship between the process of focussing and interpretation of pronominal anaphora. The discussion of focussing extends the work of Grosz [1977]. Focussing is defined algorithmically as a process which chooses a focus of attention in a discourse and moves it around as the speaker's focus changes. The paper shows how to use the focussing algorithm by an extended example given below.

D1-1 Alfred and Zohar liked to play baseball.
2 They played it everyday after school before dinner.
3 After their game, the two usually went for ice cream cones.
4 They tasted really good.
5 Alfred always had the vanilla super scooper,
6 while Zohar tried the flavor of the day cone.
7 After the cones had been eaten,
8 the boys went home to study.

In this example, the discourse focusses initially on baseball. The focus moves in D1-3 to the ice cream cone. Using this example, I show how the formal algorithm computes focus and determines how the focus moves according to the signals which the speaker uses in discourse to indicate the movement. Given a process notion of focus, the paper reviews the difficulties with previous approaches (Rieger [1974], Charniak [1972], Winograd [1971], Hobbs [1975] and Lockman [1978]). Briefly, the first four authors all point out the need for inferencing as part of anaphora disambiguation, but each of their schemes for inferencing suffers from the need for control which will reduce the combinatorial search or which will insure only one search path is taken. In addition, Winograd and Lockman are aware of pronoun phenomena which cannot be treated strictly by inference, as shown below.

D2-1 I haven't seen Jeff for several days.
2 Carl thinks he's studying for his exams.
3 Oscar says he is sick,
4 but I think he went to the Cape with Linda.

1. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under the Office of Naval Research under Contract Number N00014-73-C-0643.

However, their approaches are either simple heuristics which offer no unified treatment (Winograd) or require the computation of a structure which must assume the pronouns have previously been resolved (Lockman). In order to state formal rules for pronoun interpretation, the concept of antecedence is defined computationally as a relationship among elements represented in a database. Using this framework, the paper supports two claims by means of rules for antecedence.

1. The focus provides a source of antecedence in rules for interpreting pronominal anaphora.
2. Focussing provides a control for the inferencing necessary for some kinds of anaphora.

The rules for pronominal anaphora rely on three sources of information: syntactic criteria, semantic selectional restrictions, and confirming consistency checks from inferencing procedures. The use of these rules is presented for examples D2 above and D3 below.

D3-1 Whitimore isn't such a good thief.
2 The man whose watch he stole called the police.
3 They caught him.

These examples show how to use the three sources of information to support or reject a predicted antecedence.
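As a rough illustration of the kind of control Sidner argues for, the Python sketch below filters a focus-predicted antecedent through simple syntactic and selectional checks before resorting to anything stronger; the function names, the toy checks, and the feature sets are my own assumptions, not the paper's rules.

    # Hypothetical sketch: predict the antecedent from the current focus,
    # then confirm it with cheap checks instead of unconstrained inference.
    def interpret_pronoun(pronoun, focus, alternates, checks):
        for candidate in [focus] + alternates:      # the focus is tried first
            if all(check(pronoun, candidate) for check in checks):
                return candidate                    # confirmed prediction
        return None                                 # defer to fuller inference

    def syntax_ok(pronoun, candidate):
        return pronoun["gender"] in (candidate["gender"], "any")

    def selection_ok(pronoun, candidate):
        return pronoun["needs"] <= candidate["features"]

    focus = {"name": "ice cream cones", "gender": "any", "features": {"edible", "tasty"}}
    alternates = [{"name": "Alfred and Zohar", "gender": "any", "features": {"animate"}}]
    they = {"gender": "any", "needs": {"tasty"}}    # "They tasted really good." (D1-4)
    print(interpret_pronoun(they, focus, alternates, [syntax_ok, selection_ok])["name"])
    # -> ice cream cones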
In particular, inferencing is controlled by checking for consistency on a predicted choice rather than by search using general inference. The paper also indicates what additional requirements are needed for a full treatment of pronominal anaphora. These include use of a representation such as that of Webber [1978]; linguistic rules such as the disjoint reference rules of Lasnik [1976] and Reinhart [1976] as well as rules of anaphora in logical form given by Chomsky [1976]; and presence of actor foci such as they in D3. The nature of these requirements is discussed, while the computational inclusion of them is found in Sidner [1979].

References

Charniak, E. [1972] Toward a Model of Children's Story Comprehension. M.I.T. A.I. Lab TR-266.
Chomsky, N. [1976] Conditions on Rules of Grammar. Linguistic Analysis, Volume 2, p. 303-351.
Grosz, Barbara [1977] The Representation and Use of Focus in Dialogue Understanding. Stanford Research Institute Technical Note 151, Menlo Park, California.
Hobbs, Jerry R. [1976] Pronoun Resolution. Research Report 76-1, City College, City University of New York, New York.
Lasnik, Howard [1976] Remarks on Coreference. Linguistic Analysis, Volume 2, Number 1.
Lockman, Abe D. [1978] Contextual Reference Resolution in Natural Language Processing. Dept. of Computer Science TR-70, Rutgers University, New Brunswick, N.J.
Reinhart, Tanya [1976] The Syntactic Domain of Anaphora. Unpublished Ph.D. dissertation, Department of Foreign Literature and Linguistics, M.I.T.
Rieger, Charles J. [1974] Conceptual Memory: A Theory and Computer Program for Processing the Meaning Content of Natural Language Utterances. Stanford Artificial Intelligence Lab Memo AIM-233.
Sidner, Candace L. [1979] Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Unpublished Ph.D. dissertation, Electrical Engineering and Computer Science, M.I.T.
Webber, Bonnie Lynn [1978] A Formal Approach to Discourse Anaphora. Technical Report 3761, Bolt, Beranek and Newman, Cambridge MA.
Winograd, Terry [1971] Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. M.I.T. dissertation.
The Structure and Process of Talking About Doing

James A. Levin and Edwin L. Hutchins
Center for Human Information Processing
University of California, San Diego

People talk about what they do, often at the same time as they are doing. This reporting has an important function in coordinating action between people working together on real everyday problems. It is also an important source of data for social scientists studying people's behavior. In this paper, we report on some studies we are doing on report dialogues. We describe two kinds of phenomena we have identified, outline a preliminary process model that integrates the report generation with the processes that are generating the actions being reported upon, and specify a systematic methodology for extracting relevant evidence bearing on these phenomena from text transcripts of talk about doing to use in evaluating the model.

POINT OF VIEW

Reports of problem solving actions are often used as evidence about the underlying cognitive processes involved in generating a problem solution, as "problem solving protocols" (Newell & Simon, 1972). However, these reports are obviously a kind of language interaction in their own right, in which the subject is reporting on his/her own actions to the experimenter. We have analyzed problem solving protocols of people solving a puzzle called "Missionaries and Cannibals" and have found that in their reports, people adopt a "point of view" with respect to the problem, through a consistent use of spatial deixis. For example, when a subject says:

"... I can't send another cannibal across with another missionary or he will be outnumbered when he gets to the other side ..."

the deixis in her report places her as speaker on the "from" side of the considered action. This is indicated both by the choice of the verb "send" and by the description of "the other side". The same subject indicated the "to" side as her point of view in another part of her protocol:

"... 'cause you've gotta have one person to bring back the boat ..."

Here, both the verb "bring" and the adverb "back" indicate "point of view". Although people almost always unambiguously specify a "point of view" within the problem they are solving in their reports, they also deny awareness of taking such a point of view. However, this point of view is important to the underlying problem solving processes. The strongest evidence for this comes from the high correlation between point of view and errors in problem solving actions. Subjects in the Missionaries and Cannibals task can make errors by taking actions that violate the constraints of the task. Most of these errors occur on the side away from their current "point of view", even though their point of view changes from one physical side to the other during the course of solving the puzzle. More interesting is that most of the "undetected" errors occur on the side away from their point of view. Some errors are spontaneously detected by the subject immediately after making the action that leads to a violation; others are "undetected". After the experimenter interrupts to point out these undetected errors, the subjects often switch point of view so that the violation condition is now on the same side as the subjects' point of view. We see the point of view indicated by spatial deixis in the report of problem solving as reflecting an underlying allocation of effort (or attention).
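As an illustration of how such deictic cues might be coded automatically (a topic the paper returns to in its discussion of coding techniques), the fragment below classifies a report utterance as "from"-side or "to"-side by its motion verbs and adverbs; the word lists and the function are my own toy assumptions, not the authors' coding scheme.

    # Toy point-of-view classifier based on spatial deixis in report verbs/adverbs.
    FROM_SIDE_CUES = {"send", "take", "go", "across"}
    TO_SIDE_CUES   = {"bring", "come", "back", "return"}

    def point_of_view(utterance):
        words = set(utterance.lower().replace(",", " ").split())
        from_score = len(words & FROM_SIDE_CUES)
        to_score = len(words & TO_SIDE_CUES)
        if from_score == to_score:
            return "unclear"
        return "from-side" if from_score > to_score else "to-side"

    print(point_of_view("I can't send another cannibal across with another missionary"))  # from-side
    print(point_of_view("you've gotta have one person to bring back the boat"))           # to-side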
Few errors occur with problem elements that are given processing effort, while constraints that are given little attention are more often violated. In this way, these reports are reflecting changes in the organization of the problem elements that occur over the course of reaching a solution. We have also identified other ways in which reports embody the use of different conceptual organizations of the problem, including organizations that vary from abstract to concrete and from perception oriented to action oriented.

JUSTIFICATION ARGUMENT STRUCTURES

There are multi-utterance structures that occur regularly in problem solving talk that we call "justification argument structures." These structures have the form of:

    {(did) | (do) | (could) | (will)} + (not do) + (action) + {(since) | (because)} -> (justification argument)

(Alternatively, these two segments can be reversed in order, by using connectives like "therefore" or "so".) For example, these kinds of dialogue units occur in many of the protocols studied by Newell & Simon (1972):

"Each letter has one and only one numerical value ... (E: One numerical value.) There are ten different letters and each of them has one numerical value. Therefore, I can, looking at the two D's each D is 5; therefore, T is zero." (Newell & Simon, 1972:230-231)

In studying our problem solving protocols from the Missionaries and Cannibals puzzle, we have identified several kinds of argument structures, depending on what kinds of problem solving approach each subject took to the problem at each point in time. For example, one common justification argument structure is the "elimination of alternatives" structure:

    All available actions A from this state except Ai can be ruled out. Therefore do action Ai.

Here is an example of this kind of argument structure:

"... If I put a cannibal on, then he goes back and the guys on the other side of the river, the missionary, is outnumbered and he will be eaten. If I put on .... this is all my combinations and permutations .... If I put two missionaries on, I want two cannibals on the boat and send them back, then it is just ridiculous at the other end .... so what I'll have to do is one of each."

Another argument form is one we call "pragmatic argument". (We have borrowed many of our names for argument structures from a rhetoric book (Perelman & Olbrechts-Tyteca, 1969). Although this book is a normative account of argumentation, we find it valuable as a guide to our attempt to give a descriptive account of naturally occurring informal "argumentation" occurring in our subjects' reports of their problem solving.) The pragmatic argument is:

    Taking action A would lead to result R (among other things). Result R is undesirable. Therefore don't do action A.

An example from our protocols is:

"... Both missionaries are going to have to come back because, 'cause if they don't come back, well, one would get left and eaten. So both missionaries come back ..."

One interesting point about this particular example is that it is embedded within an "elimination of alternatives" argument structure. That is, this "pragmatic argument" is used to eliminate one of the alternatives, leaving only one to take. A third kind of argument structure we have identified is called "ends-means":

    If state S occurs, then there is an action A to get to goal G. Therefore establish state S as a subgoal.

For example: "... So if ever I could get those over there, I ..." Obviously, this argument form is similar to the classic "means-ends analysis" proposed as part of many current theories of problem solving.
The argument forms we have identified occur when certain kinds of underlying cognitive processing are going on, and this kind of protocol text has been used as evidence for this underlying processing. Some people have assumed that this kind of language interaction corresponds to a subset of the underlying processes (Newell & Simon, 1972). Other people have questioned whether there is any correspondence between what people do and what they say (Nisbett & Wilson, 1977). Our position is that there is a fairly rich interaction between action and report of action, which we will describe in our report of our preliminary process model of doing and reporting. (This position is similar to one outlined recently by Ericsson & Simon (1979).)

A PROCESS MODEL OF DOING AND REPORTING

We have been constructing a process model of problem solving within an activation process framework (Levin, 1976; 1978). Within this framework, multiple processes are simultaneously active, and the interaction between the active processes is specified by their representations in a network structured long term memory. Each process is active a certain amount, with a certain amount of salience, and the more salient a process is, the larger its impact on other processes (and therefore on the overall processing). There are processes that are closely related to the performance of the problem task, and others that are closely related to the report of the task actions.

In the particular problem domain of the Missionaries and Cannibals puzzle, the task related actions and objects are defined as concepts in the long term memory that become active during the problem solving. The constraints of the problem are represented in the same way, and get activated to varying degrees during the problem solving. Errors occur when the constraints are insufficiently salient to prevent an action which leads to a violation of that constraint.

Report related processes impact the task behavior by modifying the distribution of salience to the task related processes. "Point of view" of the problem solver has its impact on the processing by adding salience to those active concepts associated with the location where the problem solver has conceptually located him/herself. Justification argument structures similarly impact the distribution of salience by increasing the salience of those inference processes defined to be associated with the argument structures. In this way, language can aid the problem solving, by adding to the resources of the talked about processes; it can also hinder it if it locks the problem solver into a particular orientation of the problem that isn't fruitful. For example, to the extent that language use focusses salience away from constraints that are being violated causing errors, and especially if this occurs to such an extent that those errors are undetected, then the focussing effect of language can be a barrier to solving the problem.
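The salience account above invites a very small simulation. The sketch below is a hypothetical toy, not the authors' Proteus framework: constraints carry salience values, point of view boosts the salience of same-side constraints, and a violation goes undetected when the violated constraint's salience falls below a threshold.

    # Toy sketch of salience-modulated constraint checking in Missionaries & Cannibals.
    DETECTION_THRESHOLD = 0.5

    def check_move(state_after_move, constraints, point_of_view):
        """Return constraint violations, split by whether they would be detected."""
        detected, undetected = [], []
        for constraint in constraints:
            salience = constraint["base_salience"]
            if constraint["side"] == point_of_view:
                salience += 0.4                  # attention follows point of view
            if constraint["violated"](state_after_move):
                (detected if salience >= DETECTION_THRESHOLD else undetected).append(constraint["name"])
        return detected, undetected

    constraints = [
        {"name": "no outnumbered missionaries (near side)", "side": "near",
         "base_salience": 0.3, "violated": lambda s: s["near"] == (1, 2)},
        {"name": "no outnumbered missionaries (far side)", "side": "far",
         "base_salience": 0.3, "violated": lambda s: s["far"] == (1, 2)},
    ]
    state = {"near": (2, 1), "far": (1, 2)}      # (missionaries, cannibals) on each bank
    print(check_move(state, constraints, point_of_view="near"))
    # -> ([], ['no outnumbered missionaries (far side)']): the far-side violation goes unnoticed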
At another extreme ~0 a "ouFtAo~enoy teacn (Howell & 81mona 19TO), A model oF an orpn£mD porform.'Lnl name tael¢ pasha 5he euf'Fio%enoy 5on5 %F Lt aloe san perForu ohm name tank. Than %e the evaluatAon 5eat oemmon%y used today For strafe.sial AntelIA|enee models. A more r~l;orous 5eat %e 5n Cry to F~.5 a mode/. 50 • emma OF data. ?hAs ~e the evaluate.on 5eahn~quo moot often UJld today An evaluat~,ng ooln~tlve poyoholo|y 5hoor~ea. A Fourth Ceo~lqua %e to %denSity a set of "or£cioxl" phenomena In 5he data spinet MtAoh to evaluate a mode% OF the5 data, AI ~lluotratod ~.n the liJ~ below, th:Li £s a more powerful evaluation teoiutique 5hit e~nple euFt~o£enoy, but lees povorF~ 5hsn 5he other two 5eohnLquee. Vo Fee). 5hat It 5has point In 5he scats OF the opt, 5hie te 5he appropriate evoluat~on 500hnique to use 5o evalunto our presell modll ~n IAIh5 of our dltl. 1, -qLa~in{ilqnv; DOle the nodo~, I~obel~y porFoM0 ~ke the behavior being mdollld? 2. ~ ohann~: Dose 5he model exhib15 behavior that oorreeponda to observed seleo~ed • oritioaA phenomena n An the dal~a of interest? 3. Close ~ ~ ~S ~n the model exhih:L?, beJ'dltvior that oorreeponds oAoeely to the nneo of do5o el' interest? ~, Pridln~Lon ~ uni~nia~id dlEt~| Clfl the nodv4 exnib/.5 unexpeocod behavior the5 then son be observed? %n order 50 emtreo5 5he phenomena we hove Adent~F%ed An our data For urns Ln evaluating our model, we have boon develop:Leg sod/no 5eol~niquee 5hat are used by trained human oodoPe. Theme oodere de~eo5 and annotate the oeourrenoe OF 5hems phenomena Ln ~,rammor:Lp~e OF prob/.en solvlng &oak. For exmmp/.e, we have been able to treAn ooder8 I:o reZAsbly de~erm£ne a 80 "point of view" for a problem solver at each point in the problem solving from a record of the problem solving report and a record of moves made. Then, we use this extracted trace to evaluate our model of the role of point of view in problem solving. SUMMARY We have reported here a three pronged approach to the study of problem solving action and report: I) the collected of data on problem solving and talk about problem solving, 2) development of a process model of these behaviors, and 3) use of coding techniques to extract traces of "critical phenomena" from the transcripts for evaluating the model. So far, we have focussed our efforts on two types of problem solving phenomena: the changes in the problem solver's organization of the problem ("point of view"), and systematic multl-utterance structures used to express the forms of inference used to solve the problem ("Justificatlon argument structures"). Ericsson, K.A., & Simon, H.A. Thlnking-aloud protocols as data: Effects of verbalization. Pittsburgh, PA: Carnegle-Mellon University, C.I.P. Working Paper #397, 1979. Levin, J.A. Proteus: An actlvation framework for cognitive process models. Marina de1 Rey, CA: Information Sciences Institute, ISI/WP-2, 1976. Levin, J.A. Continuous processing with a discrete memory representation. Paper presented at The LNR Confluence, La Jolla, CA: Center for Human In/ormation Processing, UCSD, 1978. Newell, A., & Simon, H.A. Human oroblem solvln~. Englewood Cliffs, NJ: Prentlce-Hall, 1972. Nisbett, R.E., & Wilson, T.D. Telling more than we can know: Verbal reports on mental processes. Psychological Review. 1977, 84, 231-259. Perelman, C., & Olbrechts-Tyteca, L. The n e w ~ : A treatise o n ~ r ~ . Notre Dame, IN: University of Notre Dame Press, 1969. 81
TOWARDS A SELF-EXTENDING PARSER

Jaime G. Carbonell
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

This paper discusses an approach to incremental learning in natural language processing. The technique of projecting and integrating semantic constraints to learn word definitions is analyzed as implemented in the POLITICS system. Extensions and improvements of this technique are developed. The problem of generalizing existing word meanings and understanding metaphorical uses of words is addressed in terms of semantic constraint integration.

1. Introduction

Natural language analysis, like most other subfields of Artificial Intelligence and Computational Linguistics, suffers from the fact that computer systems are unable to automatically better themselves. Automated learning is considered a very difficult problem, especially when applied to natural language understanding. Consequently, little effort has been focused on this problem. Some pioneering work in Artificial Intelligence, such as AM [1] and Winston's learning system [2], strove to learn or discover concept descriptions in well-defined domains. Although their efforts produced interesting ideas and techniques, these techniques do not fully extend to a domain as complex as natural language analysis. Rather than attempting the formidable task of creating a language learning system, I will discuss techniques for incrementally increasing the abilities of a flexible language analyzer. There are many tasks that can be considered "incremental language learning". Initially the learning domain is restricted to learning the meaning of new words and generalizing existing word definitions. There are a number of A.I. techniques, and combinations of these techniques, capable of exhibiting incremental learning behavior. I first discuss FOULUP and POLITICS, two programs that exhibit a limited capability for incremental word learning. Secondly, the technique of semantic constraint projection and integration, as implemented in POLITICS, is analyzed in some detail. Finally, I discuss the application of some general learning techniques to the problem of generalizing word definitions and understanding metaphors.

2. Learning From Script Expectations

Learning word definitions in semantically-rich contexts is perhaps one of the simpler tasks of incremental learning. Initially I confine my discussion to situations where the meaning of a word can be learned from the immediately surrounding context. Later I relax this criterion to see how global context and multiple examples can help to learn the meaning of unknown words. The FOULUP program [3] learned the meaning of some unknown words in the context of applying a script to understand a story. Scripts [4, 5] are frame-like knowledge representations abstracting the important features and causal structure of mundane events. Scripts have general expectations of the actions and objects that will be encountered in processing a story. For instance, the restaurant script expects to see menus, waitresses, and customers ordering and eating food (at different pre-specified times in the story). FOULUP took advantage of these script expectations to conclude that items referenced in the story, which were part of expected actions, were indeed names of objects that the script expected to see. These expectations were used to form definitions of new words.
For instance, FOULUP induced the meaning of "Rabbit" in, "A Rabbit veered off the road and struck a tree," to be a self-propelled vehicle. The system used information about the automobile accident script to match the unknown word with the script-role "VEHICLE", because the script knows that the only objects that veer off roads to smash into road-side obstructions are self propelled vehicles.

3. Constraint Projection in POLITICS

The POLITICS system [6, 7] induces the meanings of unknown words by a one-pass syntactic and semantic constraint projection followed by conceptual enrichment from planning and world-knowledge inferences. Consider how POLITICS proceeds when it encounters the unknown word "MPLA" in analyzing the sentence: "Russia sent massive arms shipments to the MPLA in Angola." Since "MPLA" follows the article "the" it must be a noun, adjective or adverb. After the word "MPLA", the preposition "in" is encountered, thus terminating the current prepositional phrase begun with "to". Hence, since all well-formed prepositional phrases require a head noun, and the "to" phrase has no other noun, "MPLA" must be the head noun. Thus, by projecting the syntactic constraints necessary for the sentence to be well formed, one learns the syntactic category of an unknown word. It is not always possible to narrow the categorization of a word to a single syntactic category from one example. In such cases, I propose intersecting the sets of possible syntactic categories from more than one sample use of the unknown word until the intersection has a single element.

POLITICS learns the meaning of the unknown word by a similar, but substantially more complex, application of the same principle of projecting constraints from other parts of the sentence and subsequently integrating these constraints to construct a meaning representation. In the example above, POLITICS analyzes the verb "to send" as either an ATRANS or a PTRANS. (Schank [8] discusses the Conceptual Dependency case frames. Briefly, a PTRANS is a physical transfer of location, and an ATRANS is an abstract transfer of ownership, possession or control.) The reason why POLITICS cannot decide on the type of TRANSfer is that it does not know whether the destination of the transfer (i.e., the MPLA) is a location or an agent. Physical objects, such as weapons, are PTRANSed to locations but ATRANSed to agents. The conceptual analysis of the sentence, with MPLA as yet unresolved, is diagrammed below:

[Conceptual Dependency diagram, garbled in the source. Recoverable content: *RUSSIA* transfers (ATRANS/PTRANS) *WEAPONS* to an unresolved RECIPIENT whose location is *ANGOLA*.]

What has the analyzer learned about "MPLA" as a result of formulating the CD case frame? Clearly the MPLA can only be an actor (i.e., a person, an institution or a political entity in the POLITICS domain) or a location. Anything else would violate the constraints for the recipient case in both ATRANS and PTRANS. Furthermore, the analyzer knows that the location of the MPLA is inside Angola. This item of information is integrated with the case constraints to form a partial definition of "MPLA". Unfortunately both locations and actors can be located inside countries; thus, the identity of the MPLA is still not uniquely resolved. POLITICS assigns the name RECIP01 to the partial definition of "MPLA" and proceeds to apply its inference rules to understand the political implications of the event. Here I discuss only the inferences relevant for further specifying the meaning of "MPLA".
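A minimal sketch of the two projection steps just described is given below, under my own simplified assumptions (a toy lexicon and toy case frames, not the POLITICS code): syntactic categories are narrowed by intersection across examples, and the case frame restricts the unknown filler to whatever its slot accepts.

    # Hypothetical sketch of syntactic/semantic constraint projection for an unknown word.
    def project_syntax(examples_categories):
        """Intersect the category sets allowed by each sample sentence."""
        return set.intersection(*examples_categories)

    def project_case_frame(verb_frames, slot):
        """Union of the fillers the candidate case frames allow for the given slot."""
        return set.union(*(frame[slot] for frame in verb_frames))

    # "the ___ in Angola": after "the", closing a prepositional phrase -> must be the head noun.
    syntactic = project_syntax([{"noun", "adjective", "adverb"}, {"noun"}])
    # "send" is ATRANS or PTRANS; their recipient slots take agents or locations respectively.
    semantic = project_case_frame(
        [{"recipient": {"agent"}}, {"recipient": {"location"}}], "recipient")

    print(syntactic)  # {'noun'}
    print(semantic)   # {'agent', 'location'} -- still ambiguous, as in the MPLA example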
4. Uncertain Inference in Learning

POLITICS is a goal-driven inferencer. It must explain all actions in terms of the goals of the actors and recipients. The emphasis on inducing the goals of actors and relating their actions to means of achieving these goals is integral to the theory of subjective understanding embodied in POLITICS. (See [7] for a detailed discussion.) Thus, POLITICS tries to determine how the action of sending weapons can be related to the goals of the Soviet Union or any other possible actors involved in the situation. POLITICS knows that Angola was in a state of civil war; that is, a state where political factions were exercising their goals of taking military and, therefore, political control of a country. Since possessing weapons is a precondition to military actions, POLITICS infers that the recipient of the weapons may have been one of the political factions. (Weapons are a means to fulfilling the goal of a political faction, therefore POLITICS is able to explain why the faction wants to receive weapons.) Thus, MPLA is inferred to be a political faction. This inference is integrated with the existing partial definition and found to be consistent. Finally, the original action is refined to be an ATRANS, as transfer of possession of the weapons (not merely their location) helps the political faction to achieve its military goal.

Next, POLITICS tries to determine how sending weapons to a military faction can further the goals of the Soviet Union. Communist countries have the goal of spreading their ideology. POLITICS concludes that this goal can be fulfilled only if the government of Angola becomes communist. Military aid to a political faction has the standard goal of military takeover of the government. Putting these two facts together, POLITICS concludes that the Russian goal can be fulfilled if the MPLA, which may become the new Angolan government, is Communist. The definition formed for MPLA is as follows:

[Dictionary and memory entries, garbled in the source. Recoverable content: a dictionary entry marking MPLA as a proper noun with token *MPLA*, and a memory entry asserting (PARTOF *ANGOLA*), (IDEOLOGY *COMMUNIST*), and the GOAL that *MPLA* achieve social/political control (SCONT) of *ANGOLA*.]

The reason why memory entries are distinct from dictionary definitions is that there is no one-to-one mapping between the two. For instance, "Russia" and "Soviet Union" are two separate dictionary entries that refer to the same concept in memory. Similarly, the concept of SCONT (social or political control) abstracts information useful for the goal-driven inferences, but has no corresponding entry in the lexicon, as I found no example where such a concept was explicitly mentioned in newspaper headlines of political conflicts (i.e., POLITICS' domain). Some of the inferences that POLITICS made are much more prone to error than others. More specifically, the syntactic constraint projections and the CD case-frame projections are quite certain, but the goal-driven inferences are only reasonable guesses. For instance, the MPLA could have been a plateau where Russia deposited its weapons for later delivery.

5. A Strategy for Dealing with Uncertainty

Given such possibilities for error, two possible strategies to deal with the problem of uncertain inference come to mind. First, the system could be restricted to making only the more certain constraint projection and integration inferences. This does not usually produce a complete definition, but the process may be iterated for other exemplars where the unknown word is used in different semantic contexts.
Each time the new word is encountered, the semantic constraints are integrated with the previous partial definition until a complete definition is formulated. The problem with this process is that it may require a substantial number of iterations to converge upon a meaning representation, and when it eventually does, this representation will not be as rich as the representation resulting from the less certain goal-driven inferences. For instance, it would be impossible to conclude that the MPLA was Communist and wanted to take over Angola only by projecting semantic constraints.

The second method is based on the system's ability to recover from inaccurate inferences. This is the method I implemented in POLITICS. The first step requires the detection of contradictions between the inferred information and new incoming information. The next step is to assign blame to the appropriate culprit, i.e., the inference rule that asserted the incorrect conclusion. Subsequently, the system must delete the inaccurate assertion and later inferences that depended upon it. (See [9] for a model of truth maintenance.) The final step is to use the new information to correct the memory entry. The optimal system within my paradigm would use a combination of both strategies: it would use its maximal inference capability, recover when inconsistencies arise, and iterate over many exemplars to refine and confirm the meaning of the new word. The first two criteria are present in the POLITICS implementation, but the system stops building a new definition after processing a single exemplar unless it detects a contradiction. Let us briefly trace through an example where POLITICS is told that the MPLA is indeed a plateau after it inferred the meaning to be a political faction.

[POLITICS run, 2/06/76; the trace is garbled in the source. Recoverable content: parsing the input story "Russia sent massive arms shipments to the MPLA in Angola." flags MPLA as an unknown word, with syntactic expectation NOUN and semantic expectation of an ATRANS/PTRANS recipient (location or actor); a new memory entry *MPLA* is created, followed by the inferences that *MPLA* may be a political faction of *ANGOLA*, that *RUSSIA* ATRANSed *ARMS* to *MPLA*, that *MPLA* is probably *COMMUNIST* with the goal of taking over *ANGOLA*, and that *RUSSIA*'s goal is for *ANGOLA* to be *COMMUNIST*. In the question-answering dialog POLITICS answers that the MPLA wants the arms to take over Angola, and that the other factions may ask some other country for arms. A second input story, "The Zungabi faction operating from the MPLA plateau received the Soviet weapons," creates a new memory entry *ZUNGABI* and raises two conflicts: an ISA conflict (*MPLA* cannot be both a faction and a plateau) and a script-role conflict over the aid recipient. The attempt to merge the *MPLA* and *ZUNGABI* entries fails; the inference rules are checked, the result of the offending rule is deleted, the conflicts are resolved by taking *MPLA* to be a plateau in *ANGOLA* and *ZUNGABI* to be the aid recipient, and *MPLA* is redefined, creating a new *MPLA* memory entry.]
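The recovery loop summarized in the trace can be sketched abstractly. The code below is a hypothetical toy, not POLITICS itself: assertions remember which rule produced them, blame is assigned by tentatively retracting each rule's results until the contradiction disappears, and the culprit's assertions are dropped so that dependent inferences can be re-checked against the corrected entry.

    # Toy sketch of contradiction-driven recovery over rule-produced assertions.
    def find_culprit(assertions, contradicts):
        """Return a rule whose retraction removes the contradiction, if any."""
        for rule in {a["rule"] for a in assertions}:
            remaining = [a for a in assertions if a["rule"] != rule]
            if not any(contradicts(a) for a in remaining):
                return rule
        return None

    def recover(assertions, contradicts):
        culprit = find_culprit(assertions, contradicts)
        kept = [a for a in assertions if a["rule"] != culprit]
        dropped = [a for a in assertions if a["rule"] == culprit]
        return culprit, kept, dropped

    assertions = [
        {"fact": ("MPLA", "ISA", "political-faction"), "rule": "rule-faction"},
        {"fact": ("MPLA", "IDEOLOGY", "communist"), "rule": "rule-ideology"},
    ]
    new_input = ("MPLA", "ISA", "plateau")
    contradicts = lambda a: a["fact"][:2] == new_input[:2] and a["fact"][2] != new_input[2]
    print(recover(assertions, contradicts))
    # culprit: 'rule-faction'; the surviving assertions can then be re-attached or re-derived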
POLITICS realizes that there is an inconsistency in its interpretation when it tries to integrate "the MPLA plateau" with its previous definition of "MPLA". Political factions and plateaus are different conceptual classes. Furthermore, the new input states that the Zungabi received the weapons, not the MPLA. Assuming that the input is correct, POLITICS searches for an inference rule to assign blame for the present contradiction. This is done simply by temporarily deleting the result of each inference rule that was activated in the original interpretation until the contradiction no longer exists. The rule that concluded that the MPLA was a political faction is found to resolve both contradictions if deleted. Since recipients of military aid must be political entities, the MPLA being a geographical location no longer qualifies as a military aid recipient. Finally, POLITICS must check whether the inference rules that depended upon the result of the deleted rule are no longer applicable. Rules, such as the one that concluded that the political faction was communist, depended upon there being a political faction receiving military aid from Russia. The Zungabi now fulfills this role; therefore, the inferences about the MPLA are transferred to the Zungabi, and the MPLA is redefined to be a plateau. (Note: the word "Zungabi" was constructed for this example. The MPLA is the present ruling body of Angola.)

6. Extending the Project and Integrate Method

The POLITICS implementation of the project-and-integrate technique is by no means complete. POLITICS can only induce the meaning of concrete or proper nouns when there is sufficient contextual information in a single exemplar. Furthermore, POLITICS assumes that each unknown word will have only one meaning. In general it is useful to realize when a word is used to mean something other than its definition, and subsequently formulate an alternative definition. I illustrate the case where many examples are required to narrow down the meaning of a word with the following example: "Johnny told Mary that if she didn't give him the toy, he would <unknown-word> her." One can induce that the unknown word is a verb, but its meaning can only be guessed at, in general terms, to be something unfavorable to Mary. For instance, the unknown word could mean "take the object from", or "cause injury to". One needs more than one example of the unknown word used to mean the same thing in different contexts. Then one has a much richer, combined context from which the meaning can be projected with greater precision. Figure 1 diagrams the general project-and-integrate algorithm. This extended version of POLITICS' word-learning technique addresses the problems of iterating over many examples and multiple word definitions, and does not restrict its input to certain classes of nouns.

7. Generalizing Word Definitions

Words can have many senses, some more general than others. Let us look at the problem of generalizing the semantic definition of a word. Consider the case where "barrier" is defined to be a physical object that disenables a transfer of location (e.g. "The barrier on the road is blocking my way."). Now, let us interpret the sentence, "Import quotas form a barrier to international trade." Clearly, an import quota is not a physical object. Thus, one can minimally generalize "barrier" to mean "anything that disenables a physical transfer of location." Let us substitute "tariff" for "quota" in our example.
This suggests that our meaning for "barrier" is insufficiently general. A tariff cannot disenable physical transfer; tariffs disenable willingness to buy or sell goods. Thus, one can further generalize the meaning of barrier to be: "anything that disenables any type of transfer". Yet, the trace of the generalization process must be remembered because the original meaning is often preferred, or metaphorically referenced.

[Figure 1: The project-and-integrate method for inducing new word and concept definitions. The flowchart is garbled in the source; its recoverable steps are: project the syntactic and semantic constraints from an analysis of the other components of the sentence containing the unknown word; integrate the constraints into a partial word definition; integrate global context to enrich the definition; check the consistency of the goal-driven interpretation against existing word senses; if there is no match, postulate a new word sense and build a separate definition; on error, delete the culprit inference.]
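Minimal constraint relaxation of this kind can be sketched as follows; this is my own illustrative rendering, with invented feature names, of the idea that a violated selectional constraint is generalized just enough to admit the new use while the more specific sense is kept.

    # Toy sketch of word-sense generalization by minimal constraint relaxation.
    SENSE_HIERARCHY = {"physical-object": "anything", "physical-transfer": "transfer"}

    def generalize(sense, violated_constraints):
        """Copy the sense, relaxing each violated constraint one step up the hierarchy."""
        new_sense = dict(sense)
        for slot in violated_constraints:
            new_sense[slot] = SENSE_HIERARCHY.get(sense[slot], sense[slot])
        return new_sense

    barrier_1 = {"isa": "physical-object", "disenables": "physical-transfer"}
    # "Import quotas form a barrier ...": the quota is not a physical object.
    barrier_2 = generalize(barrier_1, ["isa"])
    # "Tariffs ... barrier ...": tariffs disenable willingness to trade, not movement.
    barrier_3 = generalize(barrier_2, ["disenables"])
    senses = [barrier_1, barrier_2, barrier_3]     # the most specific applicable sense is preferred
    print(barrier_3)   # {'isa': 'anything', 'disenables': 'transfer'}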
In lesser detail I presented some ideas on how to generalize word meanings and Interpret metaphorical uses of individual words. There are many more aspects to learning language and understanding metaphors that I have not touched upon, For Instance, many metaphors transcend Individual words and phrases. Their Interpretation may require detailed cultural knowledge [10]. In order to place some perspective on project-and-integrate learning method, consider throe general learning mechanisms capable of implementing different aspects of Incremental language learning. Learning hy example. This Is perhaps the most general learning strategy. From several exemplars, one can intersect the common concept by, If necessary, minimally generalizing the meaning of the known part of each example until a common aubpart Is found by Intersection. This common eubpart Is likely to be the meaning of the unknown section of each exemplar. Learning by near-miss analysis. Winston [2] takes full advantage of this technique, it may be usefully applied to a natural language system that can Interactlveiy generate utterances using the words it learned, and later be told whether It used those words correctly, whether It erred seriously, or whether It came close but failed to understand a subtle nuance In meaning. Learning by contextual expectation. EasanUally FOULUP and POLITICS use the method of projecting contextual expectations to the linguistic element whose meaning Is to be Induced. Much more mileage can be gotten from this method, especially If one uses strong syntactic constraints and expectations from other knowledge sources, such as s discourse model, s narrative model, knowledge about who is providing the information, and why the information Is being provided. 9. References T. 2. 3. 4. 5. 6. 7. 8. 9. TO. Lenet, 0. AMz Discovery In Mathematics as Heuristic Search. Ph.D. Th., Stanford University, 1977. Winston, P. Learning Structural Descriptions from Examples. Ph.D. Th., MIT, 1970. Granger, R. FOUL-UPt A Program that Figures Out Meanings of Worcls from Context. IJCAI-77, 1977. Schank, R. C. and Abelson, R.P. Scripts, Goals, Plans and Unclerstancling. Hillside, NJ: Lawrence Erlbaum, 1977. Cullingford, R. Script Appllcationt Computer Uncleratandlng of Newspaper Stories. Ph.D. Th., Yale University, 1977. Carbonell, J.G. POLITICS: Automated Ideological Reasoning. Cognitive Science 2, 1 (1978), 27-51. Carbonell, J.G. Subjective Unclerstancllng: Computer Mo<lels of Belief Systems.. Ph.D. Th., Yale University, 1979. Sohsnk, R.C. Conceptual Information Processing. Amsterdam: North-Holland, 1975. Doyle, J. Truth Malntenanoe Systems for Problem Solving. Master Th., M.I.T., 1978. Lakoff, G. and Johnson, M. Towards an Experimentalist Philosopher: The Case From Literal Metaphor. In preparation for publication, 1979.
DESIGN FOR DIALOGUE COMPREHENSION

William C. Mann
USC Information Sciences Institute
Marina del Rey, CA
April, 1979

This paper describes aspects of the design of a dialogue comprehension system, DCS, currently being implemented. It concentrates on a few design innovations rather than the description of the whole system. The three areas of innovation discussed are:

1. The relation of the DCS design to Speech Act theory and Dialogue Game theory,
2. Design assumptions about how to identify the "best" interpretation among several alternatives, and a method, called Preeminence Scheduling, for implementing those assumptions,
3. A new control structure, Hearsay-3, that extends the control structure of Hearsay-II and makes Preeminence Scheduling fairly straightforward.

I. Dialogue Games, Speech Acts and DCS -- Examination of actual human dialogue reveals structure extending over several turns and corresponding to particular issues that the participants raise and resolve. Our past work on dialogue has led to an account of this structure, Dialogue Game theory [Levin & Moore 1978; Moore, Levin & Mann 1977]. This theory claims that dialogues (and other language uses as well) are comprehensible only because the participants are making available to each other the knowledge of the goals they are pursuing at the moment. Patterns of these goals recur, representing language conventions; their theoretical representations are called Dialogue Games. If a speaker employs a particular Dialogue Game, that fact must be recognized by the hearer if the speaker is to achieve the desired effect. In other words, Dialogue Game recognition is an essential part of dialogue comprehension.

Invoking a game is an act, and terminating the ongoing use of a game is also an act. Dialogue Game theory has recently been extended [Mann 1979] in a way that makes these game-related acts explicit. Acts of Bidding a game, Accepting a bid, and Bidding termination are formally defined as speech acts, comparable to others in speech act theory. So, for example, in the dialogue fragment below,

C: "Mom, I'm hungry."
M: "Did you do a good job on your Geography homework?"

the first turn bids a game called the Permission Seeking game, and the second turn refuses that bid and bids the Information Seeking game.

DCS is designed to recognize people's use of dialogue games in transcripts. For each utterance, it builds a hierarchical structure representing how the utterance performs certain acts, the goals that the acts serve, and the goal structure that makes the combination of acts coherent. (The data structure holding this information is described below in the discussion of Hearsay-3.)

II. Preeminence Scheduling -- It seems inevitable that any system capable of forming the "correct" interpretation of most natural language usage will usually be able to find several other interpretations, given enough opportunity. It is also inevitable that choices be made, implicitly or explicitly, among interpretations. The choices will correspond to some internal notion of quality, also possibly implicit. The notion of quality may vary, but the necessity of making such choices does not rest on the particular notion of quality we use. Clearly, it is also important to avoid choosing a single interpretation when there are several nearly equally attractive ones. What methods do we have for making such choices? Consider three approaches.

1. First-find. The first interpretation discovered which satisfies well-formedness is chosen.
The effectiveness of first-find depends on having well-informed, selective processes at every choice point, and is only reasonable if one's expectations about what might be said are very good. Even then, this method will select incorrect interpretations. Z. Bounded search and ranked choice. Interpretations are generated by a bounded-effort search, each is assigned an individual quality .score of some sort, and the best is chosen. While this will not miss good but unexpected interpretations missed by first-find, it is wrong in at least two ways: a) it selects an interpretation (and discards others) when the quality difference between interpretations is insignificant, and b) it expends unnecessary resources making absolute quality Judgments where only relative Judgments are needed. These defects suggest an lmprovemenh 3. Preeminence selection= perform a bounded-effort search for interpretations, and then select as beat the one (if any) having a certain threshold amount of demonstrable preferability over its competitors. The key to corre::t choice is determination that such a threshold difference in quality exists. DCS is designed to identify preeminent interpretations. Consider the information content in the fact that the best two interpretations have a quality difference exceeding a fixed threshold. This fact is sufficient to choose an interpretation, and yet it carries less information than is carried in a set of quality scores for the same set of interpretations. C~omputaUonal efficiencies are available because the work of creating the excess information can be avoided by proper design. 83 Given s tentative quality scoring of one's alternatives, several kinds of computations can be avoided. For the highest-ranked interpretation, it is pointless to perform computations whose only effect is to confirm or support the interpretation, (even thongh we expect that for correct interpretations the ways to show confirmation will be numerous), since these will only drive its score higher. For interpretations with inferior ranks, it is likewise pointless to perform computations that refute them (although we expect that refutations of poor interpretations will be numerous), since these will only drive their scores lower. Neither of these is relevant to demonstrating preeminence. Given effective controls, computation can concentrate on refuting good interpretation• and supporting weak ones. (Of" course, such computations will sometimes move 8 new interpretation into the role of highe•t-renked. They may also destroy an eppsrent preeminence.) If the gap in quality rating between the highest ranked interpretation end the next one rams/no significant, then proem/nonce has been demonstrated. Further efficlencles are possible provided that the maximum quality r•ting improvement front untr/ed support computation• can be predicted, since it is then posstblo to find case• for which the m•ximum support of • low-ranked interpretation would not eliminate an existing preeminence. Similar efficlencies can arise from predicting the max/mum loss 6f quality available from untr/ed refuter/one. This approach ls being implemented in DCS, IIL Control Structure -- • new AI programming environment called Hearsey-3 is being implemented at ISI for use in development of several systems. It is an augmentation and major revision of some of the control and data structure ideas found in He•rsey-ll [Lesser & Erman 19773, but it is independent of the speech-understandlng task. 
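Before turning to the details of the control structure, the preeminence-selection scheme of Section II can be made concrete with a minimal, deliberately simplified sketch. The Python rendering and all names in it (Interpretation, MAX_DELTA, THRESHOLD) are assumptions introduced for illustration; this is not Mann's implementation, which rests on the Hearsay-3 machinery described next. The sketch spends effort only on refuting the leading interpretation and supporting its closest competitor, and it stops as soon as the quality gap is demonstrably safe from every untried task.

    MAX_DELTA = 1.0      # assumed bound on how much any single untried task can move a score
    THRESHOLD = 2.0      # quality gap required for preeminence

    class Interpretation:
        def __init__(self, name, score, refuters=(), supporters=()):
            self.name = name
            self.score = score                      # tentative quality score
            self.refuters = list(refuters)          # untried tasks that can only lower the score
            self.supporters = list(supporters)      # untried tasks that can only raise the score

    def preeminent(interps):
        """Return an interpretation once it is demonstrably better than its
        competitors by THRESHOLD, or None if no such gap can be shown."""
        while True:
            ranked = sorted(interps, key=lambda i: i.score, reverse=True)
            best, rest = ranked[0], ranked[1:]
            gap = best.score - rest[0].score if rest else float("inf")
            # Early exit: even the maximum effect of the leader's untried refuters
            # plus a runner-up's untried supporters cannot close the gap.
            if all(gap - MAX_DELTA * (len(best.refuters) + len(r.supporters)) >= THRESHOLD
                   for r in rest):
                return best
            # Otherwise spend effort only where the ranking could change:
            # refute the leader, or support the best runner-up.
            if best.refuters:
                best.score -= best.refuters.pop()()          # a refuter returns a penalty in [0, MAX_DELTA]
            elif rest and rest[0].supporters:
                rest[0].score += rest[0].supporters.pop()()  # a supporter returns a credit in [0, MAX_DELTA]
            else:
                return best if gap >= THRESHOLD else None

The early exit mirrors the efficiency argument above: when the maximum possible effect of the remaining support and refutation computations cannot reduce the gap below the threshold, no further work is needed, and no absolute quality judgments are ever computed.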
Hearsay-3 retains interprocess communication by means of global "blackboards," and it represents its process knowledge in many specialized "knowledge source" (KS) processes, which nominate themselves at appropriate times by looking at the blackboard and then are opportunistically scheduled for execution. Blackboards are divided into "levels" that typically contain distinct kinds of state knowledge, the distinctions being used as a gross filter on which future KS computations are considered. Hearsay-3 retains the idea of a domain-knowledge blackboard (BB), and it adds a knowledge-source scheduling blackboard (SBB) as well. Items on the SBB are opportunities to exercise particular scheduling specialists called Scheduling Knowledge Sources (SKS).

The SBB is an ideal data structure for implementing Preeminence Scheduling. In DCS the SBB has four levels, called Refutation, Support, Evaluation and Ordinary-consequence. These correspond to a factoring of the domain KS into four groups according to their effects. Knowledge sources in each of these groups nominate themselves onto a different level of the SBB. The scheduling knowledge sources (SKS) perform preeminence scheduling (when a suitable range of alternatives is available) by selecting available Refutation-level opportunities for the highest-ranked interpretation and Support-level opportunities for inferior ones. (The SBB and SKS features of Hearsay-3 are only two of its many innovations.)

The DCS BB has 6 levels, named Text, Word-senses, Syntax, Propositions, Speech-acts and Goals. Goals and goal structures, which are required in any successful analysis, only arise as explanations of speech acts. The KS used for deriving speech acts from utterances are separate from those deriving goals from speech acts. The hierarchic data structure representing an interpretation of an utterance consists of units at various levels on the Hearsay-3 blackboard.

USING DCS

These innovations and several others will be tested in DCS in attempts to comprehend human dialogue gathered from non-laboratory situations. (One of these is Apollo astronaut-to-ground communication.) Transcripts of actual interpersonal dialogues are particularly advantageous as study material, because they show the effects of ongoing communication and because they are free of the biases and narrow views inevitable in made-up examples.

ACKNOWLEDGMENTS

The work reported here was supported by NSF Grant MCS-70-07332.

REFERENCES

Lesser, V. R., and L. D. Erman, "A Retrospective View of the HEARSAY-II Architecture," Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA, 1977.

Levin, J. A., and J. A. Moore, "Dialogue Games: Meta-communication Structures for Natural Language Interaction," Cognitive Science 1, 4, 1978.

Moore, J. A., J. A. Levin, and W. C. Mann, "A Goal-oriented Model of Human Dialogue," American Journal of Computational Linguistics, microfiche 67, 1977.

Mann, W. C., "Dialogue Games," in MODELS OF DIALOGUE, K. Hintikka, et al. (eds.), North Holland Press, 1979.
Plans, Inference, and Indirect Speech Acts I James F. Allen Computer Science Department University of Rochester Rochester, NY Iq627 C. Raymond Perrault Computer Science Department University of Toronto Toronto, Canada MSS IA7 Introduction One of the central concerns of a theory of pra~atics is to explain what actions language users perform by making utterances. This concern is also relevant to the designers of conversational language understanding systems, especially those intended to cooperate with a user in the execution of some task (e.g., the Computer Consultant task discussed in Walker [1978]). All actions have effects on the world, and may have preconditions which must obtain for them to be successfully executed. For actions whose execution causes the generation of linguistic utterances (or s~eeqh acts), the preconditions may include the speaker/wrlter holding certain beliefs about the world, and having certain intentions as to how it should change ([Austin, 1962], [Searle, 1969]). In Cohen [1978] and Cohen and Perrault [1979] i t is suggested that speech acts a• be defined in the context of a plannln~ s~stam (e.g., STRIPS of Fikes and Nllsson [1971]) i.e., as a class of parameterlzed procedures called operators, whose execution can modify the world. Each operator is labelled with formulas stating its preconditions and effects. The major problem of a theory of speech acts is relating the form of utterances to the acts which are performed by uttering them. Several syntactic devices can be used to indicate the speech act being performed: the most obvious are explicit performative verbs, mood, and intonation. But no combination of these provides a clear, single-valued function from form to illocutionary force. For example, (1.a)-(1.e) and even (1.f) can be requests to pass the salt. 1.a) I want you to pass the salt. 1.b) Do you have the salt? 1.c) Is the salt near you? 1.d) I want the salt. 1.e) Can you pass the salt? 1.f) John asked me to ask you to pass the salt. Furthermore, all these utterances can also be intended literally in some contexts. For example, a parent leaving a child at the train station may ask "Do you know when the train leaves?" expecting a yes/no answer as a confirmation. • This research was supported in part by the National Research Council of Canada under Operating Grant A9285. ee Unless otherwise indicated, we take "speech act" to be synon~nnous with "illocutionary act." The object of this paper is to discuss, at an intuitive level, an extension to the work in Cohen [1978] to account for indirect speech acts. Because of space constraints, we will need to depend explicitly on the intuitive meanings of various terms such as plan, action, believe, and goal. Those interested in a more rigorous presentation should see [Allen, 1979] or [Perrault and Allen, forthcoming]. The solution proposed here is based on the following slmple and independently motivated hypotheses: (2.a) Language users are rational agents and thus speech acts are purposeful. In particular, they are a means by which one agent can alter the beliefs and goals of another. (2.b) Rational agents are frequently capable of identifying actions being performed by others and goals being sought. An essential part of helpful behavior is the adoption by one agent of a goal of another, followed by an attempt to achieve it. For example, for a store clerk to reply "How many do you want?" to a customer who has asked "Where are the steaks? 
e, the clerk must have inferred that the customer wants steaks, and then he must have decided to get them himself. This might have occurred even if the clerk knew that the custamer had intended to get the steaks himself. Cooperative behavior must be accounted for independently of speech acts, for it often occurs without the use of language. (2.c) In order for a speaker to successfully perform a speech act, he must intend that the hearer recognize his intention to achieve certain (perlocutionary) effects, and must believe it is likely that the hearer will be able to do so. This is the foundation the account of illooutionary acts proposed by Strawson [196q] and Searle [1969], based on Grice [1957]. (2.d) Language users know that others are capable of achieving goals, of recognizing actions, and of cooperative behavior. Furthermore, they know that others know they know, etc. Thus, a speaker may intend not only that his actions be recognized but also that his goals be in/erred, and that the hearer be cooperative. (2.e) Thus a speaker can perform one speech act A by performing another speech act B if he intends that the hearer recognize not only that B was performed but also that through cooperative behavior by the hearer, intended by the speaker, the effects of A should be achieved. 85 Th__~e Speech Act Model In the spirit of Searle [1975]; Gordon and Lakoff [1975], and Horgan [1978]. we propose an account of speech acts with the following constituents: (].a) For each language user S. a model of the beliefs and plans of other language users A with which s/he is coenunicating. Including a model of A's model of S's beliefs and plans, etc, (3.b) Two sets of operators for speech acts: a set of surface level operators which are realized by utterances having specific syntactic and semantic features (e.g.. mood), and a set of lllocutionary level operators whlch are performed by perfoming surface level ones. The tllocutionary acts model the intent of the speaker Independent of the form of the utterance. (3.c) A set of plausible Inference rules with which language users construct and reco~nlze plans. It Is convenient to view the rules as either simple or augmented: A couple of examples of simple plan recognition rules are: fAction-Effect Znference] "If agent S believes that agent A wants to do action ACT then it is plausible that 3 believes that A wants to achieve the effects of ACT." [Know-Positive Znferenoe] "Zf S believes A wants to know whether a proposition P is true. then it is plausible that S believes that A wants to achieve P." Of course, given the conditions in the second inference above. S might also infer that A ham a goal of achieving not P. This is another possible inference. Which applies in a given setting is detemlned by the rating heuristics (see 3.d below). Simple rules can be augmented by adding the condition that the recognizer believes that the other agent intended him to perfom the inference. An example of an augmented recognition rule is: "If S believes that A wants S to re.=ognize A's intention to do ACT. then it is plausible that S believes that A wants S to recognize A's intention to achieve the effects of ACT." Notice that the augmented rule is obtained by intrc~uclng "S believes A wants" In the antecedent and consequent of the simple rule. and by interpreting "S recognizes A's intention" as "S comes to believe that A wants." Theme rules can be constructed from the simple ones by assuming that language users share a model of the construction and recognition processes. 
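The simple and augmented rules just described in (3.c) can be pictured with a small sketch. The nested-tuple belief notation and every function name below are assumptions made for illustration only; they are not the representation used by Allen and Perrault.

    # Simple plan-recognition rules return (premise, plausible conclusion) pairs.

    def believes(agent, p): return ("believe", agent, p)
    def wants(agent, p):    return ("want", agent, p)
    def do(agent, act):     return ("do", agent, act)
    def effects(act):       return ("effects", act)
    def knows_if(agent, p): return ("knowif", agent, p)

    def action_effect(S, A, act):
        """Action-Effect: if S believes A wants to do ACT, then plausibly
        S believes A wants to achieve the effects of ACT."""
        return (believes(S, wants(A, do(A, act))),
                believes(S, wants(A, effects(act))))

    def know_positive(S, A, p):
        """Know-Positive: if S believes A wants to know whether P, then
        plausibly S believes A wants to achieve P."""
        return (believes(S, wants(A, knows_if(A, p))),
                believes(S, wants(A, p)))

    def augment(simple_rule):
        """Build the augmented ('intended recognition') form of a simple rule by
        wrapping antecedent and consequent in 'S believes A wants S to come to
        believe that ...', the reading given to 'S recognizes A's intention'."""
        def augmented(S, A, x):
            premise, conclusion = simple_rule(S, A, x)
            wrap = lambda belief: believes(S, wants(A, belief))
            return wrap(premise), wrap(conclusion)
        return augmented

    # The augmented Action-Effect rule, used when the inference is intended:
    premise, conclusion = augment(action_effect)("S", "A", "pass the salt")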
(3.d) A set of heuristics to guide plan recognition by rating the plausibility of the outcomes. One of the heuristics iS: "Decrease the plausibility of an outcome in which an agent Is believed to be executing an action whose effects he already believes to be true." Soripl~-derived expectations also provide s~e of the control of the recognition process. (3.e) A set of heuristics to identify the obstacles in the recognized plan. These are the goals that the speaker cannot easily achieve without assistance. If we assume that the hearer is cooperating with the speaker, the hearer will usually attempt to help achieve these goals in his response. With these constituents, we have a model of helpful behavior: an agent S hears an utterance from some other agent A. and then Identifies the surface speech act. From this. S applies the inference rules to reconstruct A's plan that produced the utterance. S can then examine this plan for obstanles and give s helpful response based on them. However, some of the inference rules may have been augmented by the recognition of intention condition. Thus. some obstacles may have been intended to be communicated by the speaker. These specify whet tllooutionary act the speaker performed. an Example This may become clearer if we consider an example. Consider the plan that must be deduced In order to answer (4.e) with (..b): (~.a) A: Do you know when the Windsor train leaves? (4.b) S: Yes, at 3:15. The seal deduced from the literal Interpretation is that (4.o) A wants to know whether S knows the departure time. From this goal. 3 may infer that A in fact wants (4.d) by the Know-Positive Znference: (..d) A wants S to know the departure time from which S may infer that (q.e) A wants $ to inform Aot the departure time by the precondition-action Inference (not shown). S can then infer, using the action-effect inference, that (4.f) A wants to know the departure time. S'S response (~.b) indicates that ha believed that both (~.c) and (4.f) were obstacles that S could overcome In this response. However. a sentence such as (4.a) could often be uttered in a context where the literal goal is not an obstacle. For instance. A might already know that $ knows the departure time. Met still utter (4.a). Xn such cases. A's goals are the same as If ha had uttered the request (4.g) When does the Windsor train leave? Hence (~.a) is often referred to as an indirect request. Thus we have described two different interpretations of (q.a): a) A said (q.a) merely expecting a yes/no answer, but $ answered wlth the extra information in order to be helpful; b) A said (4.a) Intending that S deduce his plan and realize that A really wants to ~now the departure time. 86 Theoretically, these are very different: (a) describes a yes/no question, while (b) describes an (indirect) request for the departure time. But the distinction is also IMportant for practical reasons. For instance, assume S is not able to tell A the departure time for some reason. With interpretation (a), S can simply answer the question, whereas with interpretation (b), S is obliged to glve a reason for not answering with the departure time. The distinction between these two cases is simply that in the latter, S believes that A intended S to make the inferences above and deduce the goal (q,f). Thus the inferences applied above were actually augmented inferences as described previously. In the former interpretation, S does not believe A intended S to make the inferences, but did anyway in order to be helpful. 
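A compact way to see the difference between the two readings is to mark each inferred goal with whether the hearer believes it was intended to be recognized. The sketch below is illustrative only (the class name, strings, and flag are inventions of this exposition); it simply replays the chain (4.c)-(4.f) from the train-station example.

    from dataclasses import dataclass

    @dataclass
    class Step:
        goal: str          # goal attributed to the speaker A
        intended: bool     # does S believe A intended S to draw this inference?

    def interpret(intended: bool):
        """Replay the inference chain (4.c) -> (4.d) -> (4.e) -> (4.f)."""
        return [
            Step("A wants S to know whether S knows the departure time", True),   # literal goal (4.c)
            Step("A wants S to know the departure time", intended),               # Know-Positive (4.d)
            Step("A wants S to inform A of the departure time", intended),        # precondition-action (4.e)
            Step("A wants to know the departure time", intended),                 # action-effect (4.f)
        ]

    def illocutionary_force(chain):
        """If the final goal was intended to be recognized, the utterance is an
        indirect request; otherwise it is a literal yes/no question, and any
        extra information in the answer is merely helpful."""
        return "indirect request" if chain[-1].intended else "yes/no question"

    print(illocutionary_force(interpret(intended=True)))    # -> indirect request
    print(illocutionary_force(interpret(intended=False)))   # -> yes/no question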
Concludln~ Remarks This speech act model was implemented as part of a program which plays the role of a clerk at a train station information booth [Allen, 1979]. The main results are the following: (5.a) (5.b) It accounts for a wide class of indirect forms of requests, assertions, and questions, including the examples in (I). This includes idiomatic forms such as (1.a) and non-idlomatlc ones such as (1.f). It does so using only a few independently necessary mechanisms. It maintains a distinction between tllocuttonary and perlocutionary acts. In particular, it accounts for how a given response by one participant B to an utterance by A may be the result of different chains of inferences made by B: either B believed the response given was intended by A, or 8 believed that the response was helpful (i.e., non-intended). It also shows some ways in which the conversational context can favor some interpretations over others. The main objective of our work is to simplify the syntactic and semantic components as much as possible by restricting their domain to literal meanings. The indirect meanings are then handled at the plan level. There remain several open problems In a theory of speech acts which we believe to be largely independent of the issue of indirection, notably identifying the features of a text which determine literal tllocutlonary force, as well as constructing representations adequate to express the relation between several lllocutionary force indicators which may be present in one sentence (see [Lakoff, 197q] and [Morgan, 1973]). Bibliography Allen, J.F. A Plan-Based Approach to Speech Ac_tt Recognition. Ph.D. thesis, Computer Science Department, University of Toronto, 1979. Austin, J.L. How To Do Thln~s With Words. New York, Oxford University Press, 1962. Brown, G.P. An Approach to Processing Task-Oriented Dialogue, unpublished ms, MIT, 1978. Cohen, P.R. On Znowin 6 What to Say: Plannin~ Speech Acts, TR 118, Computer Science Department, University of Toronto, January 1978. Cohen, P.R. and Perrault, C.N. Elements of a Plan Based Theory of Speech Acts, forthcoming. Cole, P. and Morgan, J.L. Syntax and Semantics, Vol 3: Speech Acts. New York, Academic Press, 1975. Flkes, R.E. and Nllsson, N.J. STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelli~ence 2, 189-205, 1971. Gordon, D. and Lakoff, G. Conversational Postulates, in Cole and Morgan (ads), 1975. Grice, H.H. Meaning. Phil. Rev. 66, 377-388, 1957. Lakoff, O. Syntactic Amalgams. CL__SS 10, 321-3qU, 197q. Morgan, J.L. Sentence Fra~ents and the Notion 'Sentence,' in B.B. Kachru et al. (ads), Issues in Lln~uistics. Urbana, University of Illinois Press, 1973. Morgan, J.L. Towards a Rational Model of Discourse Comprehension, in Proceedin~s __2nd Conf. Theoretical Issues in Natural Language Procesain 6, Champaign-Urbana, 1978. Perrault, C.R. and Allen, J.F. A'Plan-Based Analysis of Indirect Speech Acts, in preparation. Searle, J.R. Speech Acts. New York, Cambridge University Press, 1969. Searle, J.R. Indirect Speech Acts, in Cole and Morgan (eda), 1975. Strawson, P.F. Intention and Convention in Speech Acts. Phil. Rev. 73, q, q39-~60, 196~. Walker, D.E. Understandin~ Spoken Language. New York, North Holland, 1978. 87
APPLICATIONS DAVID G, HAYS HeXagram Truth, like beauty, is in the eye of the beholder, Z offer a few remarks for the use of those who seek a point of view from which to see truth in the six papers assigned to this session. Linguistic computation is the fundamental and primitive branch of the art of cumputatlon~ as I have remarked off and on. The insight of yon Neumann~ that operations and data can be represented in the same storage device, is the linguistic insight that anything can have a name in any language. (Whether anything can have a definition is a different question.) I recall surprising a couple of colleagues with this r~ark early in the 1960s, when I had to point out the obvious fact that compillng and interpreting are linguistic procedures and therefore that only in rare instances does a computer spend more time on mathematics than on linguistics. By now we all take the central position of our subject matter for granted. I express this overly familiar truth only for the pragmatic reason that some familiar truths are more helpful than others in preparing for a given discourse. Syntax needs semantic Justification, but semantics has the inherent Justification that knowledge is power. The semantic Justification of syntax is easy: Who would try to represent knowledge without a good gr~--,-r? I have not yet found a better illustration than the tlmstable~ an example that I have used for some years now. Without rules of arrangement and interpretation, the timetable collapses into a llst of places, the digits 0,..9, a~d a few speclal symbols. Almost all of the information in a timetable is conveyed by the syntax, and one suspects that the same is true of the languages of brains, minds, and computers. Syntax needs more than semantic Justification, and pra 8- m-tlcs is ready to serve. Without pragmatic Justifica- tion, the difference between cognitive and syntactic structures is ridiculous. We may find more Justifiers later, but the rediscovery of pragmatlce is a boon to those who grow tired of hearing language maligned, It is easy to make fun of Engllsh , the language of Shakes spears, Bertrand Russell, and modern science. But the humor sometimes depends on the ignorance of the Joker. We find first semantic, then prasmatic, and perhaps later other kinds of Justification for the quirkiness of English and other languages, and the Jokes loss their point. Form, not content, admits of calculation. Since Aristo- tle proceeded in accordance with this rule, I find it surprising that John Locke omitted mention of the simple ideas in reflectlon. (One may recall that Locke knew of simple ideas in perceptlon~-~ellow, warm# amoot~nd considered knowledge to derive from perception and re- flection.) Listing the sidle ideas in reflection selml in fact to be a task for our century, anticipated in part in the L9th century. Predication, Ins~an~isClonp membership, component, g, denoCation~ localization, morali- zation are some candidates that presently show strength. Content, not form, dlsamblguates, A more precise state- ment is that specific and not general knowledge fixes our interpretations of what we encounter, certainly in language and probably also in other channels of peroep- tlon. Thus the great body of knowledge of our culture I of the individual mind, or of the ~asslve database makes lends an appearance of fixedness and stability to the world that simpler minds, cultures, and co~uters cannot get. 
The general rules of syntax, semantics, and prag- matlcs define the thinkable, allowing ambiguity wheQ some specific issue comes up. In a hash house or a con- versation, understanding and trust come with complete and exact information. Conversation is a social activity. The thinking computer (Raphael's title) may be an artificial mind, but the con- versing computer (William D. Orr's cltle) is an artifi- cial person and must accept the obligations of social converse. Those obligations are massive: "to do justice and love mercy*', "to do unto others as you would have them do unto you", to act only as ic would be well for all to act, to express fully and concisely what is rele- vant, "to tell the truth, the whole truth, and nothing hut the truth". Trust precedes learning. Lest anyone suppose chat I have listed the precepts of our greatest masters in a spirit of fun, I hasten to add this obvious truth from study of our species. Whether the sciences be called social, be- havioral, or human, they tell us that one accepts know- ledge for one's own store only from sources that can be trusted. Nor could wisdom dictate the opposite, since internalized knowledge is inaccessible to test and cor- rection. Is the computer worthy of trust? I have asked this question of students, grading the con- text from simple arithmetic trust (they trust their poc- ket calculators to give accurate sums and products) co complex personal trust (they would not accept the compu- ter as a friend). We have, I chink, no experience with computers that are functionally worthy of crust in any but simple matters. We may be learning to make computers follow the masters' precepts in conversation. Whether their users will ever accept them for what they are worth is hard to predict. If computers grow trustworthy and are assigned important tasks, then when crisis occurs the issue of trust may determine such outcomes as war or peace. Thus the issue is not frivolous. Trust arises from knowledge of origin as well as from knowledge of functional capacity. Genetic and cultural history provide enormous confirmation that a neighbor can be trusted, beyond even broad experience. We can gain only a little knowledge about a friend in the course of a friendship, but we can bring to bear all of our own inherent mechanisms of trust for those that look and smell llke us when crisis occurs. The six papers in this session, written by human beings and selected by persons of authority~ deserve sufficient true~ that the reader may learn from them. The systems that they describe may grow into knowledgeable, semanti- cally and pragmatically effective, syntactically well- formed conversents. Their contributions are to that end, and have the advantage that, by seeking to apply know- ledge they can detect its limits. Science needs application, since contact with reallt 7 tends to realnd us scientists that there are more things out there than are dreamed of in our theories. 89
EUFID: A FRIENDLY AND FLEXIBLE FRONT-END FOR DATA MANAGEMENT SYSTEMS Marjorie Templeton System Development Corporation, Santa Monica, CA. EUFID is a natural language frontend for data management systems. It is modular and table driven so that it can be interfaced to different applications and data manage- ment systems. It allows a user to query his data base in natural English, including sloppy syntax and mis- spellings. The tables contain a data management system view of the data base, a semantic/syntactic view of the application, and a mapping from the second to the first. We are entering a new era in data base access. Computers and terminals have come down in price while salaries have risen. We can no longer make users spend a week in class to learn how to get at their data in a data base. Access to the data base must be easy, but also secure. In some aspects, ease and security go together because, when we move the user away from the physical character- istics of the data base, we also make it easier to screen access. EUFID is a system that makes data base access easy for an untrained user, by accepting questions £n natural English. It can be used by anyone after a few minutes of coaching. If the user gets stuck, he can ask EUFID for help. EUFID is a friendly but firm interface which includes security features. If the user goes too far in his questions and asks about areas outside of his authorized data base, EUFID will politely misunderstand the question and quietly log the security violation. One beauty of EUFID is its flexibility. It is written in FORTRAN for a PDP-II/70. With minor modifications it could run on other minl-computers or on a large com- puter. It is completely table driven so ~hat it can handle different data bases, different views of the same data base, or the same view of a restructured data base. It can be interfaced with various data management systems--currently it can access a relational data base via INGRES or a network data base via WWDMS. EUFID is an outgrowth of the SDC work on a conceptual processor which was started in 1973. 1 It is now demon- strable with a wide range of sentences questioning two data bases. It is still a growing system with new power being added. In the following sections we will explore the features that make EUFID so flexible and easy to use. The main features are: • natural English • help • semantic tables • data base tables s mapping tables s intermediate language • security i. NATURAL ENGLISH EUFID has a dictionary containing the words that the users may use when querying the data base. The dictionary describes how words relate to each other and to the data base. Unlike some other natural language systems, EUFID has the words in the sentence related to fields in the data base by the time the sentence is "understood." More will be said about this process in the section on semantic tables. EUFID is forgiving of spelling and grammar errors. If i~ does not have a word in the dlctionary t but has a word that is close in spelling, it will ask the user if a substitution can be made. It also can "understand" a sentence even when all words are not present or ~ome words are not grammatically correct. For example, any of these queries are acceptable: "What companies ship goods?" "Companies?" (list all companies) "What company shop goods?" ("shop" will be corrected to "ship". 
The plural "companies" will be assumed) Users are free to structure their input in any way that is natural to them as long as the subject matter covers what is in the data base. EUFID would interpret these questions in the same way: "Center shipped heavy freight to what warehouses in 1976?" "What warehouses did Center ship heavy freight to in 1976?" Each user may define personal synonyms if tile vocabulary in the dictionary is not rich enough for him. For example, for efficiency a user might prefer to use "wh" for "warehouse" and "co" for "company". Another user of the same data base might define "co" for "count". 2. HELP Basically, EUFID has only four commands. These are "help", "synonym" (to define a synonym), "comment" (to criticize EUFID), or "quit". These four commands are described in the help module as well as the general guidelines for questions. If the user hits an error while using EUFID, he wlll receive a sentence or two at his terminal which describes the problem. In some cases he will be asked for clari- fication or a new question as shown in these exchanges. User: "What are the names of female secretaries' children?" EUFID: "Do you mean (i) female secretaries or (2) female children?" User: "2" or User: "What is the salary of the accounting department?" EUFID: '~e are unable to understand your question because "salary of department" is not meaningful. Please restate your question." If the description is not enough to clarify the problem, the user can ask for help. First, HELP will give a deeper description of the problem. If that is not enough, the user can ask for additional information which may include a llst of valid questions. 3. TABLES EUFID is application and data base independent. Thls independence is achieved by having three sets of tables-- the semantic dictionary tables, the data base tables, and the mapping tables which map from the semantic view to the data base. Conceivably, a single semantic view could map to two data bases that contain the same data but are accessed by different data management systems. 91 3.1 SEMANTIC TABLES The semantic view is defined by an application expert working with a EUFID expert. Together the 7 determine the ways chat a user mlghc want to talk about the data. From this, a llsC of words is developed and the basic sentence structures are defined. Words are classed as: entitles (e.g., company) events (e.g., send) funcClons (after 1975) parrs of a phrase or idiom (map coordlnaCes) connectors (co) system words (the) anaphores (ic) two or more of the above (ship an enClCy plus ship an event) An entity corresponds approximately co a noun and an event co a verb. Connectors are preposlClons which are dropped after the sentence is parsed. System words are conjunctions, auxiliaries, and decermlners whloh partici- pate in determining meaning buc do noC relate co data base fields. Anaphores are words chac refer Co previous words and are replaced by them while parsln 8. Basically then, the only words chat relate co the items in the data base are entities, events, and funcclons. Entities and events are defined using a case structure representation which combines synCacclc and sm---clc information. Lexlcal items which may co-occur with an entity to form noun phrases, or wlch a verb co form verb phrases, fill cases on the enClCy or event. Cases are disclngulshed by the sac of possible fillers, the possible connectors, and the syncactlc position of the case relaclve co the antic 7 or event. A case may be specified as opclonal or obllgacory. 
A sense of an entity or event is defined by the set of cases which form a distinct noun phrase or verb phrase type. Three senses of the word "ship" are illustrated in Figure 1.

[Figure 1: Example contextual case structures for three senses of "ship", showing cases such as SHIPPING COMPANY and GOODS marked obligatory or optional -- diagram not reproduced.]

The first sense of "ship" accounts for active voice verb phrases with the pattern "Companies ship goods to companies in year." Examples are:

What companies ship to Ajax?
In 1976, who shipped light freight to Colonial?

This sense of "ship" has two obligatory cases, A and C, and two optional cases, B and H. The fact that the "year" case can be moved optionally within the phrase is not represented within the case structure, but is recognized by the Analyzer, which assigns a structure to the phrase.

The second sense of "ship" accounts for the passive construction of the type "Goods are shipped to company by company." Examples are:

Was light freight shipped to Ajax in 1978?
What goods were shipped to Ajax by Colonial?
By what companies in 1975 was heavy freight shipped to Colonial?

Case D has the same filler as case B, but precedes "ship" and is obligatory. Case E has the same filler as case A, but follows "ship", has a different connector, and is optional. That is, sense 1 of "ship" is defined as the association of "ship" with cases A, B, C. Sense 2 is the association of "ship" with cases C, D, E.

Sense 3 of "ship" describes the nominalized form "shipment" and explicitly captures the information that shipments involve goods and reflect transactions between companies. An example is: "What is the transaction number for the shipment of bolts from Colonial to Ajax?"

3.2 DATA BASE TABLES

The data base tables describe the data base as viewed by the data management system. Since all data management systems deal with data items organized into groups that are related through links, it is possible to have a common table format for any data management system. The data base tables actually consist of two tables. The CAN table contains information about groups and data items. A group (also called entity or record in other systems) is identified by the group name. A data item in the CAN table consists of the data item name, the group to which it belongs, a unit code, an output identifier, and some field type information. Notably missing is anything about the byte within the record or the number of bytes. EUFID accesses the data base through a data management system. Therefore, the data can be reorganized without changing the EUFID tables as long as the data items retain their names and their groupings. The second data base table is the REL table, which contains an entry for each group with its links to other groups. For network data bases, the link is the chain name for the primary chain that connects master and detail records. For relational data bases, every data item pair in the two groups that can have the same value is a potential link.

3.3 MAPPING TABLES

The mapping tables tell the program how to get from the semantic nodes, as found in the semantic dictionary, to the data base field names. Each entry in the mapping table has a node name followed by two parts. The first part describes the pattern of cases and their fillers for that node name. The second part is called a production, and it gives the mapping for each case filler. A node may map to a node higher in the sentence tree before it maps to a data base item.
For exalpls, "company name" in the question '~at companies are locacnd in Los Angeles?" may map to a group containing ge~sral company ~n~ormacion. However, "company name" in the question "W~'mt companies ship Co Los Angeles?" may map to a group concain~ng shipping company information. 92 Therefore, it is necessary to first map "company name" up to a higher node that determines the meaning. At the point where a unique node is determined, the mapping is made to a data item name via the CAN table. This data item name is used in the generatlon of the query to the data management system. 4. INTERMEDIATE LANGUAGE EUFID is adaptable to most data management systems with- out changes to the central modules. This is accomplished by using an intermediate language (IL). The main parts of EUFID analyze the question, map it to data items, and then express the query in a standard language (IL). A translator is written for each data management system in order to rephrase the IL query into the language of the data management system. This is an extra step, but it greatly enhances EUFID's flexibility and portability. The intermediate language looks like a relational re- trieval language. Translating it into QUEL is straight- forward, but translating It to a procedural language such as WWDMS is very difficult. The example below shows a question with its QUEL and WWDMS equivalent. QUESTION: WHAT ARE THE NAMES AND ADDRESSES OF THE EXECUTIVE SECRETARIES IN R&D? INGRES IL: RETRIEVE [JOB.EHFLOYEE,JOB.ADDRESS] WHERE (DIV.NAHE = "R&D") AND (DIV.JOB = JOB.NAHE) AND (JOB.NAME = "SECRETARY") AND (JOB.CLASS = "EXECUTIVE") QUEL: range of div is dlv range of Job is Job retrieve (Job.employee,Job.address) where dlv.name = "R&D") and dlv. Job= Job.name and Job.name = "secretary" and Job.class = "executive" W ~ IL: RETRIEVE [JOB.EMPLOYEE,JOB.ADDRESS] WHERE (DIV.DNAME - "R&D") AND (DIV.DIV JOB CH - JOB.DIV_JOBCH) AND (JOB.JNAME - "SECRETARY") AND (JOB.CLASS - "EXECUTIVE") WW'DMS QUERY: INVOKE 'WWDMS/PERSONNEL/ADF' REPORT EUFID-1 ON FILE 'USER/PASSWD/EUFID' FOR TTY QI. LINE "EMPLOYEE NAME =",EMPLOYEE Q2. LINE "ADDRESS "",ADDRESS El. RETRIEVE E-DIV WHERE DNAME " "R&D" WHEN R1. R2. RETRIEVE E-JOB WHERE JNANE - "SECRETARY" AND CLASS - "EXECUTIVE" WHEN R2 PRINT ql PRINT Q2 END 5. SECURITY EUFID protects the data base by removin B the user from direct access to the data management system and data base. At the most general level, EUFID will only allow users to ask questions within the semantics that are defined and stored in the dictionary. Some data items or views of the data could be omitted from the dlctlonazy. At a more specific level, EUFID controls access through a user profile table. Before a user can use EUFID, a 93 system person must define the user profile. This cable states which applications or subsets of applications are available to the user. One user may be allowed Co query everything that is covered by the semantic dictionary. Another user may be restricted in his access. The profile table is built by a concept graph editor. When a new login id is established for EUFID, the system person gives the application name of each application that the user may access. Associated with an applicatlon name is a set of file names of the tables for the appli- cation. If access is to be restricted, a copy of the CAN and mapping function tables is made. The copies are chanEed to delete the data items which the user is not to know about. The names of the restricted tables are then stored in the user's profile record. 
EUFID will still be able to find the words that are used co talk about the data item, but when EUFID maps the word to a removed data item it responds to the user as though the sentence could not be understood. 6. CONCLUSION EUFID is a system that makes data base access easy and direct for an end user so that he does not need to go through a specialist or learn a language to query his own data base, It is modular and table driven so that it can be interfaced with different data management systems and different applications. It is written in hlgh-level transportable languages to run on a small computer for maximum transportability. The case grammar that it uses allows flexibility in sentence syntax, ungrammatical syntaxj and fast, accurate parsing. If the reader wants more detail he is referred to refer- ences 2-4. 7. RE F~E~CES 1. Burger, J., Leal, A., and Shoshanl, A. "Semantic Based Parsing and a Natural-Language Interface for Interactive Data Management," AJCL Microfiche 32, 1975, 58-71. 2. Burger, John F. "Data Base Semantics in the EUFID System," presented at the Second Berkeley Workshop on Distributed Data Management and Computer Networks, May 25-27 1977, Berkeley, CA. 3. Walner, J. L. "Deriving Data Base Specifications from User Queries," presented at the Second Berkeley Workshop on Distributed Data Management and Computer Net-works, May 25-27, 1977, Berkeley, CA. 4. Kameny, I., Welner, J., Crilley, M., Burger, J., Gates, R., and Brill, D. "EUFID: The End User Friendly Interface to Data Management Systems," SDC, September 1978.
WORD EXPERT PARSING l Steven L. Small Department of Computer Science University of Maryland College Park, Maryland 20742 This paper describes an approach to conceptual analysis and understanding of natural language in which linguistic knowledge centers on individual words, and the analysis mechanisms consist of interactions among distributed procedural experts representing that knowledge. Each word expert models the process of diagnosing the intended usage of a particular word in context. The Word Expert Parser performs conceptual analysis through the Interactlons of tl~e individual experts, which ask questions and exchange information in converging on a single mutually acceptable sentence meaning. The Word Expert theory is advanced as a better cognitive model of natural language understanding than the traditional rule-based approaches. The Word Expert Parser models parts o~ tSe theory, and the important issues of control and representation that arise in developing such a model [orm the basis of the technical discussion. An example from the prototype LISP implementation helps explain the theoretical results presented. [. Introduction Computational understanding of natural language requires complex Interactions among a variety of distinct yet redundant mechanisms. The construction of a computer program to perform such a task begins with the development of an organizational framework which Inherently .incorporates certain assumptions about the nature ot these processes and the environment in which they take place. Such cognitive premises affect nro?oundly the scope and substance of computational ~nalysis for comprehension as found in the program. This paper describes a theory of conceptual parsing which considers knowledge about language to be distributed across a collection of procedural experts centered on individual words. Natural language parsing with word experts entails several new hypotheses about the organization and representation of linguistic and pragmatic knowledge for computational language comprenension. The Word Expert Parser [1] demonstrates hpw the word expert qTt~T~ed w£~h certain ocher choices oaseo on previous work, affect structure and process in a cognitive model of parsing. The Word Expert Parser is a cognitive model of conceptual language analysis in which the unit of ltngu~stic knowledge is the word and the fqcu~ o~ research ts the set or processes unoerlyinR comprehension. The model is aimed directly at problem~ of word sense ambiguity and idiomatic expressions, and in greatly generalizing the notion of wora sense, promotes these issues to a central place in the study of language parsing. Parsing models typically cope unsatisfactorily with the wide heterogeneity of usages of particular words. If a sentence contains a standard form of a word, it can usually be parsed; if it involves a less prevalent form which has a different part of speech, perhaps it too can be parsed. Disti.nguishing amen 8 the ~any senses of a common vero, adjective, or pronoun, tar example, or correctly translating idioms are rarely possible, At the source of this difficulty is the reliance on rule-based formalisms, whethar syntactic or semantic (e.g.. cases), which attempt to capture ~he linguistic contributions inherent in constituent chunks or sentences that consist of more than single words. A crucial assumption underlying work on the Word Expert Parser is that the ~undamental unit of linguistic Knowledge is the word. 
and that understanding its sense or role in a particular context is the central parsing process. In the parser to be described, the word expert constitutes the kernel of linguistic knowled~nd zts representation the e~emental data structure. IE is procedural in nature and executes directly as a process, cooperating with the other experts for a given sentence to arrive at a mutually acceptable sentence meaning. Certaln principles behind the parser d 9 nqt follow directly from the view or worn primacy, out ~rom other recent theories of parsing. The cognitive processes involved in language comprehension comprise the focus of linguistic study of the word expert approach. Parsin8 is viewea as an inferential process where linguistic knowledge of syntax and semantics and general pragmatic knowledge are applied in a uniform manner during IThe research described in this renor~ .is funded by the National Aeronautics and Space Admzn~stratton under grant , n umbe, r NSC-7255. Their support is gratefully acKnowleageG, Interpretatlon. This methodological position closely follows that of Rlosbeck (see [2] and [3 ]) and Schank [4]. The central concern with word usage and word sense ambiguity follows similar motivatlons of Wllks [5]. The control structure of the Word Expert Parser results from agreqment .with ~he hypothesis of .Harcus that parsing can he none aetermzntsttcally and ~n a way tn Dhlcn information ,gained through interpretation is permanent [6]. Rieger ~ view of inference as intelligent secectlon tmong a number of competing plausible alternatives {7J of course forms the cornerstone of the new theory. Hi~ ideas on word sense selection for language analysis ([8] and [9~) and strategy selection for general problem solving [10] constitute a consistent cognitive perspective. Any natural language understanding system must incorporate mechanisms to perform word sense dlsa?biguatlo~ in. the context .of ape, n-ended world gnow~eoge, rne Importance at these mechanisms tar wore usage diagnosis derives from the ubiquity of local ambiguities, and brought about the notion chat ~hey be made the central processes of computational analysls an 9 understanding, Consideration of almost any Engllsn content word leads to a realization of the scope of the problem -- with a little time and perhaps help from the dlctlonaFy , man~.dlstinct usages can ee.id~ntifl~d. As.a stmpie lllustrarzon, several usages earn tar the worus "heavy" and "ice" appear in Figure I. Each of. these seemingly" benign words exhibits a rich depth of contextual use, An earlier paper contains.a list at almost sixty verbal usages for the word "take" [llJ. The representation of all contextual word usages in an active way t~at insures their utility for linguistic dlagnasis led to the notion of word experts. Each word expert is a procedural e n t i t ~ ~ f all posslblq contextual interpretations of the -word it represents. = Whe~ placed in a context formed by.expqrts for thg.othe ~ wares In a sentence, earn expert ShOUld De capaole or sufficient context-problng and self-examination to determine successfully' its functional or semantic role, and further, to realize the nature of that function or the precise meaning of the word. The representation and control issues involved in basing a parser on word experts are discussed below, following presentation of an example execution of the existing Word Expert Parser. 2. 
Model Overview The Word Expert Parser successfully parses the sentence "The deep ~hilosopher throws the peach pit into the aeep pit," through cooperation among the appropriate word. experts, Initialization of ~he parser consists or retrlevln~ tr~ experts for "the", "deep', "philosopher", "throw", s", ~ 2An Important aeeumption of the word expert viewpoint is that the set or sucn contextual wars usages is not only finite, but fairly small as well. 3The verspectlve of viewing language through lexlcal contribution~ to structure a~d meaning has naEurallv led to the development of wold experts for co~mon m?rphemes that are not war as ~ana even, experimentally, for ~unctuatlos), Especially important is the word expert tar "-ins', which aids significantly i n helpinR co Some word senses of "heavy" 1. An overweight person is politely called "heavy": "He has become quite heavy." 2. Emotional music is referred to as "heavy": "Mahler writes heavy music." ~. An intensity of precipitation is "heavy": "A heavy snow is expected today." Some word senses of "ice" I. The solid state of water is called "ice": "Ice melts at 0Oc. " 2. "Ice" participates In an idiomatic neminal describing a favorite delight: "Homemade ice cream is delicious." 3. "Dry Ice" is the solid state of carbon dioxide: "Dry ice will keep that cool ;11 day." ~. "Ice" or "iced" describes things that have been cooled (sometimes with ice): "One iced tea to go please." 5. "Ice" also describes things made of ice: "The ice sculptures are beautiful~" 6,7. "Ice hockey" is the name of a popular sport which has a rule penelizln~ an action called "icing": "Re iced the puck causing a face-off." ~. The term "ice box" refers to both a box containing ice used for cooling foods end a refrigerator: "This ice box isn't plugged in~" Flsure 1: Example contextual word usages ".over", and ~o forth, from a dis~ flle~ and .or~anizin 8 them along with data repositories cal~e~ wor~ oIns in a left to right order in ~he sentence level wo~k~pace. Note that three copies ot t T~-3R~...t ~or "the" anb c.~o cop.ies of each expert for "deep" and "pit" appear in th~ worKspace. Since each expert executes as a process, each process Inetantlatlon in the workspa..ce must be put into an executaole state. At this point, the parse is ready to begin. The word expert for "the" runs first, and is able to terminate immediately, creating a new concept designator (called a concept bin and participating in the concept level worksp~f~"~iclT-'will eventually hold the data the intellectual philosopher described in the input. Next the "deep" expert runs, and since "deep" has a number of word senses,5 is unable to terNinate (i.e~, complete its dlscriminetlgn task)..Instead,it ~uspenas its execution, stating the conditions upon winch it should be resumed. These conditions take the form of associative trigger patterns, and are referred to as disambiguate expressions Involving gerunds or participles such as "the man eat ir~ tiger". A full discussion ot thls will appear in [12]. 4Al~hough I call them "processes". word experts are actually coroutlnes resembling CONNIVER's generators [tS], and even more so, the stack groups of the MIT L~SP Machine [14]. 51t should be clear that the notion of "word sense" as used here encompasses what might more traditionally be ~escr.ibea as "contextua~ ~orn usage", Aspects o~ a word token's linguistic envlromnent constitute Its broadened "sense". restart demons. 
The "deep" expert creates .a restart demon co wake l'C up when the sense ot the nominal to its right ( l .e., "~hllosopher") becomes knoWn. The exper~ f.or "philosopher now runs, observes the co.ntrol state ot the parser, ant contributes the tact Chat One new concept refers to a person e.ngaged in the study of philosophy. As this expert terminates, the expert tot "=eep" resumes spontaneously, and, constrained by the fact chat "deep" must describe an entity that can be viewed as a person, it finally terminates successfully, contributing the fact that the person is intellectual. The "throw" expert runs next and successfully prunes away several usages of "throw" for contextua, reasons. A major reason for the semantic richness of verbs such as "throw", "cake", and "Jump", is that In context, each interacts strongly with a number of succeedin8 pre~ositions and adverbs to form distinct meaninBs, The woro expert approach easily handles this grouping together or words to torn larger word-like entities. In the particular case of verbs, the expert for a word like ."throw" simply exam.ines.i~.s rSght lex ical n.eighbor, an~ oases its oWn sense alscrtmlnet2on on the co(Rolnetlon or ~ at it .expects co find there, what It actually finds ere, an~ what this neighbor tells it (if It Soas so rat as to ask). No interesting p.article follows throw" in the current exampze, out It snoulo oe easy to conceive or th.e basic expert probes to discriminate the sense of "throw" wnen ;ol-owed by "away", "up", "out" ~ "in the towel", or other woras or wore groups, when no such word rollows "throw". as Is the case nere, its expert slmp-y waits for the existence of an entire concept to Its right, to determine if it meets any of the requirements .~hat would make the correct contextual interpretation of ' throw" different trom the expected "propel by moving ones arm" (e.g., "throw a party'.'). Before any such substantive conceptual activity takes place~ however, .t~ "S" expert ~uns arm ~ontri~uCes Its stannaro morphological information to throw "s data bin. This execution of the "s" expert does not, of course, affect "throw"' s suspended status. The "the" expert for the second "the" in the sentence runs next, and as in the previous case, creates a new con.cep~ bin to represent the da.~a about the no nina~ and des crlptlo.n, to come. Lne "peecn" expert realizes that It coulo oe either a noun or an adjective, and thus attempts what ~ call a "pairing" operation with its right neighbor. It essentially asks the expert for "pit" if the two ot them form a noun-noun pair. To determine the answer, ooth "pit" and "peach" have access to the entire model of linguistic and pragmatic knowledBe. Durtn~ this time. ~peach" is in a st.a~e called "attempting pairing" which Is nlzrerent trom the "suspended" state of the "throw" ex.~.ert. "Pit" answers back that it does pair up with "peach' (since "pit" is aware of its run-time context) and enters the "rea.dy" state. "Peach".now ned:ermines its c.orre~t sense and t;erm~netee: An.d ~nc~ only one mean%ngrul sense ~or'plt remains, the pit expert executes quickly, . t.ermlnattng with the contextually a~pro~riace "trulC pit" sense. As ic terminates, the piC. expert closes off the concept b.in In which It part~cipaces, spontaneously resumins the "throw" expert. An examination of the nature of fruit pit.a reveals that they are pergect.ly suited to propelling with ones. arm, ar~ thus, the "th.row" expert terminates successzul~y, contributing its wore| sense to its event concept bin. 
.The "lnto~ expert, runs next, opens a concept bin ~of t~pe 'setting") rot the time, location, or situation about to be described, and suspends itself. On suspension, "lnto"'s expert posts an associative restart condition that will e.nable .its re.sumptlon when a new p~cture concept ~s opened to the right. This initial action CaKes p~ace rot most prepositions. In certain cases, if the end of a sentence is reached before an appropriate expected concept is opened, an expert will take alternative action. For example, one of the "in" experts restart trigger patterns consists of control state data of Just this kind -- if the end of a sentence is rear.had .and no. conceptuql object, for the sect.ing creaceo oy "In" has oeen round, the "in" expert wxl~ resume nonetheless, and create a default concept t or perform some kind of intelligent reference aeterminatlon. The sentence "The doctor is In." illustrates this point. In the current example~ the. "the" expert that executes lm.med~ately alter t_.nto"'s suspension creates the exporter.picture concept. The wor.d ex~er~..for."deep" then rune ano, as oe~ore, cannot Immedlately olscrlmlnate among Its several se.nses. ."Deep" chug suspend.s, waiting tor the expert rot the word to Its right to neap. At h.ls point, there are two experts suspended, although ~.ne control flow remalns ralrly simple, other examples exist in whlch a complex set or conceptual dependencies cause a number or exper.~s to De suspendedslmultaneously. These situations usuaA.~y resolve themes+yes wl~_h a ca§qadlns o~ expert res,-,ptlons and terminations. In our seep ~c example, "deep" ~oets expectations on the central tableau of global control state Knowledge, and waits rot "pit" to terminate • "PIt"' s expert now runs, and since thls 10 bulletin board contains "deep"'s expectations of a ~ . oI~, or printed matter, "pit" maps immediately onto a large hole in the ground. This in turn, causes both the resumption and termination of the "deep" expert as well as the closure of the concept bin to whlch the~ oelong. At the closing of the concept bin, the "into expert resumes, marks its concept as a location, and terminates. With all the word experts completed and all concept bins closed, the expert for ".'" runs and completes the parse. The concept level workspace now contains five concepts: a picture concept designating an intellectual philosopher, an event concept representing the throwing action, another picture concept describing a fruit pit which came from a peach, a setting concept representing a location, and the picture concept which describes precisely the nature of this location. Work on the mechanism to determine the schematic roles of the concepts has just begun, and is described briefl~ later. A program trace that shows the actions ot the Nora Expert Parser on the example just presented is available on request. 3. Structure of the Model The organization of the parser centers around data repositories on two levels -- the sentence level workspace contains a word bin for each word (and sub-lexical morpheme) of the input and the concept level workspace contains a concept bin (described above) for each concept referred to in the input sentence. A third level of processing, the schema level workspaee, while not yet implemented, will contain a schema for each conceptual action of the input sentence. All actions affecting the contents of these data bins are carried out by the word expert processes, one of which is associated with each word bin in the wo rkspace. 
In addition to this first order information about lexical and conceptual objects, the parser contains a central tableau of control state descriptions available to any expert that can make use of self referential knowledge about its own processing or the states of processing of other model components. The availability of such control state information improves considerably both the performance and the psychological appeal of the model -- each word expert attempting to disambiguate its contextual usage knows precisely the progress of its neighbors and the state of convergence (or the lack thereof) of the entire parsing process.

Word Experts

The principal knowledge structure of the model is the word sense discrimination expert. A word expert represents the linguistic knowledge required to disambiguate the meaning of a single word in any context. Although represented computationally as coroutines, these experts differ considerably from ad hoc LISP programs and have approximately the same relation to LISP as an augmented transition network [15] grammar.(6) Just as the graphic representation of an augmented transition network demonstrates the basic control paradigm of the ATN parsing approach, a graphic representation for word experts exists which embodies its functional framework. Each word expert derives from a branching discrimination structure called a word sense discrimination network or sense net. A sense net consists of an ordered set of questions (the nodes of the network), and for each one, the set of possible answers to that question (the branches emanating from each node). Traversal of a sense network represents the process of converging on a single contextual usage of a word. The terminal nodes of a sense net represent distinct word senses of the word modeled by the network. A sense net for the word "heavy" appears in part (a) of Figure 2. Examination of this network reveals that four senses are represented -- the three adjective usages shown in Figure 1 plus the nominal sense of "thug" as in "Joe's heavy told me to beat it."

Expert Representation

The network representation of a word expert leaves out certain computational necessities of actually using it for parsing. A word expert has two fundamental activities. (1) An expert asks questions about the lexical and conceptual data being amassed by its neighbors, the control states of various model components, and more general issues requiring common sense or knowledge of the physical world.(7) (2) In addition, at each node an expert performs actions to affect the lexical and conceptual contents of the workspaces, the control states of itself, concept bins, and the parser as a whole, and the model's expectations. The current procedural representation of the word expert for "heavy" appears as part (b) of Figure 2. Each word expert process includes three components -- a declarative header, a start node, and a body. The header provides a description of the expert's behavior for purposes of inter-expert constraint forwarding.

(6) An ATN without arbitrarily complex LISP computations on each arc and at each node, that is.

(7) In addition to common sense knowledge of the physical world, this could include information about the plot, characters, or focus of a children's story, or, in a specialized domain such as medical diagnosis [17], could include highly domain specific knowledge.
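The sense net idea introduced above can be made concrete with a small, hedged sketch; the dictionary encoding below is invented, and the questions and sense labels only loosely follow the "heavy" network of Figure 2:

    heavy_sense_net = {
        "start": ("Is the current concept of type 'picture'?",
                  {"yes": "right-view", "no": "LARGE-PHYSICAL-MASS"}),
        "right-view": ("Is the object to my right better described as an artistic object, "
                       "a form of precipitation, or a physical object?",
                       {"art": "SERIOUS-OR-EMOTIONAL",
                        "precipitation": "INTENSE-QUANTITY",
                        "physobj": "LARGE-PHYSICAL-MASS"}),
    }

    def discriminate(net, answer, node="start"):
        """Walk the net, answering each question, until a terminal sense is reached."""
        while node in net:
            question, branches = net[node]
            node = branches[answer(question)]
        return node

    # e.g. "heavy rain": the object to the right is best viewed as precipitation.
    print(discriminate(heavy_sense_net,
                       lambda q: "precipitation" if "right" in q else "yes"))
    # -> INTENSE-QUANTITY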
If sense discrimination by a word expert results in the knowledge that a word to its right, either not yet executed or suspended, must map to a specific sense or conceptual category, then it should constrain it to do so, thus helping it avoid unnecessary processing or fallacious reasoning. Since word experts are represented as processes, constraining an expert consists of altering the pointer to the address at which it expects to continue execution. Through its descriptive header, an expert conditions this activity and insures that it takes place without disastrous consequences. Each node in the body of the expert has a type designated by a letter following the node name, either Q (question), A (action), S (suspend), or T (terminal). By tracing through the question nodes (treating the others as vacuous except for their goto pointers), a sense network for each word expert process can be derived. The graphical framework of a word expert (and thus the questions it asks) represents its principal linguistic task of word sense disambiguation. Each question node has a type, shown following the Q in the node -- MC (multiple choice), C (conditional), YN (yes/no), and PI (possible/impossible). In the example expert for "heavy", node n1 represents a conditional query into the state of the entire parsing process, and node n12 a multiple choice question involving the conceptual nature of the word to "heavy"'s right in the input sentence. Multiple choice questions typically delve into the basic relations among objects and actions in the world. For example, the question asked at node n12 of the "heavy" expert is typical: "Is the object to my right better described as an artistic object, a form of precipitation, or a physical object?" Action nodes in the "heavy" expert perform such tasks as determining the concept bin to which it contributes, and posting expectations for the word to its right. In terms of its side effects, the "heavy" expert is fairly simple. A full account of the word expert representation language will be available next year [12].

Expert Questions

The basic structure of the Word Expert Parser depends principally on the role of individual word experts in affecting (1) each other's actions and (2) the declarative result of computational analysis. Experts affect each other by posting expectations on the central bulletin board, constraining each other, changing control states of model components (most notably themselves), and augmenting data structures in the workspaces.(8) They contribute to the conceptual and schematic result of the parse by contributing object names, descriptions, schemata, and other useful data to the concept level workspace. To determine exactly what contributions to make, i.e., the accurate ones in the particular run-time context at hand, the experts ask questions of various kinds about the processes of the model and the world at large. Four types of questions may be asked by an expert, and whereas some queries can be made in more than one way, the several question types solicit different kinds of information. Some questions require fairly involved inference to be answered adequately, and others demand no more than simple register lookup. This variety corresponds well, in my opinion, with human processing involved in conceptual analysis. Certain contextual clues to meaning are structural; taking advantage of them requires solely knowledge of the state of the parsing process (e.g., "building a noun phrase").
Other clues subtly present themselves through more global evidence, usually having to do with linking together high order information about the specific domain at hand. In story comprehension, this involves the plot, characters, focus of attention, and general social psychology as well as common sense knowledge about the world. Understanding texts dealing with specialized subject matter requires knowledge about that particular subject, other subjects related to it, and of course, common sense. The questions asked by a word expert in arriving at the correct contextual interpretation of a word probe sources of both kinds of information, and take different forms.

(8) The blackboard of the Hearsay speech understanding system [16] is analogous to the entire workspace of the parser, including the word bins, concept bins, and bulletin board.

[Figure 2: Word expert representation. (a) Network representation of the "heavy" expert; (b) process representation of the "heavy" expert in the word expert representation language.]

The explicit representation of control state and structural information facilitates its use in parsing -- conditional and yes/no questions perform simple lookup operations in the PLANNER-like associative data base [18] that stores the workspace data. Questions about the plot of a story or its characters, or common sense questions requiring spatial or temporal simulations, are best phrased as possible/impossible (or yes/no/maybe) questions. Sometimes during sense discrimination, the plausibility of some general fact leads to the pursuit of different information than its implausibility. Such situations occur with enough frequency to justify a special type of question to deal with them.

The Importance of Multiple Choice

Multiple choice questions comprise the central inferential component of word experts. They derive from Rieger's notion that intelligent selection among competing alternatives by relative differencing represents an important aspect of human problem solving [7]. The Word Expert Parser, unlike certain standardized tests, prohibits multiple choice questions from containing a "none of the above" choice. Thus, they demand the most "reasonable" or "consistent" choice of potentially unappealing answers. What does a child (or adult) do when faced with a sentence that seems to state an implausible proposition or reference implausible objects? He surely does his best to make sense of the sentence, no matter what it says. Depending on the context, certain intelligent and literate people create metaphorical interpretations for such sentences.
The word expert approach interprets metaphor, idiom, and "normal" text with the same mechanism. Multiple choice questions make this possible, but answering them may require tremendously complex processing. A substantial knowledge representation formalism based on semantic networks, such as KRL [19], with multiple perspectives, procedural attachment, and intelligent description matching, must be used to represent in a uniform way both general world knowledge and knowledge acquired through textual interpretation. In KRL terms, a multiple choice question such as "Is the object RAIN more like ARTISTIC-OBJECT, PHYSICAL-OBJECT, or PRECIPITATION?" must be answered by appeal to the units representing the four notions involved. Clearly, RAIN can be viewed as a PHYSICAL-OBJECT; much less so as an ARTISTIC-OBJECT. However, in almost all contexts, RAIN is closest conceptually to PRECIPITATION. Thus, this should be the answer. This multiple choice mechanism has many uses in conceptual parsing and full-scale language comprehension, as well as in general problem solving [20]. That any fragment of text (or other non-linguistic sensory input) has some interpretation from the point of view of a particular reader constitutes a fundamental underlying idea of the word expert approach.

Expert Side Effects

Word experts take two kinds of actions -- actions explicitly intended to affect sense discrimination by other experts, and actions to augment the conceptual information that constitutes the result of a parse. Each path through a sense network represents a distinct usage of the modeled word, and at each step of the way, the word expert must update the model to reflect the state of its processing and the extent of its knowledge. The "heavy" expert of Figure 2(b) exhibits several of these actions. Nodes n2 and n5 of this word expert process represent "heavy"'s decision about the concept bin (i.e., conceptual notion) in which it participates. In the first case, it decides to contribute to the same bin as its left neighbor; in the second, it creates a new one, eventually to contain the conceptual data provided by itself and perhaps other experts to its right. At node n10, "heavy" posts its expectations regarding the word to its right on the central bulletin board. When it temporarily suspends execution at node n11, its "suspended" control state description also appears on this tableau. Control state descriptions such as "suspended", "terminated", "attempting pairing" (see above), and "ready" are posted on this bulletin board, which contains a state designation for each expert and concept in the workspace, as well as a description of the parser state as a whole. Under restricted conditions, an expert may affect the state descriptions on this tableau. An expert that has determined its nominal role may, for example, change the state of its concept (the one to which it contributes) to "bounded" or "closed", depending on whether or not all other experts participating in that concept have terminated. Word experts may post expectations on the bulletin board to facilitate handshaking between themselves and subsequently executing neighbors. In the example parse, the "deep" expert expects an entity that it can describe; by saying so in detail, it enables the "pit" expert to terminate successfully on first running, something it would not be able to do otherwise.
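A toy version of such a multiple choice question, answered by relative differencing over a small is-a taxonomy, might look as follows; the taxonomy and the closeness measure are invented stand-ins for the KRL-style units mentioned above:

    PARENTS = {
        "RAIN": ["PRECIPITATION"],
        "PRECIPITATION": ["PHYSICAL-OBJECT"],
        "ARTISTIC-OBJECT": ["PHYSICAL-OBJECT"],
        "PHYSICAL-OBJECT": [],
    }

    def ancestors(concept):
        seen, frontier = set(), [concept]
        while frontier:
            c = frontier.pop()
            if c not in seen:
                seen.add(c)
                frontier.extend(PARENTS.get(c, []))
        return seen

    def multiple_choice(obj, choices):
        """Pick the conceptually closest choice; 'none of the above' is not allowed."""
        return max(choices, key=lambda choice: len(ancestors(obj) & ancestors(choice)))

    print(multiple_choice("RAIN", ["ARTISTIC-OBJECT", "PHYSICAL-OBJECT", "PRECIPITATION"]))
    # -> PRECIPITATION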
The initial execution of a word expert must accomplish certain goals of a structural nature. If the word participates in a noun-noun pair, this must be determined; in either case, the expert must determine the concept bin to which it contributes all of its descriptive data throughout the parse.(9) This concept could either be one that already exists in the workspace or a new one created by the expert at the time of its decision. After deciding on a concept, the principal role of a (content) word expert is to discriminate among the possibly many remaining senses of the word. Note that a good deal of this disambiguation may take place during the initial phase of concept determination. After asking enough questions to discover some piece of conceptual data, this data augments what already exists in the word's concept bin, including declarative structures put there both by itself and by the other lexical participants in that concept. The parse completes when each word expert in the workspace has terminated. At this point, the concept level workspace contains a complete conceptual interpretation of the input text.

(9) An exception arises when an expert creates a default concept bin to represent a conceptual notion referenced in the text but to which no words in the text contribute. The automobile in "Joanie parked." is an example.

Conceptual Case Resolution

Adequate conceptual parsing of input text requires a stage missing from this discussion and constituting the current phase of research -- the attachment of each picture and setting concept (bin) to the appropriate conceptual case of an event concept. Such a mechanism can be viewed in an entirely analogous fashion to the mechanisms just described for performing local disambiguation of word senses. Rather than word experts, however, the experts on this level are conceptual in nature. The concept level thus becomes the main level of activity, and a new level, call it the schema level workspace, turns into the main repository for inferred information. When a concept bin has closed, a concept expert is retrieved from a disk file and initialized. If it is an event concept, its function is to fill its conceptual cases with settings and pictures; if it is a setting or picture, it must determine its schematic role. The activity on this level, therefore, involves higher order processing than sense discrimination, but occurs in just about the same way. The ambiguities involved in mapping known concepts into conceptual case schemata appear identical to those having to do with mapping words into concepts. Discovering that the word "pit" maps in a certain context to the notion of a "fruit pit" requires the same abilities and knowledge as realizing that "the red house" maps in some context to the notion of "a location for smoking pot and listening to records". The implementation of the mechanisms to carry out this next level of inferential disambiguation has already begun. It should be quite clear that this schematic level is by no means the end of the line -- active expert-based plot following and general text understanding fit nicely into the word expert framework and constitute its logical extension.
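Although this stage was still under construction, the intended behaviour can be sketched; the case names and the compatibility test below are invented for illustration only:

    EVENT_CASES = {
        "THROW": {"actor": "person", "object": "physical-object", "location": "setting"},
    }

    def resolve_cases(event, concepts):
        """Greedily attach each closed concept to the first unfilled, compatible case."""
        filled = {}
        for concept in concepts:                       # closed picture/setting bins
            for case, wanted in EVENT_CASES[event].items():
                if case not in filled and wanted in concept["features"]:
                    filled[case] = concept["head"]
                    break
        return filled

    concepts = [
        {"head": "philosopher", "features": {"person", "physical-object"}},
        {"head": "fruit pit",   "features": {"physical-object"}},
        {"head": "deep pit",    "features": {"setting", "location"}},
    ]
    print(resolve_cases("THROW", concepts))
    # -> {'actor': 'philosopher', 'object': 'fruit pit', 'location': 'deep pit'}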
4. Summary and Conclusions

The Word Expert Parser is a theory of organization and control for a conceptual language analyzer. The control environment is characterized by a collection of generator-like coroutines, called word experts, which cooperatively arrive at a conceptual interpretation of an input sentence. Many forms of linguistic and non-linguistic knowledge are available to these experts in performing their task, including control state knowledge and knowledge of the world, and by eliminating all but the most persistent forms of ambiguity, the parser models human processing. This new model of parsing claims a number of theoretical advantages: (1) Its representations of linguistic knowledge reflect the enormous redundancy in natural languages -- without this redundancy in the model, the inter-expert handshaking (seen in many forms in the example parse) would not be possible. (2) The model suggests some interesting approaches to language acquisition. Since much of a word expert's knowledge is encoded in a branching discrimination structure, adding new information about a word involves the addition of a new branch. This branch would be placed in the expert at the point where the contextual clues for disambiguating the new usage differ from those present for a known usage. (3) Idiosyncratic uses of language are easily encoded, since the word expert provides a clear way to do so. These uses are indistinguishable from other uses in their encodings in the model. (4) The parser represents a cognitively plausible model of sequential coroutine-like processing in human language understanding. The organization of linguistic knowledge around the word, rather than the rewrite rule, motivates interesting conjectures about the flow of control in a human language understander.

ACKNOWLEDGEMENTS

I would like to thank Chuck Rieger for his insights, encouragement, and general manner. Many of the ideas presented here Chuck has graciously allowed me to steal. In addition, I thank the following people for helping me with this work through their comments and suggestions: Phil Agre, Milt Grinberg, Phil London, Jim Reggia, Hanan Samet, Randy Trigg, Rich Wood, and Pamela Zave.

REFERENCES

[1] Rieger, C. and S. Small, Word Expert Parsing, Proceedings of the 6th International Joint Conference on Artificial Intelligence, 1979.
[2] Riesbeck, C., Computational Understanding: Analysis of Sentences and Context, AI-Memo 238, Stanford University, 1974.
[3] Riesbeck, C. and R. Schank, Comprehension by Computer: Expectation-based Analysis of Sentences in Context, Research Report 78, Yale University, 1976.
[4] Schank, R., Conceptual Dependency: A Theory of Natural Language Understanding, Cognitive Psychology, vol. 3, no. 4, 1972.
[5] Wilks, Y., Making Preferences More Active, Artificial Intelligence, vol. 11, no. 3, 1978.
[6] Marcus, M., Capturing Linguistic Generalizations in a Parser for English, Proceedings of the 2nd National Conference of the Canadian Society for Computational Studies of Intelligence, 1978.
[7] Rieger, C., The Importance of Multiple Choice, Proceedings of the 2nd Conference on Theoretical Issues in Natural Language Processing, 1978.
[8] Rieger, C., Viewing Parsing as Word Sense Discrimination, A Survey of Linguistic Science, Dingwall (ed.), Greylock Pub.
[9] Rieger, C., Five Aspects of a Full Scale Story Comprehension Model, Associative Networks -- The Representation and Use of Knowledge in Computers, Findler (ed.), Academic Press, 1979.
[10] Rieger, C., An Organization of Knowledge for Problem Solving and Language Comprehension, Artificial Intelligence, vol. 7, no. 2, 1976.
[11] Small, S., Conceptual Language Analysis for Story Comprehension, Technical Report 663, University of Maryland, 1978.
[12] Small, S., Word Experts for Conceptual Language Analysis, Ph.D.
Thesis (forthcoming), University of Maryland, 1980.
[13] McDermott, D. and G. Sussman, The Conniver Reference Manual, AI-Memo 259a, Massachusetts Institute of Technology, 1974.
[14] Lisp Machine Group, LISP Machine Progress Report, AI-Memo 444, Massachusetts Institute of Technology, 1977.
[15] Woods, W., Transition Network Grammars for Natural Language Analysis, Communications of the ACM, vol. 13, no. 10, 1970.
[16] Erman, L. and V. Lesser, A Multi-Level Organization for Problem Solving using Many, Diverse, Cooperating Sources of Knowledge, Proceedings of the 4th International Joint Conference on Artificial Intelligence, 1975.
[17] Reggia, J., Representing and Using Medical Knowledge for the Neurological Localization Problem (First Report of the NEUREX Project), Technical Report 695, University of Maryland, 1978.
[18] Sussman, G., T. Winograd, and E. Charniak, Micro-Planner Reference Manual, AI-Memo 205a, Massachusetts Institute of Technology, 1971.
[19] Bobrow, D. and T. Winograd, An Overview of KRL, A Knowledge Representation Language, Cognitive Science, vol. 1, no. 1, 1977.
[20] London, P., Dependency Networks as a Representation for Modeling in General Problem Solvers, Technical Report 698, University of Maryland, 1978.
Schank/Riesbeck vs. Norman/Rumelhart: What's the Difference?

Marc Eisenstadt
The Open University
Milton Keynes, ENGLAND

This paper explores the fundamental differences between two sentence-parsers developed in the early 1970's: Riesbeck's parser for Schank's 'conceptual dependency' theory (4, 5), and the 'LNR' parser for Norman and Rumelhart's 'active semantic network' theory (3). The Riesbeck parser and the LNR parser share a common goal - that of transforming an input sentence into a canonical form for later use by memory/inference/paraphrase processes. For both parsers, this transformation is the act of 'comprehension', although they appear to go about it in very different ways. Are these differences real or apparent?

Riesbeck's parser is implemented as a production system, in which input text can either satisfy the condition side of any production rule within a packet of currently-active rules, or else interrupt processing by disabling the current packet of rules and enabling ('triggering') a new packet of rules. In operation, the main verb of each segment of text is located, and a pointer to its lexical decomposition (canonical form) is established in memory. The surrounding text, primarily noun phrases, is then systematically mapped onto vacant case frame slots within the memory representation of the decomposed verb. Case information is signposted by a verb-triggered packet of production rules which expects certain classes of entity (e.g. animate recipient) to be encountered in the text. Phrase boundaries are handled by keyword-triggered packets of rules which initiate and terminate the parsing of phrases.

In contrast to this, the LNR parser is implemented as an augmented transition network, in which input text can either satisfy a current expectation or cause backtracking to a point at which an alternative expectation can be satisfied. In operation, input text is mapped onto a surface case frame, which is an n-ary predicate containing a pointer to the appropriate code responsible for decomposing the predicate into canonical form. Case information is signposted by property-list indicators stored in the lexical entry for verbs. These indicators act as signals or flags which are inspected by augmented tests on PUSH NP and PUSH PP arcs in order to decide whether such transitions are to be allowed. Phrase boundaries are handled by the standard ATN PUSH and POP mechanisms, with provision for backtracking if an initially-fulfilled expectation later turns out to have been incorrect.

In order to determine which differences are due to notational conventions, I have implemented versions of both parsers in Kaplan's General Syntactic Processor (GSP) formalism (2), a simple but elegant generalization of ATNs. In GSP terms, Riesbeck's active packets of production rules are grammar states, and each rule is represented as a grammar arc. Rule-packet triggering is handled by storing in the lexicon the GSP code which transfers control to a new grammar state when an interrupt is called for. Each packet is in effect a sub-grammar of the type handled normally by an ATN PUSH and POP. The important difference is that the expensive actions normally associated with PUSH and POP (e.g. saving registers, building structures) only occur after it is safe to perform them. That is, bottom-up interrupts and very cheap 'lookahead' ensure that wasteful backtracking is largely avoided. Riesbeck's verb-triggered packet of rules (i.e.
the entire sub-grammar which is entered after the verb is encountered) is isomorphic to the LNR-style use of lexical flags, which are in effect 'raised' and 'lowered' solely for the benefit of augmented tests on verb-independent arcs. Where Riesbeck depicts a 'satisfied expectation' by deleting the relevant production rule from the currently-active packet, LNR achieves the same effect by using augmented tests on PUSH NP and PUSH PP arcs to determine whether a particular case frame slot has already been filled. Both approaches are handled with equal ease by GSP. In actual practice, Riesbeck's case frame expectations are typically tests for simple selectional restrictions, whereas LNR's case frame expectations are typically tests for the order in which noun phrases are encountered. Prepositions, naturally, are used by both parsers as important case frame clues: Riesbeck has a verb-triggered action alter the interrupt code associated with prepositions so that they 'behave' in precisely the right way; this is isomorphic to LNR's flags which are stored in the lexical entry for a verb and examined by augmented tests on verb-independent prepositional phrase arcs in the grammar. The behaviour of Riesbeck's verb-triggered packets (verb-dependent sub-grammars) is actually independent of when a pointer to the lexical decomposition of the verb is established (i.e. whether a pointer is added as soon as the verb is encountered or whether it is added after the end of the sentence has been reached). Thus, any claims about the possible advantages of 'early' or 'instantaneous' decomposition are moot. Since Riesbeck's cases are filled primarily on the basis of fairly simple selectional restrictions, there is no obvious reason why his parser couldn't have built some other kind of internal representation, based on any one of several linguistic theories of lexical decomposition. Although Riesbeck's decomposition could occur after the entire sentence has been parsed, LNR's decomposition must occur at this point, because it uses a network-matching algorithm to find already-present structures in memory, and relies upon the arguments of the main n-ary predicate of the sentence being as fully specified as possible.

Computationally, the major difference between the two parsers is that Riesbeck's parser uses interrupts to initiate 'safe' PUSHes and POPs to and from sub-grammars, whereas the LNR parser performs 'risky' PUSHes and POPs like any purely top-down parser. Riesbeck's mechanism is potentially very powerful, and the performance of the LNR parser can be improved by allowing this mechanism to be added automatically by the compiler which transforms an LNR augmented transition network into GSP machine code. Each parser can thus be mapped fairly cleanly onto the other, with the only irreconcilable difference between them being the degree to which they rely on verb-dependent selectional restrictions to guide the process of filling in case frames. This characterization of the differences between them, based on implementing them within a common GSP framework, is somewhat surprising, since (a) the differences have nothing to do with 'conceptual dependency' or 'active semantic networks' and (b) the computational difference between them immediately suggests a way to automatically incorporate bottom-up processing into the LNR parser to improve not only its efficiency, but also its psychological plausibility. A GSP implementation of a 'hybrid' version of the two parsers is outlined in (1).
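The computational contrast drawn here can be caricatured in a few lines; the following toy comparison is not GSP code, and every name in it is invented:

    # 'Risky' top-down control: PUSH into every sub-grammar speculatively.
    def risky_pushes(words, subgrammars):
        attempts = 0
        for w in words:
            for name, recognizer in subgrammars.items():
                attempts += 1                  # speculative PUSH; may fail and backtrack
                if recognizer(w):
                    break
        return attempts

    # 'Safe' interrupt-driven control: PUSH only when a cheap lexical trigger fires.
    def safe_pushes(words, subgrammars, trigger):
        attempts = 0
        for w in words:
            name = trigger.get(w)              # bottom-up interrupt from the lexicon
            if name:
                attempts += 1
                subgrammars[name](w)
        return attempts

    subgrammars = {"NP": str.istitle, "PP": lambda w: w in {"to", "from", "into"}}
    trigger = {"John": "NP", "Mary": "NP", "to": "PP"}
    words = "John gave the ball to Mary".split()
    print(risky_pushes(words, subgrammars), safe_pushes(words, subgrammars, trigger))  # 10 3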
REFERENCES

(1) Eisenstadt, M. Alternative parsers for conceptual dependency: getting there is half the fun. Proceedings of the sixth international joint conference on artificial intelligence, Tokyo, 1979.
(2) Kaplan, R.M. A general syntactic processor. In R. Rustin (Ed.) Natural language processing. Englewood Cliffs, N.J.: Prentice-Hall, 1973.
(3) Norman, D.A., Rumelhart, D.E., and the LNR Research Group. Explorations in cognition. San Francisco: W.H. Freeman, 1975.
(4) Riesbeck, C.K. Computational understanding: analysis of sentences and context. Working paper 4, Istituto per gli Studi Semantici e Cognitivi, Castagnola, Switzerland, 1974.
(5) Schank, R.C. Conceptual dependency: a theory of natural language understanding. Cognitive Psychology, vol. 3, no. 4, 1972.
TOWARD A COMPUTATIONAL THEORY OF SPEECH PERCEPTION

Jonathan Allen
Research Laboratory of Electronics & Dept. of Electrical Engineering and Computer Science
Massachusetts Institute of Technology, Cambridge, MA 02139

ABSTRACT

In recent years, a great deal of evidence has been collected which gives substantially increased insight into the nature of human speech perception. It is the author's belief that such data can be effectively used to infer much of the structure of a practical speech recognition system. This paper details a new view of the role of structural constraints within the several structural domains (e.g. articulation, phonetics, phonology, syntax, semantics) that must be utilized to infer the desired percept. Each of the structural domains mentioned above has a substantial "internal theory" describing the constraints within that domain, but there are also many interactions between structural domains which must be considered. Thus words like "incline" and "survey" shift stress with syntactic role, and there is a pragmatic bias for the ambiguous sentence "John called the boy who has smashed his car up." to be interpreted under a strategy that reflects a tendency for local completion of syntactic structures. It is clear, then, that while analysis within a structural domain (e.g. syntactic parsing) can be performed up to a point, interaction with other domains and integration of constraint strengths across these domains is needed for correct perception. The various constraints have differing and changing strengths at different points in an utterance, so that no fixed metric can be used to determine their contribution to the well-formedness of the utterance. At the segmental level, many diverse cues for segmental features have been found. As many as 16 cues mark the voicing distinction, for example. We may think of each of these cues as also representing a constraint, and the strength of the constraint varies with the context. For example, stop closure duration must be interpreted in the context of the local rate of speech, and a given value of closure duration can signify either a voiced or an unvoiced stop depending on the surrounding vowel durations. Thus several cues must be integrated to obtain the perceived segmental feature, and the weights assigned to each cue vary with the local context. From the preceding examples, it is seen that in order to model human speech perception, it is necessary to dynamically integrate a wide variety of constraints. The evidence argues strongly for an active focussed search, whereby the perceptual mechanism knows, as the utterance unfolds, where the strongest constraint strengths are, and uses this reliable information, while ignoring "cues" that are unreliable or non-determining in the immediate context. For example, shadowing experiments have shown that listeners (performing the shadowing task) can restore disrupted words to their original form by using semantic and syntactic context, thus demonstrating the integration process. Furthermore, techniques are now available for analytically finding that information in an input stimulus which can maximally discriminate between two candidate prototypes, so that the perceptual control structure can focus only on such information to make a choice between the candidates. In this paper, we develop a theory for speech recognition which contains the required dynamic integration capability coupled with the ability to focus on a restricted set of cues which has been contextually selected.
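As a purely illustrative sketch of context-dependent cue integration (the cue names, weights, and thresholds below are invented, not taken from the paper), a voicing decision might combine cues whose contributions are judged relative to the local context:

    def voicing_decision(closure_ms, vowel_ms, f0_onset_hz, prevoicing):
        cues = {
            # A short closure relative to the neighbouring vowels favours "voiced".
            "closure": 1.0 if closure_ms / vowel_ms < 0.6 else -1.0,
            "f0_onset": 0.5 if f0_onset_hz < 120 else -0.5,
            "prevoicing": 1.5 if prevoicing else 0.0,
        }
        return ("voiced" if sum(cues.values()) > 0 else "voiceless"), cues

    print(voicing_decision(closure_ms=55, vowel_ms=110, f0_onset_hz=110, prevoicing=True))
    # -> ('voiced', {...})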
The model of speech recognition which we have developed requires, of course, an initial low-level analysis of the speech waveform to get started. We argue from the recent psycholinguistic literature that stressed syllables provide the required entry points. Stressed syllable peaks can be readily located, and use of the phonotactics of segmental distribution within syllables, together with the relatively clear articulation of syllable-initial consonants, allows us to formulate a robust procedure for determining initial segmental "islands", around which further analysis can proceed. In fact, there is evidence to indicate that the human lexicon is organized and accessed via these stressed syllables. The restriction of the original analysis to these stressed syllables can be regarded as another form of focussed search, which in turn leads to additional searches dictated by the relative constraint strengths of the various domains contributing to the percept. We argue that these views are not only consonant with the current knowledge of human speech perception, but form the proper basis for the design of high-performance speech recognition systems.
UNGRAMMATICALITY AND EXTRA-GRAMMATICALITY IN NATURAL LANGUAGE UNDERSTANDING SYSTEMS

Stan C. Kwasny**
The Ohio State University
Columbus, Ohio

Norman K. Sondheimer
Sperry Univac
Blue Bell, Pennsylvania

1. Introduction

Among the components included in Natural Language Understanding (NLU) systems is a grammar which specifies much of the linguistic structure of the utterances that can be expected. However, it is certain that inputs that are ill-formed with respect to the grammar will be received, both because people regularly form ungrammatical utterances and because there are a variety of forms that cannot be readily included in current grammatical models and are hence "extra-grammatical". These might be rejected, but as Wilks stresses, "...understanding requires, at the very least, ... some attempt to interpret, rather than merely reject, what seem to be ill-formed utterances." [WIL76] This paper investigates several language phenomena commonly considered ungrammatical or extra-grammatical and proposes techniques directed at integrating them as much as possible into the conventional grammatical processing performed by NLU systems through Augmented Transition Network (ATN) grammars. For each NLU system, a "normative" grammar is assumed which specifies the structure of well-formed inputs. Rules that are both manually added to the original grammar or automatically constructed during parsing analyze the ill-formed input. The ill-formedness is shown at the completion of a parse by deviance from fully grammatical structures. We have been able to do this processing while preserving the structural characteristics of the original grammar and its inherent efficiency. Some of the phenomena discussed have been considered previously in particular NLU systems, see for example the ellipsis handling in LIFER [HEN77]. Some techniques similar to ours have been used for parsing, see for example the conjunction mechanism in LUNAR [WOO73]. On the linguistic side, Chomsky [CHO64] and Katz [KAT64], among others, have considered the treatment of ungrammaticality in Transformational Grammar theories. The study closest to ours is that of Weischedel and Black [WEI79]. The present study is distinguished by the range of phenomena considered, its structural and efficiency goals, and the inclusion of the techniques proposed within one implementation. This paper looks at these problems, proposes mechanisms aimed at solving the problems, and describes how these mechanisms are used. At the end, some extensions are suggested. Unless otherwise noted, all ideas have been tested through implementation. A more detailed and extended discussion of all points may be found in Kwasny [KWA79].

** Current Address: Computer Science Department, Indiana University, Bloomington, Indiana

II. Language Phenomena

Success in handling ungrammatical and extra-grammatical input depends on two factors. The first is the identification of types of ill-formedness and the patterns they follow. The second is the relating of ill-formed input to the parsing path of a grammatical input the user intends. This section introduces the types of ill-formedness we have studied, and discusses their relationship to grammatical structures in terms of ATN grammars.

II.1 Co-Occurrence Violations

Our first class of errors can be connected to co-occurrence restrictions within a sentence. There are many occasions in a sentence where two parts or more must agree (* indicates an ill-formed or ungrammatical sentence):

*Draw a circles.
*I will stay from now under midnight.

The errors in the above involve coordination between the underlined words. The first example illustrates simple agreement problems. The second involves a complicated relation between at least the three underlined terms. Such phenomena do occur naturally. For example, Shores [SHO77] analyzes fifty-six freshman English papers written by Black college students and reveals patterns of nonstandard usage ranging from uninflected plurals, possessives, and third person singulars to overinflection (use of inappropriate endings). For co-occurrence violations, the blocks that keep inputs from being parsed as the user intended arise from a failure of a test on an arc or the failure to satisfy an arc type restriction, e.g., failure of a word to be in the correct category. The essential block in the first example would likely occur on an agreement test on an arc accepting a noun. The essential blockage in the second example is likely to come from failure of the arc testing the final preposition.

II.2 Ellipsis and Extraneous Terms

In handling ellipsis, the most relevant distinction to make is between contextual and telegraphic ellipsis. Contextual ellipsis occurs when a form only makes proper sense in the context of other sentences. For example, the form

*President Carter has.

seems ungrammatical without the preceding question form

Who has a daughter named Amy? President Carter has.

Telegraphic ellipsis, on the other hand, occurs when a form only makes proper sense in a particular situation. For example, the forms

3 chairs no waiting (sign in barber shop)
Yanks split (headline in sports section)
Profit margins for each product (query submitted to a NLU system)

are cases of telegraphic ellipsis with the situation noted in parentheses. The final example is from an experimental study of NLU for management information which indicated that such forms must be considered [MAL75]. Another type of ungrammaticality related to ellipsis occurs when the user puts unnecessary words or phrases in an utterance. The reason for an extra word may be a change of intention in the middle of an utterance, an oversight, or simply for emphasis. For example,

*Draw a line with from here to there.
*List prices of single unit prices for 72 and 73.

The second example comes from Malhotra [MAL75]. The best way to see the errors in terms of the ATN is to think of the user as trying to complete a path through the grammar, but having produced an input that has too many or too few forms necessary to traverse all arcs.

II.3 Conjunction

Conjunction is an extremely common phenomenon, but it is seldom directly treated in a grammar. We have considered several types of conjunction. Simple forms of conjunction occur most frequently, as in

John loves Mary and hates Sue.

Gapping occurs when internal segments of the second conjunct are missing, as in

John loves Mary and Mary John.

The list form of conjunction occurs when more than two elements are joined in a single phrase, as in

John loves Mary, Sue, Nancy, and Bill.

Correlative conjunction occurs in sentences to coordinate the joining of constituents, as in

John both loves and hates Sue.

The reason conjuncts are generally left out of grammars is that they can appear in so many places that inclusion would dramatically increase the size of the grammar. The same argument applies to the ungrammatical phenomena.
Since they allow so much variation compared to grammatical forms, including them with existing techniques would dramatically increase the size of a grammar. Further, there is a real distinction in terms of completeness and clarity of intent between grammatical and ungrammatical forms. Hence we feel justified in suggesting special techniques for their treatment.

III. Proposed Mechanisms and How They Apply

The following presentation of our techniques assumes an understanding of the ATN model. The techniques are applied to the language phenomena discussed in the previous section.

III.1 Relaxation Techniques

The first two methods described are relaxation methods which allow the successful traversal of ATN arcs that might not otherwise be traversed. During parsing, whenever an arc cannot be taken, a check is made to see if some form of relaxation can apply. If it can, then a backtrack point is created which includes the relaxed version of the arc. These alternatives are not considered until after all possible grammatical paths have been attempted, thereby insuring that grammatical inputs are still handled correctly. Relaxation of previously relaxed arcs is also possible. Two methods of relaxation have been investigated. Our first method involves relaxing a test on an arc, similar to the method used by Weischedel in [WEI79]. Test relaxation occurs when the test portion of an arc contains a relaxable predicate and the test fails. Two methods of test relaxation have been identified and implemented based on predicate type. Predicates can be designated by the grammar writer as either absolutely violable, in which case the opposite value of the predicate (determined by the LISP function NOT applied to the predicate) is substituted for the predicate during relaxation, or conditionally violable, in which case a substitute predicate is provided. For example, consider the following to be a test that fails:

(AND (INFLECTING V) (INTRANS V))

If the predicate INFLECTING was declared absolutely violable and its use in this test returned the value NIL, then the negation of (INFLECTING V) would replace it in the test, creating a new arc with the test:

(AND T (INTRANS V))

If INTRANS were conditionally violable with the substitute predicate TRANS, then the following test would appear on the new arc:

(AND (INFLECTING V) (TRANS V))

Whenever more than one test in a failing arc is violable, all possible single relaxations are attempted independently. Absolutely violable predicates can be permitted in cases where the test describes some superficial consistency checking or where the test's failure or success doesn't have a direct effect on meaning, while conditionally violable predicates apply to predicates which must be relaxed cautiously or else loss of meaning may result. Chomsky discusses the notion of organizing word categories hierarchically in developing his ideas on degrees of grammaticalness. We have applied and extended these ideas in our second method of relaxation, called category relaxation. In this method, the grammar writer produces, along with the grammar, a hierarchy describing the relationship among words, categories, and phrase types which is utilized by the relaxation mechanism to construct relaxed versions of arcs that have failed. When an arc fails because of an arc type failure (i.e., because a particular word, category, or phrase was not found), a new arc (or arcs) may be created according to the description of the word, category, or phrase in the hierarchy.
Typically, PUSH arcs will relax to PUSH arcs, CAT arcs to CAT or PUSH arcs, and WRD or MEM arcs to CAT arcs. Consider, for example, the syntactic category hierarchy for pronouns shown in Figure 1. For this example, the category relaxation mechanism would allow the relaxation of PERSONAL pronouns to include the category PRONOUN. The arc produced from category relaxation of PERSONAL pronouns also includes the subcategories REFLEXIVE and DEMONSTRATIVE in order to expand the scope of terms during relaxation. As with test relaxation, successive relaxations could occur. For both methods of relaxation, "deviance notes" are generated which describe the nature of the relaxation in each case. Where multiple types or multiple levels of relaxation occur, a note is generated for each of these. The entire list of deviance notes accompanies the final structure produced by the parser. In this way, the final structure is marked as deviant and the nature of the deviance is available for use by other components of the understanding system. In our implementation, test relaxation has been fully implemented, while category relaxation has been implemented for all cases except those involving PUSH arcs. Such an implementation is anticipated, but requires a modification to our backtracking algorithm.

III.2 Co-Occurrence and Relaxation

The solution being proposed to handle forms that are deviant because of co-occurrence violations centers around the use of relaxation methods. Where simple tests exist within a grammar to filter out unacceptable forms of the type noted above, these tests may be relaxed to allow the acceptance of these forms. This doesn't eliminate the need for such tests, since these tests help in disambiguation and provide a means by which sentences are marked as having violated certain rules. For co-occurrence violations, the point in the grammar where parsing becomes blocked is often exactly where the test or category violation occurs. An arc at that point is being attempted and fails due to a failure of the co-occurrence test or categorization requirements. Relaxation is then applied and an alternative generated which may be explored at a later point via backtracking. For example, the sentence

*John love Mary

shows a disagreement between the subject (John) and the verb (love). Most probably this would show up during parsing when an arc is attempted which is expecting the verb of the sentence. The test would fail and the traversal would not be allowed. At that point, an ungrammatical alternative is created for later backtracking to consider.
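A small sketch of this relaxation step (the predicate names and the deviance-note format are invented; the real system operates on ATN arcs rather than Python lists) might look like this:

    ABSOLUTELY_VIOLABLE = {"number_agrees"}                    # relax by dropping the test
    CONDITIONALLY_VIOLABLE = {"intransitive": "transitive"}    # relax by substitution

    def relaxed_alternatives(arc_tests):
        """Yield (relaxed tests, deviance note) pairs, one per single relaxation."""
        for i, pred in enumerate(arc_tests):
            if pred in ABSOLUTELY_VIOLABLE:
                yield arc_tests[:i] + arc_tests[i+1:], "relaxed test " + pred
            elif pred in CONDITIONALLY_VIOLABLE:
                sub = CONDITIONALLY_VIOLABLE[pred]
                yield arc_tests[:i] + [sub] + arc_tests[i+1:], "substituted " + sub + " for " + pred

    # e.g. the verb arc blocked by "*John love Mary" / "*Draw a circles":
    for tests, note in relaxed_alternatives(["number_agrees", "intransitive"]):
        print(tests, "--", note)
    # ['intransitive'] -- relaxed test number_agrees
    # ['number_agrees', 'transitive'] -- substituted transitive for intransitive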
III.3 Patterns and the Pattern Arc

In this section, relaxation techniques, as applied to the grammar itself, are introduced through the use of patterns and pattern-matching algorithms. Other systems have used patterns for parsing. We have devised a powerful method of integrating, within the ATN formalism, patterns which are flexible and useful. In our current formulation, which we have implemented and are now testing, a pattern is a linear sequence of ATN arcs which is matched against the input string. A pattern arc (PAT) has been added to the ATN formalism whose form is similar to that of other arcs:

(PAT <pat spec> <test> <act>* <term>)

The pattern specification (<pat spec>) is defined as:

<pat spec>  ::= (<patt> <mode>*)
<patt>      ::= (<p arc>*) | <pat name> | >
<mode>      ::= UNANCHOR | OPTIONAL | SKIP
<p arc>     ::= <arc> | > <arc>
<pat name>  ::= user-assigned pattern name

The pattern (<patt>) is either the name of a pattern, a ">", or a list of ATN arcs, each of which may be preceded by the symbol ">", while the pattern mode (<mode>) can be any of the keywords UNANCHOR, OPTIONAL, or SKIP. These are discussed below. To refer to patterns by name, a dictionary of patterns is supported. A dictionary of arcs is also supported, allowing the referencing of arcs by name as well. Further, named arcs are defined as macros, allowing the dictionary and the grammar to be substantially reduced in size.

THE PATTERN MATCHER

Pattern matching proceeds by matching each arc in the pattern against the input string, but is affected by the chosen "mode" of matching. Since the individual component arcs are, in a sense, complex patterns, the ATN interpreter can be considered part of the matching algorithm as well. In arcs within patterns, explicit transfer to a new state is ignored and the next arc attempted on success is the one following in the pattern. An arc in a pattern prefaced by ">" can be considered optional, if the OPTIONAL mode has been selected to activate this feature. When this is done, the matching algorithm still attempts to match optional arcs, but may ignore them. A pattern unanchoring capability is activated by specifying the mode UNANCHOR. In this mode, patterns are permitted to skip words prior to matching. Finally, selection of the SKIP mode results in words being ignored between matches of the arcs within a pattern. This is a generalization of the UNANCHOR mode. Pattern matching again results in deviance notes. For patterns, they contain information necessary to determine how matching succeeded.

SOURCE OF PATTERNS

An automatic pattern generation mechanism has been implemented using the trace of the current execution path to produce a pattern. This is invoked by using a ">" as the pattern name. Patterns produced in this fashion contain only those arcs traversed at the current level of recursion in the network, although we are planning to implement a generalization of this in which PUSH arcs can be automatically replaced by their subnetwork paths. Each arc in an automatic pattern is marked as optional. Patterns can also be constructed dynamically in precisely the same way grammatical structures are built using BUILDQ. The vehicle by which this is accomplished is discussed next.

AUTOMATIC PRODUCTION OF ARCS

Pattern arcs enter the grammar in two ways. They are manually written into the grammar in those cases where the ungrammaticalities are common, and they are added to the grammar automatically in those cases where the ungrammaticality is dependent on context. Pattern arcs produced dynamically enter the grammar through one of two devices. They may be constructed as needed by special macro arcs or they may be constructed for future use through an expectation mechanism. As the expectation-based parsing efforts clearly show, syntactic elements, especially words, contain important clues on processing. Indeed, we also have found it useful to make the ATN mechanism more "active" by allowing it to produce new arcs based on such clues.
To achieve this, the CAT, MEM, TST, and WRD arcs have been generalized and four new "macro" arcs, known as CAT*, MEM*, TST*, and WRD*, have been added to the ATN formalism. These are similar in every way to their counterparts, except that as a final action, instead of indicating the state to which the traversal leads, a new arc is constructed dynamically and immediately executed. The difference in the form that the new arc takes is seen in the following pair, where <creat act> is used to define the dynamic arc:

(CAT <cat> <test> <act>* <term>)
(CAT* <cat> <test> <act>* <creat act>)

Arcs computed by macro arcs can be of any type permitted by the ATN, but one of the most useful arcs to compute in this manner is the PAT arc discussed above.

EXPECTATIONS

The macro arc forces immediate execution of an arc. Arcs may also be computed and temporarily added to the grammar for later execution through an "expectation" mechanism. Expectations are performed as actions within arcs (analogous to the HOLD action for parsing structures) or as actions elsewhere in the NLU system (e.g., during generation when particular types of responses can be foreseen). Two forms are allowed:

(EXPECT <creat act> <state>)
(EXPECT <creat act>)

In the first case, the arc created is bound to a state as specified. When later processing leads to that state, the expected arc will be attempted as one alternative at that state. In the second case, where no state is specified, the effect is to attempt the arc at every state visited during the parse. The range of an expectation produced during parsing is ordinarily limited to a single sentence, with the arc disappearing after it has been used; however, the start state, S*, is reserved for expectations intended to be active at the beginning of the next sentence. These will disappear in turn at the end of processing for that sentence.

III.4 Patterns, Ellipsis, and Extraneous Forms

The Pattern arc is proposed as the primary mechanism for handling ellipsis and extraneous forms. A Pattern arc can be seen as capturing a single path through a network. The matcher gives some freedom in how that path relates to a string. We propose that the appropriate parsing path through a network relates to an elliptical sentence or one with extra words in the same way. With contextual ellipsis, the relationship will be in having some of the arcs on the correct path not satisfied. In Pattern arcs, these will be represented by arcs marked as optional. With contextual ellipsis, dialogue context will provide the defaults for the missing components. With Pattern arcs, the deviance notes will show what was left out, and the other components in the NLU system will be responsible for supplying the values. The source of patterns for contextual ellipsis is important. In LIFER [HEN77], the previous user input can be seen as a pattern for elliptical processing of the current input. The automatic pattern generator developed here, along with the expectation mechanism, will capture this level of processing. But with the ability to construct arbitrary patterns and to add them to the grammar from other components of the NLU system, our approach can accomplish much more. For example, a question generation routine could add an expectation of a yes/no answer in front of a transformed rephrasing of a question, as in

Did Amy kiss anyone? Yes, Jimmy was kissed.
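The expectation mechanism can be pictured as a table of dynamically built arcs keyed by the state at which they should be tried; the sketch below (class and field names invented) mirrors the yes/no-answer example just given:

    class Expectations:
        def __init__(self):
            self.by_state = {}            # state name (or None for "every state") -> arcs

        def expect(self, arc, state=None):
            self.by_state.setdefault(state, []).append(arc)

        def alternatives_at(self, state):
            """Dynamic arcs to try at this state, in addition to the static grammar."""
            return self.by_state.get(state, []) + self.by_state.get(None, [])

    exp = Expectations()
    # After generating a yes/no question, expect a bare answer pattern at the
    # start state of the next sentence.
    exp.expect({"type": "PAT", "pattern": ["yes-or-no", ">rest-of-answer"]}, state="S*")
    print(exp.alternatives_at("S*"))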
Patterns for telegraphic ellipsis will have to be added to the grammar manually. Generally, patterns of usage must be identified, say in a study like that of Malhotra, so that appropriate patterns can be constructed. Patterns for extraneous forms will also be added in advance. These will either use the unanchor option in order to skip false starts, or dynamically produced patterns to catch repetitions for emphasis. In general, only a limited number of these patterns should be required. The value of the pattern mechanism here, especially in the case of telegraphic ellipsis, will be in connecting the ungrammatical to grammatical forms.

III.5 Conjunction and Macro Arcs

Pattern arcs are also proposed as the primary mechanism for handling conjunction. The rationale for this is the often noted connection between conjunction and ellipsis; see for example Halliday and Hasan [HAL76]. This is clear with gapping, as in the following, where the parentheses show the missing component:

John loves Mary and Mary (loves) John.

But it also can be seen with other forms, as in

John loves Mary and (John) hates Sue.
John loves Mary, (John loves) Sue, (John loves) Nancy, and (John loves) Bill.

Whenever a conjunction is seen, a pattern is developed from the already identified elements and matched against the remaining segments of input. The heuristics for deciding from which level to produce the pattern force the most general interpretation in order to encourage an elliptical reading. All of the forms of conjunction described above are treated through a globally defined set of "conjunction arcs" (some restricted cases, such as "and" following "between", have the conjunction built into the grammar). In general, this set will be made up of macro arcs which compute Pattern arcs. The automatic pattern mechanism is heavily used. With simple conjunctions, the rightmost elements in the patterns are matched. Internal elements in patterns are skipped with gapping. The list form of conjunction can also be handled through the careful construction of dynamic patterns which are then expected at a later point. Correlatives are treated similarly, with expectations based on the dynamic building of patterns. There are a number of details in our proposal which will not be presented. There are also visible limits. It is instructive to compare the proposal to the SYSCONJ facility of Woods [WOO73]. It treats conjunction as showing alternative ways of continuing a sentence. This allows for sentences such as

He drove his car through and broke a plate glass window.

which at best we will accept with a misleading deviance note. However, it cannot handle the obvious elliptical cases, such as gapping, or the tightly constrained cases, such as correlatives. We expect to continue investigating the pattern approach.
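The gapping treatment just described can be caricatured as follows; the category lexicon and the greedy matcher are invented, and real Pattern arcs would of course carry full arc information rather than bare category labels:

    def category(word):
        return {"John": "NP", "Mary": "NP", "Sue": "NP", "loves": "V", "hates": "V"}.get(word)

    def conjunct_pattern(parsed_words):
        """Pattern generated from the path already traversed; every element optional."""
        return [category(w) for w in parsed_words]

    def match_conjunct(pattern, rest):
        """Which pattern slots does the second conjunct fill?  The rest are elided."""
        filled, i = {}, 0
        for slot, cat in enumerate(pattern):
            if i < len(rest) and category(rest[i]) == cat:
                filled[slot] = rest[i]
                i += 1
        return filled if i == len(rest) else None

    pattern = conjunct_pattern(["John", "loves", "Mary"])     # ['NP', 'V', 'NP']
    print(match_conjunct(pattern, ["hates", "Sue"]))   # {1: 'hates', 2: 'Sue'}  (elided subject)
    print(match_conjunct(pattern, ["Mary", "John"]))   # {0: 'Mary', 2: 'John'}  (gapped verb)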
Currently, we use a method also used by Weischedel and Black [WEI79] of selecting the alternative with the longest path length.

IV. Conclusion and Open Questions

These results are significant, we believe, because they extend the state of the art in several ways. Most obvious are the following:
- The use of the category hierarchy to handle arc type failures;
- The use of the pattern mechanism to allow for contextual ellipsis and gapping;
- More generally, the use of patterns to allow for many sorts of ellipsis and conjunctions; and
- Finally, the orchestration of all of the techniques in one coherent system, where, because all grammatical alternatives are tried first and no modifications are made to the original grammar, its inherent efficiency and structure are preserved.

IV.1 Open Problems

Various questions for further research have arisen during the course of this work. The most important of these are discussed here. Better control must be exercised over the selection of viable alternatives when ungrammatical possibilities are being attempted. The longest-path heuristic is somewhat weak. The process that decides this would need to take into consideration, among other things, whether to allow relaxation of a criterion applied to the subject or to the verb in a case where the subject and verb do not agree. The current path length heuristic would always relax the verb, which is clearly not always correct. No consideration has been given to the possible connection of one error with another. In some cases, one error can lead to or affect another. Several other types of ill-formedness have not been considered in this study, for example, idioms, metaphors, incorrect word order, run-together sentences, incorrect punctuation, misspelling, and presuppositional failure. Either little is known about these processes or they have been studied elsewhere independently. In either case, work remains to be done.

V. Acknowledgments

We wish to acknowledge the comments of Ralph Weischedel and Marc Fogel on previous drafts of this paper. Although we would like to blame them, any shortcomings are clearly our own fault.

VI. Bibliography

[CHO64] Chomsky, N., "Degrees of Grammaticalness," in [FOD64], 384-389.
[FOD64] Fodor, J. A. and J. J. Katz, The Structure of Language: Readings in the Philosophy of Language, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[HAL76] Halliday, M. A. K. and R. Hasan, Cohesion in English, Longman, London, 1976.
[HEN77] Hendrix, G. G., "The LIFER Manual," Technical Note 138, Artificial Intelligence Center, Stanford Research Institute, Menlo Park, California, February 1977.
[KAT64] Katz, J. J., "Semi-Sentences," in [FOD64], 400-416.
[KWA79] Kwasny, S., "Treatment of Ungrammatical and Extragrammatical Phenomena in Natural Language Understanding Systems," PhD dissertation (forthcoming), Ohio State University, 1979.
[MAL75] Malhotra, A., "Design Criteria for a Knowledge-Based English Language System for Management: An Experimental Analysis," MAC TR-146, M.I.T., Cambridge, MA, February 1975.
[SHO77] Shores, D. L., "Black English and Black Attitudes," in Papers in Language Variation, D. L. Shores and C. P. Hines (Eds.), The University of Alabama Press, University, Alabama, 1977.
[WEI79] Weischedel, R. M., and J. Black, "Responding to Potentially Unparseable Sentences," manuscript, Department of Computer and Information Sciences, University of Delaware, Newark, Delaware, 1979.
[WIL76] Wilks, Y., "Natural Language Understanding Systems Within the A.I. Paradigm: A Survey," American Journal of Computational Linguistics, 1976.
[WOO73] Woods, W. A., "An Experimental Parsing System for Transition Network Grammars," in Natural Language Processing, R. Rustin (Ed.), Algorithmics Press, 1973.

Figure 1. A Category Hierarchy (branching into categories such as PRONOUN and REFLEXIVE, with leaves such as "he", "she", "yourself", "this", and "that")
GENERALIZED AUGMENTED TRANSITION NETWORK GRAMMARS FOR GENERATION FROM SEMANTIC NETWORKS

Stuart C. Shapiro
Department of Computer Science, SUNY at Buffalo

1. INTRODUCTION

Augmented transition network (ATN) grammars have, since their development by Woods [7; 8], become the most used method of describing grammars for natural language understanding and question answering systems. The advantages of the ATN notation have been summarized as "1) perspicuity, 2) generative power, 3) efficiency of representation, 4) the ability to capture linguistic regularities and generalities, and 5) efficiency of operation" [1, p.191]. The usual method of utilizing an ATN grammar in a natural language system is to provide an interpreter which can take any ATN grammar, a lexicon, and a sentence as data and produce either a parse of a sentence or a message that the sentence does not conform to the grammar. A compiler has been written [2; 3] which takes an ATN grammar as input and produces a specialized parser for that grammar, but in this paper we will presume that an interpreter is being used. A particular ATN grammar may be viewed as a program written in the ATN language. The program takes a sentence, a linear sequence of symbols, as input, and produces as output a parse which is usually a parse tree (often represented by a LISP S-expression) or some "knowledge representation" such as a semantic network. The operation of the program depends on the interpreter being used and the particular program (grammar), as well as on the input (sentence) being processed. Several methods have been described for using ATN grammars for sentence generation. One method [1, p.235] is to replace the usual interpreter by a generation interpreter which can take an ATN grammar written for parsing and use it to produce random sentences conforming to the grammar. This is useful for testing and debugging the grammar. Another method [5] uses a modified interpreter to generate sentences from a semantic network. In this method, an ATN register is initialized to hold a node of the semantic network and the input to the grammar is a linear string of symbols providing a pattern of the sentence to be generated. Another method [4] also generates sentences from a semantic network. In this method, input to the grammar is the semantic network itself. That is, instead of successive words of a surface sentence or successive symbols of a linear sentence pattern being scanned as the ATN grammar is traversed by the interpreter, different nodes of the semantic network are scanned. The grammar controls the syntax of the generated sentence based on the structural properties of the semantic network and the information contained therein. It was intended that a single ATN interpreter could be used both for standard ATN parsing and for generation based on this last method. However, a special interpreter was written for generation grammars of the type described in [4], and, indeed, the definition of the ATN formalism given in that paper, though based on the standard ATN formalism, was inconsistent enough with the standard notation that a single interpreter could not be used. This paper reports the results of work carried out to remove those inconsistencies. A generalization of the ATN formalism has been derived which allows a single interpreter to be used for both parsing and generating grammars. In fact, parsing and generating grammars can be sub-networks of each other.
For example, an ATN grammar can be constructed so that the "parse" of a natural language question is the natural language statement which answers it, interaction with representation and inference routines being done on arcs along the way. (This material is based on work supported in part by the National Science Foundation under Grant #MCS78-02274.) The new formalism is a strict generalization in the sense that it interprets all old ATN grammars as having the same semantics (carrying out the same actions and producing the same parses) as before.

2. GENERATION FROM A SEMANTIC NETWORK -- BRIEF OVERVIEW

In our view, each node of a semantic network represents a concept. The goal of the generator is, given a node, to express the concept represented by that node in a natural language surface string. The syntactic category of the surface string is determined by the grammar, which can include tests of the structure of the semantic network connected to the node. In order to express the concept, it is often necessary to include in the string substrings which express the concepts represented by adjacent nodes. For example, if a node represents a fact to be expressed as a statement, part of the statement may be a noun phrase expressing the concept represented by the node connected to the original node by an AGENT case arc. This can be done by a recursive call to a section of the grammar in charge of building noun phrases. This section will be passed the adjacent node. When it finishes, the original statement section of the grammar will continue adding additional substrings to the growing statement. In ATN grammars written for parsing, a recursive push does not change the input symbol being examined, but when the original level continues, parsing continues at a different symbol. In the generation approach we use, a recursive push often involves a change in the semantic node being examined, and the original level continues with the original node. This difference is a major motivation of some of the generalizations to the ATN formalism discussed below. The other major motivation is that, in parsing a string of symbols, the "next" symbol is well defined, but in "parsing" a network, "next" must be explicitly specified.

3. THE GENERALIZATION

The following sub-sections show the generalized syntax of the ATN formalism, and assume a knowledge of the standard formalism ([1] is an excellent introduction). Syntactic structures already familiar to ATN users, but not discussed here, remain unchanged. Parentheses and terms in upper case letters are terminal symbols. Lower case terms in angle brackets are non-terminals. Terms enclosed in square brackets are optional. Terms followed by "*" may occur zero or more times in succession. To avoid confusion, in the remainder of this section we will underline the name of the * register.

3.1 TERMINAL ACTIONS

Successful traversal of an ATN arc might or might not consume an input symbol. When parsing, such consumption normally occurs; when generating it normally does not, but if it does, the next symbol (semantic node) must be specified. To allow for these choices, we have returned to the technique of [6] of having two terminal actions, TO and JUMP, and have added an optional second argument to TO. The syntax is:

    (TO <state> [<form>])
    (JUMP <state>)

Both cause the parser to enter the given state. JUMP never consumes the input symbol; TO always does. If the <form> is absent in the TO action, the next symbol to be scanned will be the next one in the input buffer. If <form> is present, its value will be the next symbol to be scanned. All traditional ATN arcs except JUMP and POP end with a terminal action. The explanation given for the replacement of the JUMP terminal action by the JUMP arc was that, "since POP, PUSH and VIR arcs never advance the input, to decide whether or not an arc advanced the input required knowledge of both the arc type and termination action. The introduction of the JUMP arc ... means that the input advancement is a function of the arc type alone." [2] That our reintroduction of the JUMP terminal action does not bring back the confusion is explained below in Section 4.
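The effect of the two terminal actions on the input buffer (discussed further in Section 4) can be pictured in a few lines of Python; this sketch is not the paper's interpreter, and the buffer here is just a list whose last element is the top.

    class Config:
        """Parser state plus the input buffer (top of buffer = end of list)."""

        def __init__(self, buffer):
            self.state = None
            self.buffer = list(buffer)

        def jump(self, state):
            """(JUMP <state>): enter the state without consuming the input symbol."""
            self.state = state

        def to(self, state, form_value=None):
            """(TO <state> [<form>]): consume the current symbol; if a form is
            given, its value becomes the next symbol to be scanned."""
            self.state = state
            self.buffer.pop()
            if form_value is not None:
                self.buffer.append(form_value)

    c = Config(["NODE-3", "NODE-7"])       # NODE-7 is on top
    c.jump("Q1")
    print(c.buffer)                         # ['NODE-3', 'NODE-7'] -- unchanged
    c.to("Q2")
    print(c.buffer)                         # ['NODE-3'] -- the top symbol was consumed
    c.to("Q3", form_value="NODE-9")
    print(c.buffer)                         # ['NODE-9'] -- consumed, and the form's value is next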
3.2 ARCS

We retain a JUMP arc as well as a JUMP terminal action. The JUMP arc provides a place to make an arbitrary test and perform some actions without consuming an input symbol. We need such an arc that does consume its input symbol, but TST is not adequate since it, like CAT, is really a bundle of arcs, one for each lexical entry of the scanned symbol, should the latter be lexically ambiguous. A semantic node, however, does not have a lexical entry. We therefore introduce a TO arc:

    (TO (<state> [<form>]) <test> <action>*)

If <test> is successful, the <action>s are performed and transfer is made to <state>. The input symbol is consumed. The next symbol to be scanned is the value of <form> if it is present, or the next symbol in the input buffer if <form> is missing. The PUSH arc makes two assumptions: 1) the first symbol to be scanned in the subnetwork is the current contents of the * register; 2) the current input symbol will be consumed by the subnetwork, so the contents of * can be replaced by the value returned by the subnetwork. We need an arc that causes a recursive call to a subnetwork, but makes neither of these two assumptions, so we introduce the CALL arc:

    (CALL <state> <form> <test> <preaction or action>* <register> <action>* <terminal action>)

where <preaction or action> is <preaction> or <action>. If the <test> is successful, all the <action>s of <preaction or action> are performed and a recursive call is made to the state <state>, where the next symbol to be scanned is the value of <form> and registers are initialized by the <preaction>s. If the subnetwork succeeds, its value is placed into <register> and the <action>s and <terminal action> are performed. Just as the normal TO terminal action is the generalized TO terminal action with a default form, the PUSH arc (which we retain) is the CALL arc with the following defaults: <form> is *; the <preaction or action>s are only <preaction>s; <register> is *. The only form which must be added is

    (GETA <arc> [<node form>])

where <node form> is a form which evaluates to a semantic node. If absent, <node form> defaults to *. The value of GETA is the node at the end of the arc labelled <arc> from the specified node, or a list of such nodes if there are more than one.

3.3 TESTS, PREACTIONS, ETC.

The generalization of the ATN formalism to one which allows for writing grammars which generate surface strings from semantic networks, yet can be interpreted by the same interpreter which handles parsing grammars, requires no changes other than the ones described above. Of course, each implementation of an ATN interpreter contains slight differences in the set of tests and actions implemented beyond the basic ones.
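The GETA form amounts to following a named arc from a node of the semantic network. A minimal Python sketch of that lookup follows; the dictionary encoding of the network and the node labels are purely illustrative, and SNePS itself represents its networks quite differently.

    def geta(network, arc, node):
        """Return the node(s) at the end of the arcs labelled `arc` leaving `node`.

        `network` maps (node, arc-label) pairs to lists of target nodes.  The
        value is a single node if there is exactly one target, a list if there
        are several, and None if there are none -- mirroring the GETA form.
        """
        targets = network.get((node, arc), [])
        if not targets:
            return None
        return targets[0] if len(targets) == 1 else list(targets)

    # A toy fragment: an event node EV1 with AGENT, VERB and OBJECT case arcs.
    net = {
        ("EV1", "AGENT"):  ["DOG1"],
        ("EV1", "VERB"):   ["KISS"],
        ("EV1", "OBJECT"): ["LUCY"],
    }
    print(geta(net, "AGENT", "EV1"))    # DOG1
    print(geta(net, "COLOR", "EV1"))    # None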
4. INPUT BUFFER MANAGEMENT

Input to the ATN parser can be thought of as being the contents of a stack, called the input buffer. If the input is a string of words, the first word will be at the top of the input buffer and successive words will be in successively deeper positions of the input buffer. If the input is a graph, the input buffer might contain only a single node of the graph. On entering an arc, the * register is set to the top element of the input buffer, which must not be empty. The only exceptions to this are the VIR and POP arcs. VIR sets * to an element of the HOLD register. POP leaves * undefined, since * is always the element to be accounted for by the current arc, and a POP arc is not trying to account for any element. The input buffer is not changed between the time a PUSH arc is entered and the time an arc emanating from the state pushed to is entered, so the contents of * on the latter arc will be the same as on the former. A CALL arc is allowed to specify the contents of * on the arcs of the called state. This is accomplished by replacing the top element of the input buffer by that value before transfer to the called state. If the value is a list of elements, we push each element individually onto the input buffer. This makes it particularly easy to loop through a set of nodes, each of which will contribute the same syntactic form to the growing sentence (such as a string of adjectives). On an arc (except for POP), i.e. during evaluation of the test and the acts, the contents of * and the top element of the input buffer are the same. This requires special processing for VIR, PUSH, and CALL arcs. After setting *, a VIR arc pushes the contents of * onto the input buffer. When a PUSH arc resumes, and the lower level has successfully returned a value, the value is placed into * and also pushed onto the input buffer. When a CALL resumes, and the lower level has successfully returned a value, the value is placed into the specified register, and the contents of * is pushed onto the input buffer. The specified register might or might not be *. In either case the contents of * and the top of the input buffer are the same. There are two possible terminal acts, JUMP and TO. JUMP does not affect the input buffer, so the contents of * will be the same on the successor arcs (except for POP and VIR) as at the end of the current arc. TO pops the input buffer, but if provided with an optional form, also pushes the value of that form onto the input buffer. POPping from the top level is only legal if the input buffer is empty. POPping from any level should indicate that a constituent has been accounted for. Accounting for a constituent should entail removing it from the input buffer. From this we conclude that every path within a level from an initial state to a POP arc should contain at least one TO transfer, and in most cases, it is proper to transfer TO rather than to JUMP to a state that has a POP arc emanating from it. TO will be the terminal act for most VIR and PUSH arcs. In any ATN interpreter which abides by this discussion, advancement of the input is a function of the terminal action alone, in the sense that at any state JUMPed to, the top of the input buffer will be the last value of *, and at any state gone TO it will not be.

5. THE LEXICON

Parsing and generating require a lexicon -- a file of words giving syntactic categories, features, and inflectional forms for irregularly inflected words. Parsing and generating require different information, yet we wish to avoid duplication as much as possible. During parsing, morphological analysis is performed.
The analyzer is given an inflected form, must segment it, find the stem in the lexicon and modify the lexical entry of the stem according to its analysis of the original form. Irregularly inflected forms must have their own entries in the lexicon. An entry in the lex- icon may be lexically ambiguous, so each entry must be associated with a list of one or more lexical feature lists. Each such list, whether stored in the lexicon or constructed by the morphological analyzer, must in- clude a syntactic category and a stem, which serves as a link to the semantic network, as well as other fea- tures such as transitivity for a verb. In the semantic network, sc~e nodes are associated with lexical entries. During generation, these entries, along with other information from the semantic network, are used by a morphological synthesizer to construct an inflected word. We assume that all such entries are unambiguous stems, and so contain only a single lexical feature list. This feature list must contain any ir- regularly inflected forms. In summary, a single lexicon may be used for both parsing and generating under the following conditions. An unambiguous stem can be used for both parsing and generating if its one lexlcal feature list contains features required for both operations. An ambiguous lexical entry will only be used during parsing. Each of its lexlcal feature lists ,met contain a unique but arbitrary ,stem,' for connection to the semantic net- work and for holding the lexical information required for generation. Every lexical feature list used for generating must contain the proper natural language spe!1~ng of its stem as well as any irregularly in- flected forms. Lexical entries for irregularly in- flected forms will only be used during parsing. For the purposes of this paper, it should be irrelevant whether the "stems,, connected to the semantic network are actual surface words llke "give,,, deeper sememes such as that underlying both ,,give, and ,,take", or primitives such as .ATRANS". 6. EXAMPLE Figure I shOWs an example interaction using the SNePS Semantic Network Processing ~ystem [5] in which I/O is controlled by a parsing-generating ATN grammar. Lines begun by "**" are user's input, which are all calls to the function named ,, : ". This function passes its argument llst as the input buffer for a parse to begin in state S. The form popped by the top level ATN ned- worm is then printed, folluwed by the CPU time in milliseconds. (The system is partly c~lled, partly interpreted LISP on a CYB~ 173. The ATN gra,mer is interpreted. ) Figure 2 shores the grammar in abbrevi- ated graphical form, and Figure 4 gives the details of each arc. The parsing network, beginning at state S~ is included for completeness, but the reader unfamiliar with SMePSUL, the S~ePS User Language, [5] is not ex- pected to understand its details. The first arc in the network is a PUSH to the parsing network. This network determines whether the inlmat is a statement (type D) or a question (type Q). If a statement, the network builds a SNAPS network repre- senting the information contained in the sentence and pops a semantic node representing the fact con- rained in the main clause. If the input is a question the parsing network calls the SNePS deduction routines (DEDUCE) to find the answer, and pops the semantic node representing that (no actual deduction is re- quired in this example). Figure 3 shews the complete SNePS network built during this example. 
Nodes MTh- M85 were built by the first statement,nodes M89 and MgOby the second. When the state RESPOND is reached, the input buffer contains the SNAPS node popped by the parsing network. The generating network then builds a sentence. The first two sentences were generated from node M85 before M89 end MgO were built. The third sentence was gener- ated from MgO, and the fourth from M85 again. Since the voice (VC) register is LIFTRed from the parsing network, the generated sentence has the same voice as the input sentence (see Figure I). Of particular note is the sub-network at state PRED which analyzes the proper tense for the generated sentence. For brevity, only simple tenses are included here, but the more complicated tenses presented in [4] can be handled in a similar manner. Also of interest is the subnetwork at state ADJS which generates a string of adjectives which are not already scheduled to be in the sentence. (Compare the third and fourth generated sentences of Figure 1.) 7. CONCLUSIONS A generalization of the ATN formalism has been pre- sented which allows grammars to be written for gener- ating surface sentences from semantic networks. The generalization has involved: adding an optional argument to the TO terminal act; reintroducing the JUMP terminal act; introducing a TO arc similar to the JUMP arc; introducing a CALL arc which is a generaliza- tion of the PUSH arc; introducing a GETA form; clari- fying the management of the input buffer. The benefits of these few changes are that parsing and generating gramnars may be written in the same familiar notation, may be interpreted (or compiled) by a single program, and may use each other in the same parser-generator network grammar. R~ENCES [1] Bates, Nadeleine. The theory and practice of aug- mented transition network grammars. In L. Bloc, ed. Natural Language Communication with Ccm~uters, Springev- ~'erlag, Berlin, 197U, 192-259. [2] Burton, R.R. Semantic grammar, an engineering technique for constructing natural language understand- ing systems. BBN Report No. 3h53, Bolt Beranek and Newman, Inc., Cambridge, MA., December 1976. [3] Burton, Richard R. and Woods, ~. A. A compiling system for augmented transition networks. Prtprints of COLING 76z The Lnternational Conference on Computation- al Linguistics, Ottawa, June 1976. [4] Shapiro, Stuart C. Generation as parsing from a network into a linear string. AJCL Microfiche 33 (1975) ~5-62. [5] Shapiro, Stuart C. The SNoPS semantic network processing system. In N.Y. Findler, ed., Associative Networks: Representation and Use of KnowledKe by Com- puters, Academic Press, New York, I~79, 17~-203. [6] ~1~ew, R. and Slocum, J. Generating e~gllsh discot~'se from e~tic networks. CACN ~, 10 (October 1972), 8~-905. 27 [7] Woods, W.A. Transition natwcrk ~smuars for ~.~(z A DOG KISSED YOUNG LUCY) natural langua@s ana~TSlSo CACM I~, 10 (October 1970), (I UND~STAND THAT A DOG KISSED YOUNG LUCY) 591 ...606. 3769 MSECS [8] Woods, W.A. An experimental parsing system for #~(, WHO KISS~ LUCY) transition network Rrsmmaz~. In Ro Rns~Ln, ed., Nat- (A DOG KIS3~ YOUNG LUCY) u~al LanRua~e P,-ocessin~. Algorlthmlcs Press, Mew~o~, 2714 MSEC3 1973, 111-15~. ~(, LUCY IS SWEET) (I ~D~L~TAND THAT YOUNG LUCT IS SWEET) 2127 MSECS #,~( z WHO WAS KISSED ~ A DOG) (SWEET YOUNG LUCY WAS KISSED BY A raG) 3OOh MSZCS Figure I. Example Interaction ~SH SP J ~ CALLNQ~3R J )(~ CALL NP J ~) CALLPRED J~.~ ADJS J CALL NP TO CALL PAST TO CAT V TO ~ ~ ..... ~ _ J~ ~WRD BY TO PUSH gNP CAT ADJ TO ~ Figure 2. 
A ?arsL~-(~nerating Grammar Terminal acta are tnd:Lcated by "J" or "TO" Figure 3. Samnt, ic Hetwoz.tc Build by ~ent, encea of Figure 1 28 (S (PUSH SP T (JUMP RESPOND))) (RESPO~ (JeW G} (Z~ (OKrR TrPZ) 'D) (SKrR ST~INO '(I UtmmSTAND THAT))) (av~ G} (za (G~.'m ~PZ) ,~))) (O (JUMP ~ (AND (GE~A OBJECT) (OVERLAP (GETR VC) 'PASS)) (SErR ~ (O~A OBJECT))) (JUMP @$ (AND (O~A AGENT) (DISJOINT (OK"HI VC) ,PASS)) (SErR SUBJ (OK"rA AO~T)) (SErR VC 'ACT)) (~ ~ (OK'PA WHICH) (SEI'R 5~IBJ (GErA WHICH)) (SETR VC 'ACT))) (os (cALL NUmR SUSa T NUmR (szm m~z .) (JUMP ore))) (081 (CaLL NP SUBJ T (S~Im DONE) (SENDR NUMBR) Rm (ADDR STRING REO) (JUMP SgB))) (SVB (CALL PRED * T (S~DR NUMBR) (S~#ER VC) (SENIR VB (OR (OKRA LEX (GETA VERB)) 'BE)) REG (AIER STRING PEG) (Ju~ smo~a))) (SUROBJ (CALL NP (OKRA AGENT) (AND GETA AGO'r) (OVERLAP VC 'PASS)) (SENDR DONE) * (ADDR STRING 'BY *) (TO ~D)) (CALL NP (OKRA OBJECT) (AnD (OKRA OBJECT) (OVmLAP VO 'ACT)) (S~Xm DONE) * (ADIR Sm~O *) (TO ram)) (CaLL NP (GETA ADJ) (OEPA ADJ) * (ADDR STRING *) (TO ~D)) (TO (roD) T)) (z~ (POP smiNo T)) (NUMBR (TO (NUMBRI) (OR (OETA SUB-) (OKRA SUP-) (OKRA CLASS-)) (SKTR NUM~ 'FL)) (TO (NLR~RI) (NOT (OR (GE~A SUB-) (OKRA SUP-) (OKRA CLASS-))) (SETR NUMBR 'SING}))) (NU~RI (POP NUMSR T)) (PRED (CALL PAST (OKRA E'f~) T T~SE (TO O~VB)) (CALL ~ (OKRA 5"r~) T TENSE (TO GE~qVB)) (TO (G~-NVB) T (SKRR TENSE 'PRES))) (G}~ (IOP (V~{BIZE (G}EI~ NUMBR) (G}E~I~ TENSE) (GEI~ VC) (G}m VB)) T)) (PAST (TO (PASTEND) (OVmLAP * *NOW)) (TO (PAST (G}ETA BEFORE)) T)) (PASTmD (POP 'PAST T)) (FUTR (TO (ZUTRZ~) (ovmLAp. ~ow)) (TO (rUT~ (GETA Arrm)) T)) ( ~ (POP ' ~ T)) (NP (TO (roD) (G}KRA LEX) (SE%~ STRING} (WHDIZE (G}ETR ~rb'Fd~R) (G}KRA IF, I[)))) (at.e N~A (~ (OKRA NANED-) (~ZSJOI~T (OKRA N~d~)~X~aZ))) (JUMP NPMA (AND (OKRA MEMBER-) (DISJOINT (OKRA MEMBER-) DONE)))) (trP~A (CALl. ADJS (OKRA WHICH-) (G}KrA WHICH-) (SE~ DONE) RZO (ADIR ETRINO Rm) (JUMP ~N)) (JUMP ~P~ T)) (~ (TO ~m) (~. STRI.G} (VaCaTE (G}KRR ~m'~) (OKRA ;2X (OZ~A rt~MZ (OKRA ~))))))) (~Pm (CALL A~S (OZn WHICH-) (OnA WHZC.-) (S~DS m~Z) Rm (aam s'miNo 'A zm) (JUMP ~)) (~ ~ T (ADDR STRING} 'A))) (NPM (CALL NP (GETA CLASS (OKRA M~SER-)) T (S~T~R DONE) REG (AD~R STRING} REG) (TO roD))) (ADJS (CALL NP (GETA ADJ) (DISJOINT * DONE) (S~DR DONE) * (ADDR STRING *) (TO ADJS)) (TO (A~JS) T) (raP STRING T)) (sP (w~ WHO T (SKrR TYPE 'Q) (LIFTS TYPE) (szm sVSa ~X (To v)) (maSH NPP T (sz~mR net ,D) (SETR n'PZ 'D) (Un~ n~Z) (sz'm susa .) (To v))) (v (CaT v T (szm vs (FmmREurm LZX (+(OKrR *)))) (SKrR TNS (OKrZ Z~SZ)) (W COMPL))) (C(~L (CAT V (AND (GETF PPRT) (OVmLAP (GETR VB) (GETA I~X- 'BE))) (SKTR OBJ (OKTR SUBJ)) (SETR SIBJ NIL) (SKrR VC 'PASS) (szm ~ (FINmPaUZU~ ~ (~(ozm .)))) (To sv)) (CaT ADJ (OVERlaP (ore VB) (OETA LEX- 'BE)) (SKrR ADJ (FINDORBUILD LEX (~(GETR *)))) (TO SVO)) (JUMP SV T)) (SV (JUMP 0 (EQ (OETR TNS) 'FRES) (SErR STM (BUILD BEI~ORE *NOW (BUILD AFTra *NOW) - ETM))) (ame o (zQ (GZ'm T.S) 'PAS'r) (SZ~ STM (BUrLD Sm'ORZ (B,ZLD sm~oaz .Now) - KrM)))) (0 (WRD BY (EQ (O~ VC) 'PASS) (TO PAO)) (~SH ~P r (sm~'m n, Pz) (szm oBJ .) 
(LZ~ VC) (TO SVO))) (PAO (PUS~ NPP T (S~]~R TYPE) (SETR SUBJ *) (LIFTR VC) (TO SVO))) (~ (raP (BU~.n AG~ (÷(OETR ~J)) VERB (+(OE'I~R ~))OBJECT (~(Gm OBJ))ST~2{E.'(f(OETR S'rM)) ~ *~TH) (zQ (ozm T~PZ 'D)) (rap (~AL (BU~ (•mmcz AOZtrr + v~ + OSJmT +) s~mJ w o~)) (zQ (ozm TrPz) ,Q))) (SVC (POp (EVAL (BIHIIX~ (FINDORBUILD WHICH + AIIJ +) SUBJ ADJ)) (~ (GKTR T3[PE) 'D)) (POP (EVAL (B~ (DEDUCE WHICH + ADJ +) S~J ~)) (EQ (OEI'R TYPE) 'Q))) (~ (~n~ A T (sm~ ~ T) (To ~PDKr)) (~ NPDET T)) (~nZT (CA~ Am T (HOLD (P~m,SU~ ~X (,(ozm .)))) (m ~)) (CAT N (AND (GETR INDEF) (EQ (OE'i~ TYPE) 'D)) (sin ~ (BOND Mmsm- (~u'~ c~ass (ziNmPa~LD ~x (*(oz'm .)))))) (TO re,A)) (CAT N (AND (OETR ]~qDEF) (EQ (OETR TI'PE) 'Q)) (SKrR ~ (FIND M~B~R- (DEDUCE M~ER %Y CLASS (TBUILD LEX (+(OKTR *)))))) (TO ICPA)) (CAT NPR T (SETR NH (FINDORBUILD NAMED- (FINDORBUILD NAME (F~UILD LEX (+(GETR *)))))) (TO ~Z))) (~A Orm ~ T (~AL (B~r~ (FZ~rmREuI~m W~CH. Aa)J *) ~H)) (TO ~PA)) (POP ~ T)) Figure 4. Details of the Parser~2en~rator ~t~mork 29
KNOWLEDGE ORGANIZATION AND APPLICATION: BRIEF COMMENTS ON PAPERS IN THE SESSION

Aravind K. Joshi
Department of Computer and Information Science
The Moore School
University of Pennsylvania, Philadelphia, PA 19104

Comments: My brief comments on the papers in this session are based on the abstracts available to me and not on the complete papers. Hence, it is quite possible that some of the comments may turn out to be inappropriate or else they have already been taken care of in the full texts. In a couple of cases, I had the benefit of reading some earlier, longer related reports, which were very helpful. All the papers (except by Sangster) deal with either knowledge representation, particular types of knowledge to be represented, or how certain types of knowledge are to be used. Brachman describes a lattice-like structured inheritance network (KLONE) as a language for explicit representation of natural language conceptual information. Multiple descriptions can be represented. How does the facility differ from a similar one in KRL? Belief representations appear to be only implicit. Quantification is handled through a set of "structural descriptions." It is not clear how negation is handled. The main application is for the command and control of advanced graphics manipulators through natural language. Is there an implicit claim here that the KLONE representations are suitable both for natural language concepts as well as for those in the visual domain? Sowa also presents a network-like representation (conceptual graphs). It is a representation that is apparently based on some ideas of Hintikka on incomplete but extensible models called surface models. Sowa also uses some ideas of graph grammars. It is not clear how multiple descriptions and beliefs can be represented in this framework. Perhaps the detailed paper will clarify some of these issues. This paper does not describe any application. Sangster's paper is not concerned directly with knowledge representation. It is concerned with complete and partial matching procedures, especially for determining whether a particular instance satisfies the criteria for membership in a particular class. Matching procedures, especially partial matching procedures, are highly relevant to the use of any knowledge representation. Partial matching procedures have received considerable attention in the rule-based systems. This does not appear to be the case for other representations. Moore and Mann do not deal with knowledge representation per se, but rather with the generation of natural language texts from a given knowledge representation. They are more concerned with the problem of generating a text (which includes questions of ordering among sentences, their scopes, etc.) which satisfies a goal held by the system, describing a (cognitive) state of the reader. The need for resorting to multi-sentence structures arises from the fact that, for achieving a desired state of the reader, a single sentence may not be adequate. McDonald's work on generation appears to be relevant, but it is not mentioned by the authors. Burnstein is primarily concerned with knowledge about (physical) objects and its role in the comprehension process. The interest here is the need for a particular type of knowledge rather than the representation scheme itself, which he takes to be that of Schank. Knowledge about objects, their normal uses, and the kinds of actions they are normally involved in is necessary for interpretation of sentences dealing with objects.
In sentence (1) John opened the bottle and poured the wine, Burnstein's analysis indicates that the inference is driven largely by our knowledge about open bottles. In this instance, this need not be the case. We have the same situation in John took the bottle out of the refrigerator and poured the wine. The inference here is dependent on knowing something about wine bottles and their normal uses; knowledge of the fact that the bottle was open is not necessary. Given the normal reading of (1), (1') John opened the bottle and poured the wine out of it will be judged as redundant; deletion of the redundant material in (1') gives (1). Deletion of redundant and recoverable material is a device that language exploits. The recoverability here, however, is dependent on the knowledge about the objects and their normal uses. If a non-normal reading of (1) is intended (e.g., the wine being poured into the bottle), then (1") John opened the bottle and poured the wine into it is not felt to be redundant. This suggests that a prediction that a normal reading is intended can be made (not, of course, with complete certainty) by recognizing that we are dealing with reduced forms. (Of course, context can always override such a prediction.) Some further questions are: Knowledge about objects is essential for comprehension. The paper does not discuss, however, how this knowledge and its particular representation helps in controlling the inferences in a uniform manner. Is there any relationship of this work to the common sense algorithms of Rieger? Lebowitz is also concerned with a particular type of knowledge rather than a representation scheme. Knowledge about the reader's purpose is essential for comprehension. The role played by the "interest" of the reader is also explored. The application is for the comprehension of newspaper stories. There is considerable work beyond the indicated references in the analysis of goal-directed discourse, but this has not been mentioned. Finally, there are other issues which are important for knowledge representation but which have been either left out or only peripherally mentioned by some of the authors. Some of these are as follows. (i) A representation has to be adequate to support the desired inference. But this is not enough. It is also important to know how inferences are made (e.g., with what ease or difficulty). The interaction of the nature of a representation and the structure of the sentence or discourse will make certain inferences go through more easily than others. (ii) Knowledge has to be updated. Again, the nature of the representation would make certain kinds of updates or modifications easy and others difficult. (iii) The previous issue also has a bearing on the relationship between knowledge representation and knowledge acquisition. At some level, these two aspects have to be viewed together.
Taxonomy, Descriptions, and Individuals in Natural Language Understanding Ronald J. Brachman Bolt Beralmek and Newman Inc. KLONE is a general-purpose language for representing conceptual information. Several of its pr~linent features -- semantically clean inheritance of structured descriptions, taxonomic classification of gpneric knowledge, intensional structures for functional roles (including the possibility of multiple fillers), and procedural attachment (with automatic invocation) make it particularly useful in computer-based natural language understanding. We have implemented a prototype natural language system that uses KLONE extensively in several facets of its operation. This paper describes the system and points out some of the benefits of using KLONE for representation in natural language processing. Our system is the beneficiary of two kinds of advantage from KLONE. First, the taxonomic character of the structured inheritance net facilitates the processin~ involved in analyzing and responding to an utterance. In particular, (I) it helps guide parsing by ruling out semantically meaningless paths, (2) it provides a general way of organizing and invoking semantic interpretation rules, and (3) it allows algorithmic determination of equivalent sets of entities for certain plan-recognition inferences. Second, KLONE's representational structure captures some of the subtleties of natural lanKuage expression. That is, it provides a general way of representing exactly the quantificational import of a sentence without over- committing the interpretation to scope or multiplicity not overtly specified. The paper first presents a brief overall description of the natural language system. Then, prior to describing how we use KLONE in the system, we discuss some of the language's features at a general level. Finally we look in detail at how KLONE affords us the advantages listed above. 1. THE TASK AND THE SYSTEM Generally speaking, we want to provide a natural interface to a subsystem that knows how to present conceptual information intelligently (on a bit-map dis- play) in this case the Augmented Transition Network (ATN) grammar from bae LUNAR system [5]. The informa- tion presentation subsystem allows flexible specifica- tion of coordinate system mappings, including rectangu- lar windows, from parts of the ATN onto a sequence of "view surfaces". Object types can be assigned arbitrary presentation forms (graphic or alphanumeric), which can be modified in particular cases. Parts of the grammar are displayed according to standing orders and special requests about shape and projection. Our task is to command and control the intelligent graphics subsystem through natural language. For example, a sample dialogue with the system might include this sequence of utterances: (I) Show me the clause level network. [System displays states and arcs of the S/ network] (2) Show me S/NP. [System highlights state S/NP] preverbal states] (4) No. I want to be able to see S/AUX. [System "backs off" display so as to include state S/AUK] At the same time, we would like to ask factual questions about the states, arcs, etc. of the ATN (e.g. "What are the conditions on this <user points> arc?"). Ouestions and commands addressed to the system typically (I) make use of elements of the preceding dialogue, (2) can be expressed indirectly so that the surface form does not reflect the real intent, and (3) given our graphical presentation system, can make reference to a shared non- linguistic context. 
The issues of anaphora, (indirect) speech acts, and deixis are thus of principal concern. The natural language system is organized as illustrated in Figure I a. The user sits at a bit-map terminal mi~'ti,l' ot~v~l + /T~X~ ~p~r . . . . . . . . . . . . . . . , ,J'--/ Figure I. System structure (highlighting types of knowledge involved). equipped with a keyboard and a pointing device. Typed input from the keyboard (possibly interspersed with coordinates from the pointing device) is analyzed by a version of the RU_~S System [2] ~ an ATN-based increment- al parser that is closely coupled with a "case-frame dictionary". In our system, this dictionary is embodied in a syntactic taxonomy represented in KLONE. The parser produces a KLONE representation of the syntactic structure of an utterance. Incrementally along with its production, this syntactic structure triggers the creation of an interpretation. The interpretation structure -- the literal (sentential) semantic content of the utterance -- is then processed by a discourse expert that attempts to determine what was really meant. In this process, anaphoric expressions must be resolved and indirect speech acts recognized. Finally, on the basis of what is determined to be the intended ~orce of (3) Focus in on the preverbal constituents. [System shifts scale and centers the display on the a Dashed elements of the figure are proposed but not yet implemented. 33 the utterance, the discourse component decides how the system should respond. It plans its own speech or display actions, and passes them off to the language generation component (not yet implemented) or display expert. Some of these operations will be discussed in more detail in Section 3. 2. THE REPRESENTATION LANGUAGE Before we look at details of the system's use Of KLONE, we briefly sketch out some of its cogent features. )CLONE is a unifom language for the explicit representation of natural language conceptual information based on the idea of structured inheritance networks [3]. The principal representational elements of ~ONE are Concepts, of which there are two major types -- Generic and Individual. Generic Concepts are arranged in an inheritance structure, expressing long-term generic knowledge as a taxonomy a. A single Generic Concept is a description template, from which individual descriptions (in the form of Individual Concepts) are formed. Generic Concepts can be built as specializations of other Generic Concepts, to which they are attached by inheritance Cables. These Cables form the backbone of the network (a Generic Concept can have many "superConcepts" as well as many "subConcepts"). They carry structured descriptions from a Concept to its subConcepts. KLONE Concepts are highly structured objects. A subConoept inherits a structured definition from its parent aa and can modify it in a number of structurally consistent ways. The main elements of the structure are Roles, which express relationships between a Concept and other closely assooiatnd Concepts (i.e. its properties, parts, etc.). Roles themselves have structure, including desoriptlons of potential f i l l e r s eee, modality lnfomation, and names aaee. There are basically two kinds of Roles in )O.ONE: RoleSets and IRoles. RoleSets have potentially many fillers e~.g. the officer Role aeaea of a particular COMPANY would be filled once for each officer). A RoleSet has as part of its internal structure a restriction on the number of possible fillers it can have in any particular instance. 
A RoleSet on an Individual Concept stands for the particular set of fillers for that particular concept. An IRole (for Instance Role) appears on an Individual Concept to express the binding of a particular value to the Role it plays in that Concept. (There would be exactly one IRole for each officer slot of a particular company, resardless of the actual number of people playing those roles.) There are several inter-Role relationships in KLONE, which relate the Roles of a Concept to those of s sdperConcept. Such relationships are carried in the inheritance Cables mentioned earlier. They include - restriction (of f i l l e r description and number); e.g. that a particular kind of COMPANY will have exactly three officers, all ot whom must be over ~5; this is a relationship between RoleSets, in which the more restricted RoleSet has all of the properties of the one it restricts, with its own local restrictions added conjunctively; - differentiation (of a Role into subRoles); e.g. differentiating the officers of a COMPANY into president, vice-president, etc.; this is also a relationship between two RoleSets carrying inheritance -- the more specific Roles inherit all properties of the parent Role except for the number restriction; - particularization (of a RoleSet for an Individual Concept); e.g. the officers of BBN are all COLLEGE-GRADUATEs; - satisfaction (binding of a particular filler description into a particular Role in an Individual Concept); e.g. the president of BBN is STEVE-LEW: this iS the relationship between an IRole and its parent RoleSet. Figure 2 illustrates the use of Cables and the structure t The network is a partial ordering with a topmost element -- the Concept of an INDIVIDUAL -- below which all other Concepts appear. There is no "least" element in the net, whose fringe is composed of Individual Concepts not related to each other. e, This inheritance implies inter alia that, if STATE is a subConcept of ATN-CONSTITUENT, then any particular state is by definition also an ATN constituent. • ee These limitations on the fom of particular fillers are called "Value Restrictions" (V/R's). If more than one V/R is applicable at a given Role, the restrictions are taken conjunctively. • ,ae Names are not used by the system in any way. They are merely conveniences for the user. ,mess In the text that follow, Roles will be indicated as underlined names and Concepts will be indicated by all upper case expressions. Figure 2. A piece of a KLONE taxonomy. of Concepts in a piece of the KLONE taxon¢fay for the ATN grammar, In this figure, Concepts are presented as ellipses (Individual Concepts are shaded), Roles as small squares (IRoles are filled in), and Cables as double-lined arrovJ. The most general Concept, ATN-CONSTITUENT, has two subConcepts -- STATE and ARC. These each inherit the general properties of ATN constituents, namely, each is known to have a 34 displayForm associated with it. The subnetwork below ARC expresses the classification of the various types of arcs in the ATN and how their conceptual structures vary. For example, a CONNECTING-ARC has a nextState (the state in which the transition leaves the parsing process), while for POP-ARCs the term is not meaningful (i.e. there is no nextState Role). Links that connect the Roles of more specific Concepts with corresponding Roles in their parent Concepts are considered to travel through the appropriate Cables. Finally, the structure of an Individual Concept is illustrated by CATARC#0117. 
Each IRole expresses the filling of a Role inherited from the hierarchy above -- because CATARC#0117 is a CAT-ARC, it has a category; because it is also a CONNECTING-ARC, it has a nextState, etc. The structure of a Concept is completed by its set of Structural Descriptions (SD's). These express how the Roles of the Concept interrelate via the use of parameterized versions ("ParalndividJals") of other Concepts in the network to describe quantified relations between the ultimate fillers of the Concept's Roles. The quantification is expressed in terms of set mappings between the RoleSet3 of a C~ncept, thereby quantifying over their sets of fillers. In addition to quantified relations between potential R~le fi]lers, simple relations like subset and get equality can be expressed with a special kind of SD ~:alled a "RoleValueMap" (e.g. the relation that "the object of the precondition of a SEE is the same as the object ~f its effect"). SD's are inherited through cable~ and are particularized in a manner similar to that of Roles. There is one important feature of KLONE that I would like to point out, although it is not yet used in the natural language system. The language carefully distinguishes between purely descriptional structure and assertions about coreference, existence, etc. All of the structure mentioned above (Concepts, Roles, SD's and Cables) is definitional. A separate construct called a Nexus is a LJsed as a locus of coreference for Individual Concepts. One expresses coreference of description relative t~ a Context by placing a Nexus in that Context and attaching to it Individual Concepts considered to be coreferential. AI] assertions are made relative to a Context, and thus do not affect the (descriptive) taxonomy of' generic knowledge. We anticipate that Nexuses will be important in reasoning about particu- lars, answering questions (especially in deciding the appropriate form for an answer), and resolving anaphoric expressions, and that Contexts will be of use in reasoning about hypotheticals, beliefs, and wants. The final feature of KLONE relevant to our particular application is the ahility to attach procedures and data to structures in the network. The attached procedure mechanism is implemented in a very general way. Proce- dures are attached to k'LONE entities by "interpretive hooks" (ihooks), which specify the set of situations in which they are to be triggered. An interpreter function operating on a KLONE entity causes the invocation of all procedures inherited by or directly attached to that entity by thooks whose situations match the intent of that f.~nction. Situations include things like "Individuate", "Modify", "Create", "Remove", etc. In addition to a general situation, an ihook specifies when in the executinn of the interpreter function it is to be invoked (PRE-, POST-, or WHEN-). 3. USE OF KLONE IN THE NATURAL LANGUAGE SYSTEM The previous section described the features of KLONE in general terms. Here we illustrate how they facilitate the performance of our natural language system. (Figure I above sketched the places within the system of the variou~ KLONE knowledge bases discussed here.) We will discuss the use of a syntactic taxonomy to constrain parsing and index semantic interpretation rules, and structures used in the syntactic/discourse interface to express the literal semantic content of an utterance. The parser uses KLONE to describe potential syntactic structures. 
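As a rough, much-simplified illustration of structured inheritance (a Python sketch invented for this purpose, not the KLONE implementation; it omits SDs, IRoles, Nexuses, Contexts, and attached procedures entirely), a Generic Concept can be modeled as a node that merges the Roles inherited through its Cables with its own local restrictions, added conjunctively.

    class Role:
        def __init__(self, name, value_restrictions=(), number=None):
            self.name = name
            self.value_restrictions = list(value_restrictions)  # Concepts each filler must satisfy
            self.number = number                                 # restriction on how many fillers

    class Concept:
        """A toy stand-in for a KLONE Generic Concept in an inheritance net."""

        def __init__(self, name, supers=(), local_roles=()):
            self.name = name
            self.supers = list(supers)                # Cables to superConcepts
            self.local_roles = {r.name: r for r in local_roles}

        def roles(self):
            """Roles inherited through Cables, with local restrictions added conjunctively."""
            merged = {}
            for sup in self.supers:
                merged.update(sup.roles())
            for name, role in self.local_roles.items():
                if name in merged:
                    parent = merged[name]
                    merged[name] = Role(name,
                                        parent.value_restrictions + role.value_restrictions,
                                        role.number or parent.number)
                else:
                    merged[name] = role
            return merged

    # ATN-CONSTITUENT has a displayForm; ARC adds nothing new; CAT-ARC adds a category.
    atn_constituent = Concept("ATN-CONSTITUENT", local_roles=[Role("displayForm")])
    arc = Concept("ARC", supers=[atn_constituent])
    cat_arc = Concept("CAT-ARC", supers=[arc], local_roles=[Role("category")])

    assert set(cat_arc.roles()) == {"displayForm", "category"}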
A taxonomy of syntactic constituent descriptions, with C~ncepts like PHRASE, NOUN-PHRASE, LOCATION-PP, and PERSON-WORD, is used to express how phrases are built from their constituents. The taxonomy also serves as a discrimination net, allowing common features of constituent types to be expressed in a single place, and distinguishing features to cause branching into separate subnets. Two benefits accrue from this organization of knowledge. First, shallow semantic constraints are expressed in the Roles and SD's of Concepts like LOCATION-PP. For example, the prepObject )f a LOCATION-PP must be a PLACE-NOUN. A description of "on AI" (as in "book on AI") as a LOCATION-PP c~Id not be constructed since AI does not satisfy the value restriction for the head role. Such constraints help rule out mislead in 8 parse paths, in the manner ~f a 3emantic grammar [4], by refusing to construct semantically anomalous constituent descriptions. In conj~..tion with the general (ATN) grammar of English, this is a powerful guidance mechanism which helps parsing proceed close to deterministically [2). Second, the syntactic taxonomy serves as a structure on which to hang semantic projection rules. Since the taxonomy is an inheritance structure, the description of a given syntactic constituent inherits all semantic interpretation rules appropriate for each of the more general constituent types that it specializes, and can have its own special-purpose rules as well. In the example above, simply by virtue of its placement in the taxonomy, the Concept for "on AI" would inherit rules relevant to PP's in general and to SUBJECT-PP's in particular, but not those appropriate to LOCATION-PP's. Interpretation per se is achieved using the attached procedure facility, with semantic projection rules expressed as functions attached to Roles of the syntac- tic Concepts. The functions specify how to translate pieces of syntactic structure into "deeper" Concepts and Roles. For example, the subject of a SHOW-PHRASE might map into the a~ent of a DISPLAY action. The mapping rules are triggered automatically by the KLONE interpreter. This is facilitated by the interpreter's "pushing down" a Concept to the most specific place it can be considered to belong in the taxonomy (using only "analytic", definitional constraints). Figure 3 illustrates schematically the way a Concept can descend to the most specific level implied by its internal description. The Concept being added to the network is an NP whose head is "ARC" and whose modifier is "PUSH" (NP@OO23). It is initially considered a direct (Generic) subConoept of the Concept for its basic syntactic type (NP). Its Role structure, however, implies that it in fact belongs in a more restricted subclass of NP's, that is, TYPED-ARC-NP (an NP whose head is an ARC-NOUN and whose modifier is an ARC-TYPE-WORD). The interpreter, on the basis of only definitional constraints expressed in the network, places the new Concept below its "most specific subsumer" -- the proper place for it in the taxonomy. The process proceeds incrementally, with each new piece of the constituent possibly causing further descent. In this case, NP@O023 would initially only have its head Role specified, and on that basis, it would be placed under ARC-NP (which is "an NP whose head is an ARC-NOUN"). Then the parser would add the modifier specification, causing the Concept's descent to the resting place shown in the right half of Figure 3. 
When the constituent whose description is being added to the network is "popped" in the parser, its IOL.ONE descriptiom 35 Figure U. XLONE description of glgure 3. Automatic Concept descent. is indtvidueted -- causing the invocation of all "WHEN- Individuated" attached procedures inherited through superconcept Cables. These procedures cause an interpretation for the constituent to be built on the basis of the interpretations of component parts of the syntactic description. This IAteral semantic interpretation of a phrase -- also a KLONE structure -- is the "input" to the discourse component. An important element of this interface between the syntactic processor and the discourse component is that the parser/interpreter commits itself only to information explicitly present In the input phrase, and leaves all inference about quantifier scope, etc. to the discourse expert. Two kinds of representa- tional structures support this. The Concept O3[T (for "determined set") is used extensively to capture sets implicit in noun phrases and clauses. ~EYs use the inherent multiplicity of RoleSets to group together several entities under a single Concept, and associate determiners (deCinlte/indeflnite, quantifiers, etc.) with such a set of entities. A DSET can express the characteristics of a set of entities without enumerating them explicitly, or even indicating how many members the set is expected to have. RoleYalueMaps a11ow ,constraints between DSETs to be expressed in a general way -- a RoleValueMsp expresses a subset or equallty relation between two RoleSets. Such relations can be constructed without knowlng in advance the csrdinallty of the sets or any of their members. Figure 4 illustrates the use of these structures to express the intent of the sentence, "Show me states S/NP, S/AUX, and S/DCL "e. DSET#O035 represents the interpretation of the noun phrase, "the states ~/HP, S/AUX, and ~/DCL". The generic DSET Concept has two Roles, mamb~r and determiner. The member Role can be filled multiply, and therein lies the "settedness" of the []SET. [~ET#O035 has a particularized version of the • RoleSets in this figure are drawn as squares with circles around them. RoleSets with filled-in circles are a special kind of particularized RoleSet that can occur only in Individual Concepts. The RoleValueMap is pictured as a diamond. "Show me states S/NP, S/AUX, and S/DCL". member Role: Role R1 represents the set oC three states mentioned in the noun phrase, as a group. Thus, the Value Restriction of R1, STATE, applies to each member. The three 1Roles of DSETIO035, connected by "Satisfies" links to the particularized member RoleSat, indicate that the particular states are the members of the set e. The other DSET in the figure, r~ETmO037, represents the clause-level structure of the sentence. The clause has been interpreted into something like "the user has performed what looks on the surface to be a request for the system to show the user some set oC states". This captures several kinds of indeterminacy: (1) that the sentence may only be a request at the surface level ("Don't you know that pl&s can't fly?" looks like a request to inform), (2) that there is more than one way to effect a "show n ("show n could mean redraw the entire display, change it slightly to include a new object, or simply highlight an existing one), (3) that it is not clear how many operations are actually being requested (showir~ three objects could take one, two, or three actions). 
TherefOre, the interpretation uses Generic Concepts to describe the kind of events appearing in the surface form of the sentence and makes no ccmmitment to the number of them requested. The only commitment to "quantiflcetionel" information ls expressed by the Role- ValueMap. Its two pointers, X (pointin& to the member Role of nSET#O035) and yea (pointing to the object of • The Value Restriction. STATE, is redundant here, since the members of this particular set were explicitly specified (and are known to be states). In other cases, the information is more useful. For example, no 1Roles would be constructed by the parser if the sentence were "Are there three states?"; only one would be constructed in "Show me state S/NP and its two nearest neighbors". On the other hand, no Value Restriction would be directly present on Role R1 if the noun phrase were just "S/NP. S/AUX, and S/DCL". ee ¥ is a chained pointer acing first through the member Role of ~SET~O037, then throu6h the act Role of S-R£QUEST~O038, and finally to the o~-ent Role of SHOWeO035. It is considered to refer to the set of ZRoles expressing the objects of all SHOW events ultimately S-REQUESTed, when it is determined exactly how many there are to be (i.e. when the 1Roles of 36 the requested act), indicate that the ultimate set of things to be shown, no matter how many particular SHOW events take place, must be the same as the set of members in the noun phrase DSET (namely, the three states). As mentioned, semantic interpretation invokes the discourse expert, This program looks to a plan that it is hypothesizing its user to be following in order to interpret indirect speech acts. Following [1], the speech acts REQUEST, INFORM, INFORMREF, and INFORMIF are defined as producing certain effects by means of the heater's recognition of the speaker's intention to produce these effects. Indirect speech act recognition proceeds by inferring what the user wants the system to think is his/her plan. Plan-recognition involves making inferences of the form, "the user did this action in order to produce that effect, which s/he wanted to enable him/her to do this (next) action". Making inferences at the level of "intended plan recognition" is begun by analyzing the user's utterance as a "surface" speech act (SURFACE-REQUEST or SURFACE- INFORM) indicating what the utterance "looks like". By performing plan-recognition inferences whose :plausibility is ascertained by using mutual beliefs, the system can, for instance, reason that what looked to be an INFORM of the user's goal is actually a REQUEST to include some portion of the ATN into the display. Thus, the second clause of the utterance, "No; I want to be able to see S/AUX," is analyzed as a REQUEST to INCLUDE S/AUX by the following chain of plan-recognition inferences: The system believes (1) the user has performed a SURFACE-INFORM of his/her goal; thus (2) the user intends for the system to believe that the user wants to be able to see S/AUX. Since this requires that S/AUX be visible, (3) the user intends for the system to believe that the user wants the system to plan an action to make S/AUX visible. Because the "No" leads to an expectation that the user might want to modify the display, the system plans to INCLUDE S/AUX in the existing display, rather than DISPLAY S/AUX alone. (q) Hence, the user intends for the system to believe that user wants the system to INCLUDE S/AUX. (5) The user has performed a REQUEST to INCLUDE. The system responds by planning that action. 
In addition to using Contexts to hold descriptions of beliefs and wants, the plan-recognition process makes extensive use of RoleValueMaps and ~SETs (see Figure 4). Plan-recognition inferences proceed using Just the clause-level structur~ and pay no attention to the particulars of the noun phrase interpretations. The system creates new BSETs for intermediate sets and equates them to previous ones by RoleValueMaps, as, for example, when it decides to do a SHOW whose object is to be the same as whatever was to be visible. At the end of plan-recognltion the system may need to trace through the constructed RoleValuaMaps to find all sets equivalent to a given one. For instance, when it determines that it needs to know which set of things to display, highlight, or include, it treats the equated RoleValueMaps as a set of rewrite rules, traces back to the original noun phrase DSET, and then tries to finds the referent of that DSET a. DSET#OO37 are finally specified). Thus, if there are ultimately two SHOWs, one of one state and the other of two, the Y pointer implicitly refers to the set of all three states shown. e The system only finds referents when necessary. This depends on the user's speech acts and the system's needs in understanding and complying vith them. Thus, it is Finally, not only are parse structures and semantic interpretations represented in KLONE, but the data base -- the ATN being discussed -- is as well (see Figure 2 above). Further, descriptions of how to display the ATN, and general descriptions of coordinate mappings and other display information are represented too. Commands to the display expert are expressed as Concepts involving actions like SHOW, CENTER, etc. whose "arguments" are descriptions of desired shapes, etc. Derivations of particular display forms from generic descriptions, or from mapping changes, are carried out by the attached procedure mechanism. Finally, once the particular shapes are decided upon, drawing is achieved by invoking "how to draw" procedures attached to display form Concepts. Once again, the taxone~mic nature of the structured inheritance net allows domain structure to be expressed in a natural and useful way. Acknowledgements The prototype natural language system was the result of a tremendous effort by several people: Rusty Bobrow was responsible for the parser and syntactic taxonomy, although his support in design and implementation of [CLONE was as extensive and as important; Phil Cohen designed and built the discourse/speech act component that does all of the inference in the system; Jack Klovstad did the graphics, building on an existing system (AIPS) built by Norton Greenfeld, Martin Yonke, Eugene Ciccarelli, and Frank Zdybel. Finally, Bill Woods built a pseudo-English input parser that allowed us to easily build complex KLONE structures with a minimum of effort. Many thanks to Phil Cohen, Candy Stdner, and Bonnie Webber for help with this paper. This research was supported by the Advanced Research ProJects Agency of the Department of Defense and was monitored by ONR under Contract No. N0001~-77-C-0378. CI] 3? [2] [3] References [q] • C5] Allen, James F. A Plan-baaed Approach to Speech Act Recognition. Technical Report No. 131/79. Toronto, Ontario: Dept. of Computer Science, University of Toronto, February 1979. Bobrow, R. J. The RUB System. In Research in Natural Language Understanding: Quarterly Progress Report No. 3 (1 March 1978 to 31 May 1978). BBN Report No. 3878. Cambridge, HA: Bolt Beranek and Newman Inc., July 1978. 
Brachman, R. J. A Structural Paradigm for Representing Knowledge. Ph.D. Dissertation, Harvard University, Cambridge, MA, May 1977. Also BBN Report No. 3605. Cambridge, MA: Bolt Beranek and Newman Inc., May 1978.
Burton, R. R. Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems. BBN Report No. 3453. Cambridge, MA: Bolt Beranek and Newman Inc., December 1976.
Woods, W. A., Kaplan, R. M., and Nash-Webber, B. The Lunar Sciences Natural Language Information System: Final Report. BBN Report No. 2378. Cambridge, MA: Bolt Beranek and Newman Inc., 1972.
intended that a naming speech act like "Call that the complement network" will not cause a search for the referent of "the complement network".
1979
9
ON THE SPATIAL USES OF PREPOSITIONS
Annette Herskovits
Linguistics Department, Stanford University

At first glance, the spatial uses of prepositions seem to constitute a good semantic domain for a computational approach. One expects such uses will refer more or less strictly to a closed, explicit, and precise chunk of world knowledge. Such an attitude is expressed in the following statement: "Given descriptions of the shape of two objects, given their location (for example, by means of coordinates in some system of reference), and, in some cases, the location of an observer, one can select an appropriate preposition." This paper shows the fallacy of this claim. It addresses the problem of interpreting and generating "locative predications" (expressions made up of two noun-phrases governed by a preposition used spatially). It identifies and describes a number of object characteristics beyond shape (section 1) and contextual factors (section 2) which bear on these processes. Drawing on these descriptions, the third section proposes core meanings for two categories of prepositions, and describes some of the transformations these core meanings are subject to in context. The last section outlines the main directions of inquiry suggested by the examples and observations in the paper.

1. OBJECT CHARACTERISTICS

Throughout the paper, I use the term "object", meaning, strictly speaking, the object together with some lexical label. In effect, the choice of preposition depends on the lexical category associated with the object by the noun-phrase used to refer to it. And such a category is not uniquely defined. There are different levels in the categorization hierarchy (e.g. "end table", "table", "piece of furniture"), but also different perspectives on the object. Consider the picture below.

[figure: a patch of grass in front of a house]

That patch of grass could be referred to alternately as a front yard, a lawn, grass, a patch of grass, etc. (to assume that these phrases refer to the same object, one must see the grass as a metonymic substitute for this patch of grass, and the front yard as some "area" rather than a "slice" including air above and ground under; neither view is unreasonable). The permissible prepositions, and their interpretation, vary with each referring phrase: compare in/on the grass, in/on the patch of grass, in/(*on) the front yard, on/(*in) the lawn [Fillmore 1971]. With this warning, I will go on speaking of "object characteristics", "object identity", etc.

Some of the object characteristics used in production and interpretation can be computed from the shape of the objects -- the axes of symmetry (needed for across the road and along the road), the "top surface" (on the label), the "outline" (the bird in the tree), etc. (for a description of some of these characteristics, and of their role in comprehension, see [Boggess 1979]). Other characteristics are not deducible from shape. These include:

1.1. ALTERNATE GEOMETRIC DESCRIPTIONS

Objects identical in shape may be "conceived" differently, for instance as surface or as enclosure. This may be a choice available to the speaker to emphasize certain aspects (in/on the rug), or it may be determined for the category of the reference object (on the football field). In under the water, the water stands for the upper free surface of the water; in in the water, it is conceived as a volume. A whole category of objects follows this rule: see (under/in) the (snow/lake/ocean/sand/...).
Such objects tend to be viewed only as volumes with "underneath': undcrn~A fat lake is generally interpreted as meaning "under the lower surface of that body of water'. In tan crack In tan 6~wl, the crack Is In the volume defined by the normal surface of the bowl in Its uncracked state. In tan milk in tat bowl, the milk Is In the volume enclosed by the bowl and limited upward by a plone through the rim. 1.2. FUNCTION One says in ran disk and on the tray though these objects uy be essentially Identical in shape. One will not ordinarily say tan cat Is in :At t~e, but un~r tan t~e, even with the cage-like table below. /oAn is et X Often means that John Is using I as one normally uses it (JoAn is at his desk). If normal use implies being on or m X, then at Is not used (John is in or on the bed, but not at). And to the right of the chadr Is defined by reference to a typical user of the chair. 1.3. TYPICAl, PHYSICAl, OR GEOMETRICAL CONTEXT When using in with areas, it Is not sufficient that the reference object be two-dimensional; that object must be part of a surface divided into cells. One does not draw a line in a blackboard; but in tat nlargin is acceptable, because the margin is a subdivision of a page. In the same fashion, ' geographical areas (England, tat county, etc.) are sections of a divided surface. Some objects are exclusively conceptualized as parts of a "cell structure" and cannot then follow at (*at his room, *at England). Other objects can be conceptualized both as elements of a cell structure (in the village), or as one of a set of separate places (at tat ~illage). Or consider ~ard: when 1~ is a part of the grounds of a house, one is restricted to In. But of somebody working In a Junkyard, one could say he is at the )¢rd, reflecting a view of the yard as one of a set of separate locations. If a door is in its typical context, i.e. part of a wall, then interpretation of m tk~ right of Me d~r must be based on the door's own axes. Otherwise (In a hardware store for example) an observer's line of sight my override the door's cross-axis. 1.4. RELATIVE MOBILITY The mobility of the reference object relative to the located object influences the order of the nominals around the preposition: the more mobile object normally precedes the preposition. One will not say t/ur ~&,n~ bot~t i~ tam one in a cap, but tke one ~dk a ~p on it. Following Tally (1978a], I will call the located object the "Figure", and the reference object the "Ground", when discussing the order of the nominals. Human beings tend to play havoc with the relative mobility rule, either because they are the preferred topic (flee man i~ a Nu~ coat), or -- as center of the universe -- preferred reference object (tke EmpOe 3t~e building i~ in front of me). Typicality plays an ilportant role in deterllnlng an appropriate locative predication (and no doubt other types of expressions). The choice of expression tends to depend not on particular (non prototypicaI) attributes of the objects considered, but rather on typical attributes of the category to which they are assigned by the linguistic expression. If typical conditions do not obtain, they tend to be ignored, unless one has sole special reason to bring attention to the atypical conditions. If for Instance the cap of a bottle were glued to the wall, one would still say tk~ bo~tit wLtk a cap on it. Even If a tray has very high sides, one will say on the tr¢ 7. Consider also the table pictured above. 
Imagine the space under it progressively more solidly enclosed; there Is s point at which one might be struck by this and say in tat t<d~le. But this point is rather far along; even with a table with a solid sheZt at floor level, people consistently describe objects on that shelf as unde~tkt ta~e. 2. ~QHTEXTUAL FACTORS The choice of an appropriate locative predication 41st depends on various aspects of the context. Some o~ these contextual characteristics are discussed In this section as. if they were neatly separable; in fact, all are interdependent in complex ways, and these lnterrrelations must become clear before we can design models of comprehension and production. 2. I. CONTEXT DEPENDENT PARAMETERS. These Include the location of an observer, for the deictic uses of some preposltlons ~n fro~t oJ'lkt try), and an implicit (fuzzy) distance threshold for the prepositions indicating proximity (Denofsgy 1975]. ]n the gas-sfat~ ~s at fat freta~rJ, an implicit cross-path is assumed. To say that "freeway" occurs as a letonylic substitute for "at the intersection of a cross-rood with the freeway" is not very useful, since no general rule of metonymy will predict this one (as natural as such a substitution may sound to English speakers, it is not acceptable in French: see ~t poste ~es~tct ~t ~ la route). 2.2. F}GURE/GROUND AS 1GNM~T The assignment of the roles of Figure and Ground depends pri~rily on which of the two objects' location Is at issue. The object whose location is at issue precedes the preposition: compare the tenue nt~zr tam cku~k and fA, ¢kurcA n~r tat kouJt. BUt the assigment must also respect the relative mobility.rule. TAt kouJe n~r far ~urck is reversible because both house and church are equally immobile; but tam ~cycle near tam ¢ku~h Is not. When one wishes to locate a less mobile object with respect to a more mobile one, : there are a number of periphrastic devices -- one" being the use of "with ~ as in the earlier example (tat bottlt with ¢ ~p on ~t); "with", not being basically locative, Is not subject to this relative mobility rule. See also t/~ /~se is n~r wk~t tat ~¢~t Is (but *tat ko~e is ne¢r the ~¢')cie ~almy 1978aj); this turns b~jc/e Into an immovable entity, namely a piece. The mobility rule Is In fact a consequence of the principle that the object whose location is at Issue should precede the preposition. The Ground is typically bigger and less mobile than the Figure, since those objects whose location is most commonly at issue are those which move around, and a good reference object is one whose location can be Inferred from Its name, and thus had better be the sase over some time. What Is at issue in turn depends on the speaker's purpose in constructing the locative predication, and how It fits Into his/her overall discourse plan. 2. 3. VARYING V|EWPO[NT ON THE OBJECT Mainly this Involves the contrast between a close-up and a remote view of the objects. Most often, this is not a Batter of actual distance, but s way of viewing an object for a given purpose: one Jay choose to ignore one or more dimensions, or lnternal characteristics of the object. For example, a road uy be seen as a strip (a truck On tke ro¢d), or a line (a oUlateon Mr ro~L~don). Normally beMnd tar kouJt viii be based on the house's own axes. But when looking from some distance, one My use one's line of sight as axis. Another aspect of viewpoint, is the bounded/unbounded distinction. 
Compare w~ng f~rougk versus across the ~at~ [Talmy 1978b]: in the former the boundaries of the body of water are Ignored, but in the latter, the extension of the body of water from one end to another is involved. 2.4. RELEVANCE Give= the pictures below, one will say t/~ ~,~d un~r tat bomt, but rht bulb in Mt s~k~. The socket is still functioning as a socget when facing down, the bowl not as a bowl. If function is the relevant aspect, it Is of no Interest to distinguish between situations where bulb and socket are as above, and their upside down versions. With the bowl, this distinction matters. Similarly, the pear in 6 is m tke ~I. It is not normally useful to distinguish between situation A and A E For the two examples Just described, one could contrive Contexts in which the distinctions norsally ignored would be important. And certainly adequate lOdels of language should account for this possibility. A locative expression may describe the general intention of a per}on over some time, rather than his precise location at the tlme of speaking. I could say L~nn is.at.t~e store even if l knew .Lynn might still be on her way. But this may not be appropriate (e.g. if I know the addressee is at the store). Relevance is important for Grlcean inferences. For instance, from /on is near his desk, one can generally infer ion t~ not at his desk. If however I asked a friend on the phone "are you near your desk? could you look up the address...", an appropriate answer is "yes", even if my friend is at his/her desk. In ~hls context proximity is the relevant aspect, and "near" becomes appropriate. 2.5. SALIENCE The book below left is on the table, the lid (right) is not, because the intervening relation between the lid and the jar is salient. Such salience Is not primarily a umtter of the size of some intervening object. ~#.book ~5'l~lid One generally says that X is tn eke field and in the ~mi, whenever field or bowl contain X. One ~ay however say the dust on the ~ml, and the fertilizer on the ~eid. An adhering thln lamina brings attention to contact rather than inclusion. 2.6. HIGHLIGHTING SOME BACKGROUND ELEMENT The choice between expressions is often a matter of bringing attention to some. background element rather than signalling differences of fact. Thus to tke right, as contrasted with on the rigM, tends to highlight the distance between the two objects, and to evoke travel away from the reference object; the contrast cannot always be described in terms of objective differences in the situations (it sometimes is: thus if a third object of the same kind Is between the two considered, only to the right is appropriate). And on the right side of the bu~lW~ng as contrasted with on the rigkt ~ the b~l~ng brings attention to the wall. Consider also Bogota is melon the equator; "at" will be preferred If one wishes to signal the presence of some transverse line (e.g. a travel traJectory~. 2. 7. INDETERMINACY Most spatial relations are true given a certain tolerance. The tolerance has a lower limit defined by the nature of the objects; its effective value depends on one's purpose, and the precision of one's knowledge. Thus, the angular precision with which ~r~tly to the ~gM is defined varies with silverware on the table, chess pieces on a board, or houses on a block. 2.8. CONTRAST "Polar concepts", i.e. terms like to the ngkG may behave llke implicit comparatives. In some sense, to the right is better realized the closer the located object is to the "right axis". Thus, if I said put the ckair to the righ! 
of the desk, I would expect you to put it more or less on the right axis of the desk. And, in the figure below, A is to the right of B only in the absence of C. The location of A must be contrasted with that of similar objects in the relevant part of space. (One could however say here: A is to the right and behind B, or A is diagonally to the right of B. This suggests that even in the presence of C, A is to the right of B is true, but "uncooperative" [Grice 1974]. However, it is "uncooperative" precisely because of some intrinsic property of the concept to the right -- i.e. because "closer to the axis" is in some sense a better way to realize to the right. Even if one grants some usefulness to the semantic/pragmatic distinction, it does not neatly apply here.) A similar use of contrast can be seen with the chair is in the corner in the figure below. It is not appropriate unless the armchair be removed.

[figure: a chair and an armchair near the corner of a room]

The concept of a "corner" has built in that in the corner becomes less appropriate as one gets further from the vertex itself.

2.9. OTHERS

Many uses of the prepositions cannot be explained in terms of any of the above factors. One then needs a description of the context of use at a rather specific level. Consider for example the contexts in which one will say Suzy is at the playground versus in the playground. In would be (i) preferred if the speaker can see Suzy, (ii) required if the addressee expects Suzy to be just outside the playground, (iii) required if the speaker her/himself is in the playground (an analogous contrast exists between at the beach and on the beach). These conditions "suggest" a close-up view, and that the speaker's knowledge is precise; by contrast, "at" suggests a remote view, and imprecise knowledge. But "to suggest" is not to imply: one cannot infer these conditions of use from the ideas of a remote versus a close-up view.

3. CORE MEANINGS

With most of the examples given, the explanation suggested for the choice of a preposition assumes the existence of a "core meaning". This core meaning is basically a geometrical relationship between geometrical entities. Thus, in a given context, "geometrical descriptions" (say a point, line, surface, volume, lamina, etc.) are mapped onto the subject and object of the preposition. Strictly speaking, the core meanings are -- at best -- true only of these geometric descriptions. In fact, they may not even hold for any such geometric description -- see the pear in a bowl example above, assuming the natural core meaning for "in", i.e. "inclusion". Yet, the core meaning is then present as "prototype". Here are informal definitions of the core meanings for two categories of prepositions, designated as "topological" (at, on, in), and "projective" (to the right, behind, etc.).

Topological prepositions:
in: partial inclusion of a geometrical construct in a volume, an area, or a line.
on: contiguity, adjacency of a geometrical construct with a surface, or a line.
at: coincidence of a point with a point in space.

In actual context, inclusion, contiguity, and coincidence need not be true. Thus the book on top of a pile of books on the table can be said on the table, and Mary is at the gate when very close to it. But the relations represent the "ideal" around which particular instances gravitate. Thus at implies the closest reasonable relationship between two objects, and coincidence where sensible (the center of the circle is at the intersection of the axes).
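To make the three topological core meanings and the role of tolerance concrete, here is a small sketch in which objects are reduced to crude geometric descriptions. It is my own illustration, not a proposal from this paper: the box representation, the height-based contiguity test, and the numeric thresholds are simplifying assumptions, and the paper's point is precisely that such tests, on their own, do not determine usage.

```python
import numpy as np

def at_core(located_pt, reference_pt, tolerance):
    """'at': ideal coincidence of two points, accepted within a tolerance
    that depends on purpose and scale (silverware vs. houses on a block)."""
    return np.linalg.norm(np.asarray(located_pt) - np.asarray(reference_pt)) <= tolerance

def in_core(located_box, reference_volume):
    """'in': partial inclusion of the located construct in a volume
    (boxes are (lower_corner, upper_corner) pairs; any overlap counts)."""
    (lo1, hi1), (lo2, hi2) = located_box, reference_volume
    overlap = np.maximum(0.0, np.minimum(hi1, hi2) - np.maximum(lo1, lo2))
    return bool(np.all(overlap > 0))

def on_core(located_box, surface_height, tolerance):
    """'on': contiguity with a (here, horizontal) supporting surface --
    the lower face of the located object lies within tolerance of it."""
    (lo, _hi) = located_box
    return abs(lo[2] - surface_height) <= tolerance

# Core meanings are prototypes, not necessary conditions: the book on top of
# a pile fails the contiguity test yet is still said to be "on the table".
book = (np.array([0.0, 0.0, 0.30]), np.array([0.20, 0.30, 0.35]))
print(on_core(book, surface_height=0.0, tolerance=0.02))   # False, but "on the table" is fine
```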
Of course, the core meanings are not sufficient to determine the conditions of use of a given preposition: one must also know precisely which deviations from the ideal are permitted. One principal process mediating between core meanings and actual conditions of use is the mapping of objects onto points, surfaces, and volumes. I am not saying that the core meanings presented here are the only possible ones. Only when core meanings are incorporated in a global explanatory system will it be possible to make rigorous arguments for alternate choices. Those proposed here represent a good starting point.

Projective prepositions:
Each of these prepositions involves -- through fact, supposition, or metaphor -- a "point of observation". A point of observation consists of two vectors, one indicating the intrinsic vertical of the observer (it will not be the gravitational vertical if the observer is lying down, or not in the gravitational field), and the other orthogonal to the first along the line of sight. These two vectors completely specify four coplanar orthogonal half-line axes associated with the point of observation: the "front", "right", "back", and "left" axes, in clockwise order. In the core meaning definition of these prepositions, reference and located objects are points. Given a point of observation, one can specify axes associated with the reference object -- the "base axes" (right, left, front, and back) by reference to which to the right, behind, etc., will be defined. These axes originate at the reference object. If the point of observation (PObs) coincides with the reference object (PRef) (figure A below), the base axes are identical to those of the point of observation. If the point of observation is away from the reference object (figure B), the base axes are a mirror image of those of the point of observation -- the mirror plane being the bisector of the segment joining point of observation to reference object.

[figure: A -- base axes when PObs coincides with PRef; B -- mirrored base axes when PObs is away from PRef]

There are thus two possible orders of the base axes, as shown in A and B. I will define the core meaning of each projective preposition as follows: given a punctual reference object (PRef), punctual located object (PLoc), and a point of observation, base axes can be constructed according to the procedure outlined above: PLoc is to the left of PRef iff it is located on the left base axis. Analogous definitions for the other prepositions are easy to formulate. A few examples will help understand how these core meanings are manifest in the actual uses of the prepositions. In in front of a rolling stone, the point of observation is "virtual" -- i.e. it is a hypothetical location and direction for viewing: the location is coincident with the stone, and the direction is the direction of movement. One must of course assume -- as with the objects in the examples that follow -- that the stone is assimilated to a point. In to the right of the chair, the base axes may be specified as those intrinsic to the chair -- i.e. by reference to a typical user. The point of observation is then coincident with the reference object -- namely the chair. Coincidence may be the case when the base axes are not intrinsic to the reference object: for instance on the right of the stool may be defined with respect to somebody sitting on it, given a round stool with no intrinsic front axis. Reference object and point of observation are separated in the moon behind the cloud.
They can also be separated when the base axes are intrinsic to the reference object: wlth on t~ ri[kt side of tle closet, the point of observation Is defined by a typical user facing the closet. Again, one might define the core meanings differently. In .particular one could define the core meaning of "to the right" say, as implying location In the whole right-hand half-space instead of on the axis. The choice adopted here reflects the fact noted In earlier examples that the "ideal" realization of to the ngkt is with the located object on the right base axis. Processes other than the mapping of objects onto points may mediate between core meanings and actual conditions of use. The reference object amy rotate: where is on t~e right nd, r of the ~nting when the painting is tllted7 Tie tree to tie ri@kt of tie ro(~ actually means "at sob point of the road" -- think of a curving road), end the ~ ~nd tat barbed wirefence assumes "integration" along the length of the fence (note one cannot say t/~ty to tie ntkt oftkef~ce to the Same effect -- that is referring In thls way to the whole city. The line of sight Is a favored axis, as compared to right, left, and hack axes). 4. SOME CONCLUS|( .'S Here are the main problels and directions of inquiry suggested by the examples in this paper. One cannot fully explain the use of a locative predication in a given situation in terms of the core meanings together with some inferences from general principles involving object knowledge, salience, relevance, the precision desired, etc. There w111 always remain uses involving some degree of arbitrariness (most uses are "motivated", If not "compositional" [Flllmore Ig?9] -- i.e. the morphemes composing the appropriate expression are normally selected from "reasonable" candidates). Even where such principles are at play, they may operate not at the comprehension/production level, but rather at the phylogenetic level. To sort principled aspects of use from arbitrary ones, and to understand exactly where such principles operate, one must of course first establish their existence and nature. In terms of knowledge of the physical world, I believe one importan~ step. toward adequate representations is to put the experiencer back into the picture. That is, It is not enough -- or even always necessary -- to know where what objects are; one must also consider how much fits into one field of view, how things "appear" as opposed, to how they "are", how this changes with viewing distance, visibility, obstruction, etc. General principles of salience should ue studied: how some object parts or relations are selected as most important -- either in all imaginable contexts, or in some contexts. Salience underlies many Instances of metonymy (in at t~e front of tAe t~atre, "theatre" actually refers to the part occupied by the audience). Many'questions revolve around the issue of "relevance" -- of "what matters, to whom, in what circumstances" rather than the traditional concern with what is true. All existing artificial Intelligence programs have ignored this problem by using a limited vocabulary In a limited domain, so that the question of selecting relevant utterances never arises. Relevance is linked to the speaker's purpose, as uny of the contextual factors described in this paper -- indeterminacy, Gricean inferences, highlighting of background elements, determination of the Figure/Ground relationship, etc. The set of "expressible" 8oals is constrained by the "potential" of the language, i.e. 
by a semantic system with finitely many options. One can only want to say what can be said, and said in a reasonable amount of time. Clearly, "planning" for natural language processing is a very Important problem. Purpose however, will not explain everything one says. Simple associative mechanisms must sometimes be responsible for what one says. For instance, some background element may be highlighted -- provided Some linguistic means to do so exists -- only because some passive associative link has brought it to attention. Once general principles are better understood, It Is an open question whether they are used by speakers, or whether their explanatory power Is at the phyiogenettc level (and will thus be only Implicit in the structure of the knowledge representation). For instance, although there is a general princiole that the located object should be more mobile than the reference object, production may not proceed by inferences from this general principle together with scenarios Involving the two objects. The linguistic expression (or pattern for expressions) may be attached to some representation of a "situation type" involving the two objects (or two superordinate objects). And although "at" generally implies the closest reasonable relationship between two objects, such a definition may never be used by a speaker -- or used only In the creation or understanding of novel types of expressions, metaphors, witticisms, etc. What speakers do, and what conputer models of comprehension and production processes should be made to do, are two different things: the latter depends on the constructer's goals, which should be subjected to some scrutiny. A computational treatment of the use of prepositions will require much greater sophistication than naive representation theory would lead us to expect. REFERENCES Boggess, Lois C. 1979. Computational interpretation of English spatial prepositions. PHO thesis, University of Illinois, Urbana. Denofsky, Murray E. 1976. How near is near: a near specialist. M.~.T. Artificial I~telltgence Lab. memo no. 344, Cambrldge, Mass. Fillmore, Charles J. 1971. Santa Cruz lectures on detxls, presented at the University of California, Santa Cruz. Mimeographed, Indlan~ University Linguistics Club, Bloomington. , 1979. Innocence: a second idealization for linguistics. Pr~ecdings. FlftA Annual Mitring, Btrkel~ Llnguls~ 3oci¢~, 63-76. University of California, Berkeley. Grice, H. Paul. 1974. Logic and conversation. 3~ntaz and Semantic: ~pe~A o~ts, vol. 3, ed. by Peter Cole and Jerry L. Morgan, 41-$8. New York: Academic Press. Talmy, beonard~ 1978a. Figure and ground in complex sentences. Unluttsals of Human Language, vol. 4, ed. by Joseph H. Greenberg et' al, 625-649. "Stanford, Col: Stanford Univ..Press. 1978b. The Relation ot'gramur to cognition -- a synopsis. T~rtt~allssuts In Natur~ Language Pr~sin~2. 14-24. Coordinated Science Lab., University of illinois, Urbana-Champatgn.
1980
1
ON THE INDEPENDENCE OF DISCOURSE STRUCTURE AND SEMANTIC DOMAIN Charlotte Linde ~ J.A. Goguen + * I. THE STATUS OF DISCOURSE STRUCTURE Traditionally, linguistics has been concerned with units at the level of the sentence or below, but recently, a body of research has emerged which demonstrates the existence and organization of linguistic units larger than the sentence. (Chafe, 1974; Goguen, Linde, and Weiner, to appear; Grosz, 1977; Halliday and Hasan, 1976; Labov, 1972; Linde, 1974, 1979, 1980a,198Cb; Linde and Goguen, 1978; Linde and Labov, 1975; Folanyi, 1978; Weiner, 1979.) Each such study raises a question about whether the structure discovered is a property of the organization of Language or whether it is entirely a property of the semantic domain. That is, are we discov- ering general facts about the structure of language at a level beyond the sentence, or are we discovering particular facts about apartment layouts, water pump repair, Watergate politics, etc? Such a crude question does not arise with regard to sentences. Although much of the last twenty years of research in sentential syntax and semantics has been devoted to the investigat- ion of the degree to which syntactic structure can be described independently of semantics, to our knowledge, no one has attempted to argue that all observable regularities of sentential structure are attributable to the structure of the real world plus general cognitive abilities. Yet this claim is often made about regular- ities of linguistic structure at the discourse level. In order =o demonstrate that at leas= some of the structure found at the discourse level is independent of the structure of the semantic domain, we may show that there are discourse regularities across semantic domains. As primary data, we will use apartment layout description, small group planning, and explanation. These have all been found to be discourse units, that is, bounded linguistic units one level higher than the sentential level, and have all been described within the same formal theory. It should be noted that we do not claim that the structures found in these discourse units is entirely independent of structure of the semantic domain, because of course the structure of the domain has some effect. 2. TREE TRANSFORMATIONS IN DISCOURSE PRODUCTION The discourse units mentioned above have all been found to be tree structured. This is a claim that any such discourse can be divided into parts such that there are significant relations of dominance among these parts. These trees can be viewed as being constructed by a sequence of transformations on an initial empty tree, with each transformation corresponding to an utterance by participants, which may add, delete, or move nodes of the tree. The sequence of transformations encodes the construction of the discourse as it actually proceeds in time. We now turn to a discussion of the discourse units which have been analysed according to this model. * Structural Semantics, P.O. Box 707, Palo Alto, California 94302. + SRI International, 333 Ravenswood Ave., Menlo Park, California 94025. 2.1 SPATIAL DESCRIPTIONS AS TOURS In an investigation of the description of spatial networks, speakers were asked to describe the layout of their apartment. The vast majority of speakers used a "tour strategy," which takes the hearer on an imaginary tour of the apartment, building up the description of the layout by successive mention of each room and its position. 
This tour forms a tree composed of the entry to the apartment as root with the rooms and their locations as nodes, and with an associated pointer indicating the current focus of attention, expressed by unstressed you. It might be argued that the tree structure of these descriptions is a consequence of the structure of apartments rather than of the structure of discourse. However, there are apartments which are not tree structured, because some rooms have more than one entrance, thus allowing multiple routes to the same point; but in their descriptions, speakers traverse only one route; that is, loops in the apartment are always cut in the descriptions. [ Thus, although some of the tree structure may be attributable to the physical structure being described, some of it is a consequence of the ease of expressing tree structures in language, and the difficulty of expressing graph structures. The tree structure of apartment descriptions is construc- ted using only addition transformations, and pointer movement transformations (called "pops" in tinde and Goguen (1978)) which bring the focus of attention back from a branch which has been traversed to the point of branching. The construction of the tree is entirely depth first. 2.2 SPATIAL DESCRIPTIONS AS MAPS In describing apartment layouts, there is a minority strategy, used by 4% of the speakers (3 out of 72 cases of the data of Linde (1974)) describing the layout in the form of a map. The speaker first describes the outside shape, then sketches the internal spatial divisions, and finally labels each internal division. This strategy can also be described as a tree construction, in this case, a breadth first traversal with the root being the outside shape, the internal divisions the next layer of nodes, and the names of these divisions the terminal nodes. Because there are so few example, it is not possible to give a detailed description of the rules for construction. 2.3 PLANNING We have argued that the structure of apartment layout descriptions is not entirely due to the structure of the semantic domain; however, a question remains as to whether it is the restriction to a limited domain which permits precise description. To investigate this, let us consider the Watergate transcripts, which offer a spectacularly unrestricted semantic domain, specifically those portions in which the president and his advisors engage in the activity of planning. (Linde and Goguen, 1978). Planning sessions form a discourse unit with "[ In more mathematical language, the linear sequence of rooms is the depth first traversal of a minimal spanning tree of the apartment graph. 35 discernable boundaries and a very precisely describable internal structure. Although we can not furnish any detailed description of the semantic domain, we can be extremely precise about the social activity of plan construction. Because the cases we have examined involve planning by a small group, the tree is not constructed exclusively by addition, as are the types discussed above. Deletion, substitution, and movement also occur, as a plan is criticised and altered by all members of the group. Z.4 EXPLANATION A discourse unit similar to planning is explanation. (Weiner, 1979; Goguen, Linde and Weiner to appear.) (By explanation we here include only the discourse unit of the form described below; we exclude discourse units such as narratives or question-response pairs which may socially serve the function of explanation.) 
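Before turning to explanation in detail, the tree-transformation model running through sections 2.1-2.3 can be made concrete with a small sketch. The class and method names below are my own illustration rather than the formal apparatus of Linde and Goguen (1978), but the operations correspond to the transformations discussed above: utterance-by-utterance addition of nodes, movement of a focus-of-attention pointer (the "pop"), and -- only when a group revises a plan -- deletion and movement of subtrees.

```python
class Node:
    def __init__(self, label, parent=None):
        self.label, self.parent, self.children = label, parent, []

class DiscourseTree:
    def __init__(self, root_label):
        self.root = Node(root_label)
        self.focus = self.root                   # current focus of attention

    def add(self, label):
        """An utterance adds a node under the current focus and moves focus to it."""
        child = Node(label, parent=self.focus)
        self.focus.children.append(child)
        self.focus = child
        return child

    def pop(self):
        """Return the focus of attention to the branch point above it."""
        if self.focus.parent is not None:
            self.focus = self.focus.parent

    def delete(self, node):
        """Criticism during group planning may remove a proposed subtree..."""
        node.parent.children.remove(node)

    def move(self, node, new_parent):
        """...or move it elsewhere in the plan."""
        node.parent.children.remove(node)
        node.parent = new_parent
        new_parent.children.append(node)

# A tour-style apartment description is an addition-only, depth-first
# construction: walk down one branch, pop back to the branch point, continue.
tour = DiscourseTree("entry")
tour.add("hallway")
tour.add("kitchen")
tour.pop()                                       # back to the hallway branch point
tour.add("living room")
```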
Informally, explanation is that discourse unit which consists of a proposition to be demonstrated, and a structure of reasons, often multiply embedded reasons, which support it. The data of this study are accounts given of the choice to use the long or short income tax form, explanations of career choices, and material from the Watergate transcripts in which an evaluation is given of how likely a plan is to succeed, with complex reasons for this evaluation. Like apartment descriptions and small group plan- ning, explanation can be described as the transforma- tional construction of a tree structure. Since in the casesexauLined, a single person builds the explanation, there are no reconstructive transformations such as deletion or movement of subtrees; the transformations found are addition and pointer movement. Pointer movement is particularly com~lex in this discourse unit since explanation permits embedded alternate worlds, which require multiple pointers to be maintained. Explanation structure appears to be the same in the three different semantic domains, suggesting that the discourse structure is due to genral rules plus a particular social context, rather than being due to the structure of the semantic domain. 3. CRITERIA FOR EVALUATING DISCOURSE STRUC%~/RES The criticism might be made of these tree structures that an analyst can impose a tree structure on any discourse, without any proof that it is related to what the speaker himself was doing. We would claim that although we have, of course, no direct access to the cognitive processes of speakers, there are two related criteria for evaluating a proposed discourse structure. 3. I TEXT MARKING One criterion for Judging the relative naturalness of a particular analysis is the degree to which the text being analysed contains markers of the structure being postulated. Thus, we have some confidP-ce that the speaker himself is proceeding in terms of a branching structure when we find markers like "Row as you're coming into the front of the apartment, if you go straight rather than go right or left, you come into a large living room area," or "On the one hand, we could try ..." The opposite case would be a text in which the division postulated by an analyst on the basis of some a priori theory had no semantic or syntactic marking in the text. 3.2 FRUITFULNESS OF THE ANALYSIS A second criterion is whether some postulated structure is fruitful in generating further suggestions for how to explore the text. Thus, the tree analyses of apartment layout descriptions, planning, and explanation, give rise to questions such as how various physical layouts are turned into trees, how trees are ~raversed, the social consequences of particular transfoz~l~ions, the apparent psychological ease or difficulty of various transformations, the relation of discourse structure to syntactic structure, etc. (see Linde and Goguen, 1978) By contrast, an unfruitful analysis will give rise to few or no interesting research questions, and will not permit the analyst to investigate questions about the discourse unit which he or she has reason to believe are in,cresting. 4. GENERAL PRINCIPLES OF DISCOURSE STRUCTURE Given that ~hese postulated structures are useful models of what speakers do, we may ask how it is that speakers produce texts with these structures. It is known that children must learn to produce well-formed narratives. It might be hypothesized that each discourse unit must be separately learned, and that each has its own unrelated set of rules. 
However, there is evidence that there are verT general rules for discourse construction, which hold across discourse units, and which can be used ~o construct novel discourse units. The test case for such a hynuthesls is the production of a discourse unlt whlch is not a part of speakers' ordinary repetolre, but rather, is made up for the occasion of the experiment. Such an experiment was performed by asking people to describe the process of getting themaelves and their husbands and children off to work in the morning. (Linde, in preparation) These "morning routines" are typically well-structured and regular; everyone appears to do them the same way. We know that the speakers had never produced such discourses before, since we never in ordinary discourse hear such extended discussions of the details of daily llfe. (Even bores have their limits.) Therefore, the regularities must be the product of the intersection of a particular real world domain, in this case, multiple parallel activities, with very general rules for discourse construction. 2 4.1 META-RULES OF DISCOURSE STRUCTURE We are by no means ready to offer a single general theory of discourse structure; that must wait until a sufficiently large number of discourse types has been investigated in detail. However, the following rules have been observed in two or more discourse units, and it is rules of this type that we would llke to investl- gate in other discourse units. [. The most frequent subordinator for a given discourse unit will have the most minimal marking in the text, most frequently being marked with lexical and. Moreover, it will not be necessary to establish this node before beginning the first branch, but only when the return to the branch point is effected. 2. All other node types which subordinate two or more branches, such as exclusive or or conditional, must be indicated by markers in the text before the first branch is begun. 3. Depth-first traversal is the most usual strategy. 4. Pop markers are available to indicate return to a branch point or higher node; it is never necessary to recapitulate in reverse the entire traversal of a branch. Z This is interesting for the light which it sheds on natural structures for the description of concurrent activities. 36 5. CONCLUSIONS The reason for being interested in regularities of • discourse structure, particularly regularities which hold across a number of discourse types, is that they suggest universals of what is often called "mind," and, more practically, they also suggest features which might be part of systems for language understanding and production. Indeed Welner (to appear) has constructed a system for the production of explanations of U.S. income tax law based on the transformational theory of explanation discussed in section 2.4. There is, moreover, the possibility of designing meta-systems, which might be programmed to handle a variety of discourse types. ACKNOWLEDGEMENTS We would like to thank R. M. Burstall and James Weiner for their help throughout much of the work reported in this paper. We owe our approach to discourse analysis to the work of William Labor, and our basic orientation to Chogyam Trungpa, Rinp~che. REFERF.NCES Chafe, Wallace, 1974. Language and Consciousness. Language. Vol. 50, 111-133. Goguen, J.A., Charlotte Linde, and James Weiner. to appear. The Structure of Natural Explanation. Grosz, Barbara J. 1977. The Representation and Use 9f Focus in Dialogue Understanding. SRI Technical Note 151. Hal~iday, M.A.K. and Ruqaiya N. Hasan, 1976. 
Cohesion in English, Longman, London.
Labov, William, 1972. The Transformation of Experience into Narrative Syntax, in Language in the Inner City, Philadelphia, University of Pennsylvania Press.
Linde, Charlotte, 1974. The Linguistic Encoding of Spatial Information. Columbia University, Department of Linguistics dissertation.
Linde, Charlotte, 1979. Focus of Attention and the Choice of Pronouns in Discourse, in Syntax and Semantics, Vol. 12, Discourse and Syntax, ed. Talmy Givon, Academic Press, New York.
Linde, Charlotte, 1980a. The Organization of Discourse, in The English Language in its Social and Historical Context, ed. Timothy Shopen, Ann Zwicky and Peg Griffen, Winthrop Press, Cambridge, Massachusetts.
Linde, Charlotte, 1980b. The Life Story: A Temporally Discontinuous Discourse Type, in Papers From the Kassel Workshop on Psycholinguistic Models of Production.
Linde, Charlotte, in preparation. The Discourse Structure of the Description of Concurrent Activity.
Linde, Charlotte and J.A. Goguen, 1978. The Structure of Planning Discourse, Journal of Social and Biological Structures, Vol. 1, 219-251.
Linde, Charlotte and William Labov, 1975. Spatial Networks as a Site for the Study of Language and Thought, Language, Vol. 51, 924-939.
Polanyi, Livia, 1978. The American Story. University of Michigan Department of Linguistics dissertation.
Weiner, James, 1979. The Structure of Natural Explanation: Theory and Application. System Development Corporation, SP-4035.
Weiner, J. BLAH: A System Which Explains its Reasoning, to appear in Artificial Intelligence.
1980
10
The Parameters of Conversational Style Deborah Tannen Georgetown University There are several dimensions along which verbalization responds to context, resulting in individual and social differences in conversational style. Style, as I use the term, is not something extra added on, like decora- tion. Anything that is said must be said in some way; co-occurrence expectations of that "way" constitute style. The dimensions of style I will discuss are: I. Fixity vs. novelty 2. Cohesiveness vs. expressiveness 3. Focus on content vs. interpersonal involvement. Fixity vs. novelty Any utterance or sequence must be identified (rightly or wrongly, in terms of interlocuter's intentions) with a recognizable frame, as it conforms more or less to a familiar pattern. Every utterance and interaction is formulaic, or conventionalized, to some degree. There is a continuum of formulaicness from utterly fixed strings of words (situational formulas: "Happy birth- day," "Welcome home," "Gezundheit") and strings of events (rituals), to new ideas and acts put together in a new way. Of course, the latter does not exist except as an idealization. Even the most novel utterance is to some extent formulaic, as it must use familiar words (witness the absurdity of Humpty Dumpty's assertion that when he uses a word it means whatever he wants it to mean, and notice that he chooses to exercise this li- cense with only one word); syntax (again Lewis Carroll is instructive: the "comprehensibility" of Jabberwocky); intonation; coherence principles (cf Alton Becker); and content (Mills' "vocabularies of motives," e.g.). All these are limited by social convention. Familiarity with the patterns is necessary for the signalling of meaning both as prescribed and agreed upon, and as cued by departure from the pattern (cf Hymes). For example, a situational formula is a handy way to signal familiar meaning, but if the formula is not known the meaning may be lost entirely, as when a Greek says to an American cook, "Health to your hands." If mean- ing is not entirely lost, at least a level of resonance is lost, when reference is implicit to a fixed pattern which is unfamiliar to the interlocutor. For example, when living in Greece and discussing the merits of buy- ing an icebox with a Greek Friend, I asked, "Doesn't the iceman cometh?" After giggling alone in the face of his puzzled look, I ended up feeling I hadn't communicated at all. Indeed I hadn't. Cohesiveness vs. expressiveness This is the basic linguistic concept of markedness and is in a sense another facet of the above distinction. What is prescribed by the pattern for a given context, and what is furnished by the speaker for this instance? To what extent is language being used to signal "busi- ness as usual," as opposed to signalling, "Hey, look at this!" This distinction shows up on every level of verbalization too: lexical choice, pitch and amplitude, prosody, content, genre, and so on. For example, if someone uses an expletive, is this a sign of intense anger or is it her/his usual way of talking? If they reveal a personal experience or feeling, is that evi- dence that you are a special friend, or do they talk that way to everybody? Is overlap a way of trying to take the floor away from you or is it their way of showing interest in what you're saying? Of course, ways of signalling special meaning -- expressiveness -- are also prescribed by cultural convention, as the work of John Gumperz shows. 
The need to distinguish between individual and social differences is thus intertwined with the need to distinguish between cohesive and ex- pressive intentions. One more example will be presented, based on spontaneous conversation taped during Thanks- • giving dinner, among native speakers of English from different ethnic and geographic backgrounds. In responding to stories and comments told by speakers from Los Angeles of Anglican/Irish background, speakers of New York Jewish background often uttered paralinguis- tically gross sounds and phrases ("WHAT!? .... How INTer- esting! .... You're KIDding! .... Ewwwwww!"). In this con- text, these "exaggerated" responses had the effect of stopping conversational flow. In contrast, when similar responses were uttered while listening to stories and comments by speakers of similar background, they had the effect of greasing the conversational wheels, encourag- ing conversation. Based on the rhythm and content of the speakers' talk, as well as their discussion during playback (i.e. listening to the tape afterwards), I could hypothesize that for the New Yorkers such "ex- pressive" responses are considered business as usual; an enthusiasm constraint is operating, whereby a certain amount of expressiveness is expected to show interest. It is a cohesive device, a conventionally accepted way of having conversation. In contrast, such responses were unexpected to the Californians and therefore were taken by them to signal, "Hold it! There's something wrong here." Consequently, they stopped and waited to find out what was wrong. Of course such differences have interesting implications for the ongoing interac- tion, but what is at issue here is the contrast between the cohesive and expressive use of the feature. Focus on content vs. interpersonal involvement Any utterance is at the same time a statementof content (Bateson's 'message') and a statement about the rela- tionship between interlocutors ('metamessage'). In other words, there is what I am saying, but also what it means that I am saying this in this way to this person at this time. In interaction, talk can recognize, more or less explicitly and more or less emphatically (these are different), the involvement between interlocutors. It has been suggested that the notion that meaning can stand alone, that only content is going on, is associa- ted with literacy, with printed text. But certainly relative focus on content or on interpersonal involve- ment can be found in either written or spoken Form. I suspect, for example, that one of the reasons many people find interaction at scholarly conferences difficult and stressful is the conventional recognition of only the content level, whereas in fact there is a lot of involve- merit among people and between the people and the content. Whereas the asking of a question following a paper is conventionally a matter of exchange of information, in fact it is also a matter of presentation of self, as Goffman has demonstrated for all forms of behavior. A reverse, phenomenon has been articulated by Gall Drey- fuss. The reason many people feel uncomfortable, if not scornful, about encounter group talk and "psychobabble" is that it makes explicit information about relation- ships which people are used to signalling on the meta level. 
Relative focus on content gives rise to what Kay (1977) calls "autonomous" language, wherein maximal meaning is encoded lexically, as opposed to signalling it through use of paralinguistic and nonlinguistic channels, and wherein maximal background information is furnished, as opposed to assuming it is already known as a consequence of sharedexperience. Of course this is an idealization as well, as no meaning at all could be communicated if 39 there were no common experience, as Fillmore (197g) amply demonstrates. It ~s crucial, then, to know the operative conventions. As much of my own early work shows, a hint {i.e. indirect communication) can be miss- ed if a listener is unaware that the speaker defines the context as one in which hints are appropriate. What is intended as relatively direct communication can be ta- ken to mean f r more, or simply other, than what is meanS if the listener is unaware that the speaker de- fines the context as one'in which hints are inappropri- ate. A common example seems to be communication between intimates in which one partner, typically the female, assumes, "We know each other so well that you will know what I mean without my saying it outright; all I need do is hint"; while the other partner, typically the male, assumes, "We know each other so well that you will tell me what you want." Furthermore, there are various ways of honoring inter- ~ersonal involvement, as service of two overriding hu- man goals. These have been called, by Brown and Levin- son (1978}, positive and negative politeness, building on R. Lakoff's stylistic continuum from camaraderie to distance (1973) and Goffman's presentational and avoid- ance rituals (1967). These and other schemata recog- nize the universal human needs to l) be connected to other people and 2) be left alone. Put another way, there are universal, simultaneous, and conflicting hu- man needs for community and independence. Linguistic choices reflect service of one or the other of these needs in various ways. The paralinguistically gross listener responses mentioned above are features in an array of devices which I have hypothesized place the signalling load (Gumperz' term) on the need for commu- nity. Other features co-occurring in the speech of many speakers of this style include fast rate of speech; fast turn-taking; preference for simultaneous speech; ten- dency to introduce new topics without testing the con- versational waters through hesitation and other signals; persistence in introducing topics not picked up by oth- ers; storytelling; preference for stories told about personal experience and revealing emotional reaction of teller;'talk about personal matters; overstatement for effect. (All of these features surfaced in the setting of a casual conversation at dinner; it would be pre- mature to generalize for other settings). These and other features of the speech of the New Yorkers some- times struck the Californians present as imposing, hence failing to honor their need for independence. The use of contrasting devices by the Californians led to the impression on some of the New Yorkers that they were deficient in honoring the need for community. Of course the underlying goals were not conceptualized by partici- pants at the time. What was perceived was sensed as personality characteristics: "They're dominating," and "They're cold." Conversely, when style was shared, the conclusion was, "They're nice." Perhaps many of these stylistic differences come down to differing attitudes toward silence. 
I suggest that the fast-talking style I have characterized above grows out of a desire to avoid silence, which has a negative value. Put another way, the unmarked meaning of silence, in this system, is evidence of lack of rapport. To other speakers -- for example, Athabaskan Indians, according to Basso (1972) and Scollon (1980) -- the unmarked meaning of silence is positive.

Individual and social differences

All of these parameters are intended to suggest processes that operate in signalling meaning in conversation. Analysis of cross-cultural differences is useful to make apparent processes that go unnoticed when signalling systems are shared. An obvious question, one that has been indirectly addressed throughout the present discussion, confronts the distinction between individual and cultural differences. We need to know, for the understanding of our own lives as much as for our theoretical understanding of discourse, how much of any speaker's style -- the linguistic and paralinguistic devices signalling meaning -- are prescribed by the culture, and which are chosen freely. The answer to this seems to resemble, one level further removed, the distinction between cohesive vs. expressive features. The answer, furthermore, must lie somewhere between fixity and novelty -- a matter of choices among alternatives offered by cultural convention.

References

Basso, K. 1972. To give up on words: Silence in Western Apache culture, in P.P. Giglioli, ed., Language in social context. Penguin.
Brown, P. & S. Levinson. 1978. Universals in language usage: Politeness phenomena, in E. Goody, ed., Questions and politeness. Cambridge.
Fillmore, C. 1979. Innocence: A second idealization for linguistics, Proceedings of the fifth annual meeting of the Berkeley Linguistics Society.
Goffman, E. 1967. Interaction ritual. Doubleday.
Kay, P. 1977. Language evolution and speech style, in B. Blount & M. Sanches, eds., Sociocultural dimensions of language change. NY: Academic.
Lakoff, R. 1973. The logic of politeness, or minding your p's and q's. Papers from the ninth regional meeting of the Chicago Linguistics Society.
Scollon, R. 1980. The machine stops: Silence in the metaphor of malfunction. Paper prepared for the American Anthropological Association annual meeting.
PHRASE STRUCTURE TREES BEAR MORE FRUIT THAN YOU WOULD HAVE THOUGHT*

Aravind K. Joshi, Department of Computer and Information Science, The Moore School/D2, University of Pennsylvania, Philadelphia, PA 19104
Leon "S." Levy, Bell Telephone Laboratories, Whippany, NJ 07981

EXTENDED ABSTRACT**

* This work was partially supported by NSF grant MCS79-08401.
** Full paper will be available at the time of the meeting.

There is renewed interest in examining the descriptive as well as generative power of phrase structure grammars. The primary motivation has come from the recent investigations in alternatives to transformational grammars [e.g., 1, 2, 3, 4]. We will present several results and ideas related to phrase structure trees which have significant relevance to computational linguistics. We want to accomplish several objectives in this paper.

1. We will give a brief survey of some recent results and approaches by various investigators including, of course, our own work, indicating their interrelationships. Here we will review the work related to the notion of node admissibility, starting with Chomsky, followed by the work by McCawley, Peters and Ritchie, Joshi and Levy, and more recent work of Gazdar. We will also discuss other amendments to context-free grammars which increase the descriptive power but not the generative power. In particular, we will discuss the notion of categories with holes as recently introduced by Gazdar [3]. There is an interesting history behind this notion. Sager's parser explicitly exploits such a convention and, in fact, uses it to do some coordinate structure computation. We suspect that some other parsers have this feature also, perhaps implicitly. We will discuss this matter, which obviously is of great interest to computational linguists.

2. Our work on local constraints on structural descriptions [5, 6], which is computationally relevant both to linguistics and programming language theory, has attracted some attention recently; however, the demonstration of these results has remained somewhat inaccessible to many due to the technicalities of the tree automata theory. Recently, we have found a way of providing an intuitive explanation of these results in terms of interacting finite state machines (of the usual kind). Besides providing an intuitive and a more transparent explanation of our results, this approach is computationally more interesting and allows us to formulate an interesting question: How large a variable set (i.e., the set of nonterminals) is required for a phrase structure grammar, or how much information does a nonterminal encode? We will present this new approach.

3. We will present some new results which extend the "power" of local constraints without affecting the character of earlier results. In particular, we will show that local constraints can include, besides the proper analysis (PA) predicates and domination predicates, more complex predicates of the following form.

(1) (PRED N1 N2 ... Nn)

where N1, N2, ..., Nn are nonterminals mentioned in the PA and/or domination constraint of the rule in which (1) appears and PRED is a predicate which, roughly speaking, checks for certain domination or left-of (or right-of) relationships among its arguments. Two examples of interest are as follows.

(2) (CCOMMAND A B C)

CCOMMAND holds if B immediately dominates A and B dominates C, not necessarily immediately. Usually the B node is an S node.

(3) (LEFTMOSTSISTER A B)

LEFTMOSTSISTER holds if A is the leftmost sister of B.
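As an informal illustration of what such predicates check, the short Python sketch below evaluates (2) and (3) against a toy tree. The Node class and helper names are inventions for this example; they are not part of the formalism or of any implementation described in the paper.

# A minimal sketch of how predicates like (2) and (3) can be checked against a
# labelled tree.  The Node class and helper names are invented for this example.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self

def dominates(b, x):
    # B dominates X if X is a (proper) descendant of B.
    return any(child is x or dominates(child, x) for child in b.children)

def immediately_dominates(b, a):
    return any(child is a for child in b.children)

def ccommand(a, b, c):
    # (CCOMMAND A B C): B immediately dominates A and B dominates C.
    return immediately_dominates(b, a) and dominates(b, c)

def leftmost_sister(a, b):
    # (LEFTMOSTSISTER A B): A is the leftmost sister of B.
    p = b.parent
    return p is not None and len(p.children) > 0 and p.children[0] is a and a is not b

# Example tree: S dominating NP and VP, with VP dominating V and a second NP.
np, v, np2 = Node("NP"), Node("V"), Node("NP")
vp = Node("VP", [v, np2])
s = Node("S", [np, vp])

print(ccommand(np, s, np2))      # True: S immediately dominates NP and dominates NP2
print(leftmost_sister(np, vp))   # True: NP is the leftmost sister of VP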
We will show that introduction of predicates of the type (1) does not change the character of our result on local constraints. This extension of our earlier work has relevance to the formulation of some long distance rules without transformations (as well as without the use of the categories with holes as suggested by Gazdar). We will discuss some of the processing as well as linguistic relevance of these results.

4. We will try to compare (at least along two dimensions) the local constraint approach to that of Gazdar's (specifically his use of categories with holes) and to that of Peters' use of linked nodes (as presented orally at Stanford recently). The dimensions for comparison would be (a) economy of representation, (b) proliferation of categories, by and large semantically vacuous, and (c) computational relevance of (a) and (b) above.

5. Compositional semantics [8] is usually context-free, i.e., if nodes B and C are immediate descendants of node A, then the semantics of A is a composition (defined appropriately) of the semantics of B and the semantics of C. The semantics of A depends only on nodes B and C and not on any other part of the structural description in which A may appear. Our method of local constraints (and to some extent Peters' use of linked nodes) opens the possibility of defining the semantics of A not only in terms of the semantics of B and C, but also in terms of some parts of the structural description in which A appears. In this sense, the semantics will be context-sensitive. We have achieved some success with this approach to the semantics of programming languages. We will discuss some of our preliminary ideas for extending this approach to natural language, in particular, in specifying scopes for variable binding.

6. While developing our theory of local constraints and some other related work, we have discovered that it is possible to characterize structural descriptions (for phrase structure grammars) entirely in terms of trees without any labels, i.e., trees which capture the grouping structure without the syntactic categories (which is the same as the constituent structure without the node labels) [7]. This is a surprising result. This result provides a way of determining how much "information" the nonterminals (syntactic categories) encode and, therefore, clearly it has computational significance. Moreover, to the extent that the claim that natural languages are context-free is valid, this result has significant relevance to learnability theories, because our result suggests that it might be possible to "infer" a phrase structure grammar from just the grouping structure of the input (i.e., just phrase boundaries). Further, the set of descriptions without labels is directly related to the structural descriptions of a context-free grammar; hence, we may be able to specify "natural" syntactic categories.

In summary, we will present a selection of mathematical results which have significant relevance to many aspects of computational linguistics.

SELECTED REFERENCES

[1] Bresnan, J.W., "Evidence for an unbounded theory of transformations," Linguistic Analysis, Vol. 2, 1976.
[2] Gazdar, G.J.M., "Phrase structure grammar," to appear in The Nature of Syntactic Representation (eds. P. Jacobson and G.K. Pullum), 1979.
[3] Gazdar, G.J.M., "English as a context-free language," unpublished ms., 1978.
[4] Gazdar, G.J.M., "Unbounded dependencies and coordinate structure," unpublished ms., 1979.
[5] Joshi, A.K. and Levy, L.S., "Local transformations," SIAM Journal of Computing, 1977.
[6] Joshi, A.K., Levy, L.S., and Yueh, K., "Local constraints in the syntax and semantics of programming languages," to appear in Journal of Theoretical Computer Science, 1980.
[7] Levy, L.S. and Joshi, A.K., "Skeletal descriptions," Information and Control, Nov. 1978.
[8] Knuth, D.E., "Semantics of context-free languages," Mathematical Systems Theory, 1968.
[9] Sager, N., "Syntactic analysis of natural languages," in Advances in Computers (eds. M. Alt and M. Rubinoff), Vol. 8, Academic Press, New York, 1967.
CAPTURING LINGUISTIC GENERALIZATIONS WITH METARULES IN AN ANNOTATED PHRASE-STRUCTURE GRAMMAR Kurt Konolige SRI International = 1. Introduction Computational models employed by current natural language understanding systems rely on phrase-structure representations of syntax. Whether implemented as augmented transition nets, BNF grammars, annotated phrase-structure grammars, or similar methods, a phrase-structure representation makes the parsing problem computatlonally tractable [7]. However, phrase-structure representations have been open to the criticism that they do not capture linguistic generalizations that are easily expressed in transformational grammars. This paper describes a formalism for specifying syntactic and semantic generalizations across the rules of a phrase-structure grammar (PSG). The formalism consists of two parts: 1. A declarative description of basic syntactic phrase-structures and their associated semantic translation. 2. A set of metarules for deriving additional grammar rules from the basic set. Since metarules operate on grammar rules rather than phrase markers, the transformational effect of metarules can be pro-computed before the grammar is used to analyze input, The computational efficiency of a phrase-structure grammar is thus preserved, Metarule formulations for PSGs have recently received increased attention in the linguistics literature, especially in [4], which greatly influenced the formalism presented in this paper. Our formalism differs significantly from [4] in that the metarules work on a phrase-structure grammar annotated with arbitrary feature sets (Annotated Phrase-structure Grammar, or APSG [7]). Grammars for a large subset of English have been written using this formalism [9], and its computational viability has been demonstrated [6]. Because of the increased structural complexity of APSGs over PSGs without annotations, new techniques for applying metarules to these structures are developed in this paper, and the notion of a match between a metarule and a grammar rule is carefully defined. The formalism has been implemented as a computer program and preliminary tests have been made to establish its validity and effectiveness. 2. M etarules Metarules are used to capture linguistic generalizations that are not readily expressed in the phrase-structure rules. Consider the two sentences: 1, John gave a book to Mary 2. Mary was given a hook by John Although their syntactic structure is different, these two sentences have many elements in common. In particular, the predicate/argument structure they describe is the same: the gift of a book by john to Mary. Transformational grammars capture this correspondence by transforming the phrase marker =This research was supported by the Defense Advanced Research Projects Agency under Contract N00039-79-C-0118 with the Naval Electronics Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the U.S. Government. The author is grateful to Jane Robinson and Gary Hendrix for comments on an earlier draft of this paper. for (1) into the phrase marker for (2). The underlying predicate/argument structure remains the same, but the surface realization changes. However, the recognition of transformational grammars is a very difficult computational problem. = By contrast, metarules operate directly on the rules of a PSG to produce more rules for that grammar. 
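To make the contrast concrete before the formalism is introduced, the following toy Python sketch shows the basic idea that a metarule maps grammar rules to further grammar rules ahead of parsing, so that the result is still an ordinary phrase-structure grammar. The rule shapes and the crude passive-style metarule here are invented for illustration and ignore the feature annotations developed below; they are not the paper's formalism.

# Toy illustration only: a metarule maps grammar rules to new grammar rules
# before any parsing happens.  Rule shapes and category names are made up.

def passive_metarule(rule):
    # Roughly: VP -> V NP ...rest  yields  VP -> V-PPL ...rest PP-BY
    lhs, rhs = rule
    if lhs == "VP" and len(rhs) >= 2 and rhs[0] == "V" and rhs[1] == "NP":
        return ("VP", ["V-PPL"] + rhs[2:] + ["PP-BY"])
    return None        # the metarule does not apply to this rule

base_rules = [
    ("S",  ["NP", "VP"]),
    ("VP", ["V", "NP"]),           # "gave a book"
    ("VP", ["V", "NP", "PP"]),     # "gave a book to Mary"
]

# Pre-compute the closure of the rule set under the metarule: the output is
# still a plain phrase-structure grammar, so parsing remains context-free.
rules = list(base_rules)
changed = True
while changed:
    changed = False
    for rule in list(rules):
        derived = passive_metarule(rule)
        if derived is not None and derived not in rules:
            rules.append(derived)
            changed = True

for lhs, rhs in rules:
    print(lhs, "->", " ".join(rhs))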
As long as the number of derived rules is finite, the resulting set of rules is still a PSG, Unlike transformational grammars. PSGs have efficient algorithms for parsing [3]. In a sense, all of the work of transformations has been pushed off into a pre-processing phase where new grammar rules are derived. We are not greatly concerned with efficiency in pre-processing, because it only has to be done once. There are still computationa! limitations on PSGs that must be taken into account by any metarule system. Large numbers of phrase-structure rules can seriously degrade the performance of a parser, both in terms of its running time == , storage for the rules, and the ambiguity of the resulting parses [6]. Moreover, the generation of large numbers of rules seems psychologically implausible. Thus the two criteria we will use to judge the efficacy of metarules will be: can they adequately capture linguistic generalizations, and are they ¢omputationally practicable in terms of the number of rules they generate. The formalism of [4] is especially vulnerable to criticism on the latter point, since it generates large numbers of new rules. *== 3. Representation An annotated phrase-structure grammar (APSG) as developed in [7] is the target representation for the metarules. The core component of an APSG is a set of context-free phrase-structure rules. As is customary, these rules are input to a context-free parser to analyze a string, producing a phrase-structure tree as output. In addition, the parse tree so produced may have arbitrary feature sets, called annotations, appended to each node. The annotations are an efficient means of incorporating additional information into the parse tree. Typically, features will exist for syntactic processing (e.g., number agreement), grammatical function of constituents (e.g., subject, direct and indirect objects), and semantic interpretation. Associated with each rule of the grammar are procedures for operating on feature sets of the phrase markers the rule constructs. These procedures may constrain the application of the rule by testing features on candidate constituents, or add information to the structure created by the rule, based on the features of its constituents. Rule procedures are written in the programming language LISP, giving the grammar the power to recognize class 0 languages. The use of arbitrary procedures and feature set annotations makes APSGs an *There has been some success in restricting the power of transformational grammars sufficiently to allow a recognizer to be built; see [8]. =*Shell [10] has shown that, for a simple recursive descent parsing algorithm, running time is a linear function of the number of rules. For other parsing schemes, the relationship between the number of rules and parsing time is unclear. ='~SThis is without considering infinite schemas such as the one for coniunction reduction. Basically, the problem is that the formalism of [4] allows complex features [21 to define new categories, generating an exponential number of categories (and hence rules) with respect to the number of features. 4.3 extremely powerful and compact for-alism for representing a language, similar to the earlier ATN formalisms [1]. An example of how an APSG can encode a large subset of English is the DIAGRAM grammar [9]. It is unfortunately the very power .of APSGs (and ATNs) that makes it difficult to capture linguistic generalizations within these formalisms. 
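Before turning to the notation, it may help to see one way an annotated rule could be represented: a context-free core, a set of restrictions tested on the constituents' feature sets, and a set of assignments that build or modify feature sets. The Python sketch below is only a schematic stand-in for the APSG machinery (which is LISP-based in the systems cited); all class and field names are invented.

# A minimal sketch, not the implemented APSG format: a rule is a context-free
# core plus restrictions (RSET) over the constituents' feature sets and
# assignments (ASET) that annotate the phrase markers the rule builds.

from dataclasses import dataclass, field

@dataclass
class PhraseMarker:
    category: str
    features: dict = field(default_factory=dict)

@dataclass
class Rule:
    lhs: str
    rhs: list                                   # constituent category names
    rset: list = field(default_factory=list)    # restriction predicates over constituents
    aset: list = field(default_factory=list)    # feature assignments

    def apply(self, constituents):
        # Build a new phrase marker only if the constituents fit and all restrictions hold.
        if [c.category for c in constituents] != self.rhs:
            return None
        if not all(test(constituents) for test in self.rset):
            return None
        node = PhraseMarker(self.lhs)
        for assign in self.aset:
            assign(node, constituents)
        return node

# S -> NP VP with number agreement (RSET) and subject assignment (ASET).
s_rule = Rule(
    lhs="S", rhs=["NP", "VP"],
    rset=[lambda cs: cs[0].features.get("NBR") == cs[1].features.get("NBR")],
    aset=[lambda node, cs: cs[1].features.update(SUBJ=cs[0])],
)

np = PhraseMarker("NP", {"NBR": "SG"})
vp = PhraseMarker("VP", {"NBR": "SG"})
print(s_rule.apply([np, vp]))    # an S marker; the VP's feature set now records SUBJ = the NP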
Metarules for transforming one annotated phrase-structure rule into another must not only transform the phrase-structure, but also the procedures that operate on feature sets, in an appropriate way. Because the transformation of procedures is notoriously difficult,* one of the tasks of this paper will be to illustrate a declarative notation describing operations on feature sets that is powerful enough to encode the manipulations of features necessary for the grammar, but is still simple enough for metarulos to transform. 4. Notation Every rule of the APSG has three parts: 1. A phrase-structure rule; 2. A restriction set (RSET) that restricts the applicability of the rule, and 3. An assignment set (ASET) that assigns values to features. The RSET and ASET manipulate features of the phrase marker analyzed by the rule; they are discussed below in detail. Phrase-structure rules are written as: CAT -> C 1 C 2 ... Cn where CAT is the dominating category of the phrase, and C 1 through C n are its immediate constituent categories. Terminal strings can be included in the rule by enclosing them in double quote marks. A feature set is associated with each node in the parse tree that is created when z string is analyzed by the grammar. Each feature has a name (a string of uppercase alphanumeric characters) and an associated value. The values a feature can take on (the domain of the feature) are, in general, arbitrary. One of the most useful domains is the set "÷,-,NIL", where Nil is the unmarked case; this domain corresponds ~ to the binary features used in [2). More complicated domains can be used; for example, a CASE feature might have as its domain the set of tuplos ~<1 SG>,<2 SG>,c3 SG>,<I PL>,<2 PL>,<3 PL>'~. Most interesting are those features whose domain is a phrase marker. Since phrase markers are just data structures that the parser creates, they can be assigned as the value of a feature. This technique is used to pass phrase markers to various parts of the tree to reflect the gr;llmmatical and semantic structure of the input; examples will be given in later sections. We adopt the following conventions in referring to features and their values: - Features are one-place functions that range over phrase markers constructed by the phrase-structure part of a grammar rule. The function is named by the feature name. - These functions are represented in prefix form, e.g., (CASE NP) refers to the CASE feature of the NP constituent of a phrase marker. In cases where there is more than one constituent with the same category name, they will be differentiated by a "~/" suffix, for example, VP-> V NP§I NP~2 *it is sometimes hard to even understand what it is that a procedure does, since it may involve recursion, side-effects, and other complications. has two NP constituents. -A phrase marker is assumed to have its immediate constituents as features under their category name, e.|., (N NP) refers to the N constituent of the NP. - Feature functions may be nested, e.g., (CASE (N NP)) refers tO the CASE feature of the N constituent of the NP phrase marker. For these nestings, we adopt the simpler notation (CASE N NP), which is assumed to be right-associative. -The value NIL always implies the unmarked case. At times it will be useful to consider features that are not explicitly attached to a phrase marker as being present with value NIL. -A constant term will be written with a preceding single quote mark, e.s. , tSG refers to the constant token SG. 4.1. 
Restrictions The RSET of a rule restricts the applicability of the rule by a predication on the features of its constituents. The phrase markers used as constituents must satisfy the predications in the RSET before they will he analyzed by the rule to create a new phrase marker. The most useful predicate is equality: a feature can take on only one particular value to be acceptable. For example, in the phrase structure rule: S -> NP VP number agreement could be enforced by the predication: (NBR NP) - {NBR VP) where NBR is a feature whose domain is SG,PL~.* This would restrict the NBR feature on NP to agree with that on VP before the S phrase was constructed. The economy of the APSG encoding is seen here: only a single phrase-structure rule is required. Also, the linguistic requirement that subjects and their verbs agree in number is enforced by a single statement, rather than being implicit in separate phrase structure rules, one for singular subject-verb combinations, another for plurals. Besides equality, there are only three additional predications: inequality (#), set membership (e) and set non-membership (It). The last two are useful in dealing with non-binary domains. As discussed in the next section, tight restrictions on predications are necessary if metarules are to be successful in transforming grammar rules. Whether these four predicates are adequate in descriptive power for the grammar we contemplate remains an open empirical question; we are currently accumulating evidence for their sufficiency by rewriting DIAGRAM using just those predicates. Restriction predications for a rule are collected in the RSET of that rule. All restrictions must hold for the rule to be applicable. As an illustration, consider the subcategorizatlon rule for dltransitlve verbs with prepositional objects (e.g.. eJohn gave a book to Mary"): VP -> V NP PP RSET: (TRANS V) = ~DI; (PREP V) : (PREP PP) The first restriction selects only verbs that are marked as dltransitive; the TRANS feature comes from the lexical entry of the verb. Dltransitiv verbs with prepositional arguments are always subcategorized cy the particular preposition used, e.g., "give a always uses Ire" for its prepositional argument. *How NP and VP categories could "inherit" the NBR feature from their N and V constituents is discussed in the next section. 44 The second predication restricts the preposition of the PP for a given verb. The PREP feature of the verb comes from its lexical entry, and must match the preposition of the PP phrase* 4.2. Assignments A rule will normally assign features to the dominating node of the phrase marker it constructs, based on the values of the constituents f features. For example, feature inheritance takes place in this way. Assume there is a feature NBR marking the syntactic number of nouns. Then the ASET of a rule for noun phrases might be: NP -> DET N ASET: (NBR NP) := (NBR N) This notation is somewhat non-standard; it says that the value of the NBR function on the NP phrase marker is to be the value of the NBR function of the N phrase marker. An interesting application of feature assignment is to describe the grammatical functions of noun phrases within a clause. Recall that the domain of features can be constituents themselves. Adding an ASET describing the grammatical function of its constituents to the ditransitive VP rule yields the following: VP -> V NP PP ASET: (DIROBJ VP) := (NP VP); (INDOBJ VP) := (NP PP). 
This ASET assigns the DIROBJ (direct object) feature of VP the value of the constituent NP. Slmilarly~ the value of INDOBJ (indirect object) is the NP constituent of the PP phrase. A rule may also assign feature values to the constituents of the phrase marker it constructs. Such assignments are context sensitive, because the values are based on the context in which the constituent Occurs.*" Again, the most interesting use of this technique is in assigning functional roles to constituents in particular phrases. Consider a rule for main clauses: S -> NP VP ASET: (SUBJ VP) := (NP S), The three features SUBJ, DIROBJ, and INDOBJ of the VP phrase marker will have as value the appropriate NP phrase markers, since the DIROBJ and INDOBJ features will be assigned to the VP phrase marker when it is constructed. Thus the grammatical function of the NPs has been identified by assigning features appropriately. Finally, note that the grammatical Functions were assigned to the VP phrase marker. By assembling all of the arguments at this level, it is possible to account for bounded deletion phenomenon that are lexically controlled. Consider subcategorization for Equi verbs, in which the subject of the main clause has been deleted from the infinitive complement ("John wants to gem): =Note that we are not considering here prepositional phrases that are essentially mesa-arguments to the verb, dealing with time, place, and the like. The prepositions used for mesa-arguments are much more variable, and usually depend on semantic considerations. "*The assignment of features to constituents presents some computational problems, since a context-free parser will no longer be sufficient to analyze strings. This was recognized in the original version of APSGs [7], and a two-pass parser was constructed that first uses the context-free component of the grammar to produce an initial parse tree, then adds the assignment of features in context. VP-> V INF ASET: (SUBJ INF) := (SUBJ'VP) Here the subject NP of the main clause has been passed down to the VP (by the S rule), which in turn passes it to the infinitive as its subject. Not all linguistic phenomenon can be formulated so easily with APSGs; in particular, APSGs have trouble describing unbounded deletion and conjunction reduction. Metarule formulations for the latter phenomena have been proposed in [5], and we will not deal with them here. 5. Metarules for APSGs Metarules consist of two parts: a match template with variables whose purpose is to match existing grammar rules; and an instantiatlon template that produces a new grammar rule by using the match template~s variable bindings after a successful match. Initially, a basic set of grammar rules is input; metarules derive new rules, which then can recursively be used as input to the metarules. When (if) the process halts, the new set of rules, together with the basic rules, comprises the grammar. We will use the following notation for metarules: MF => IF CSET: C1, C2, .. Cn where MF is a _matchln| form, IF is an instantiation form, and CSET is a set of predications. Both the MF and IF have the same form as grammar rules, but in addition, they can contain variables. When an MF is matched against a grammar rule, these variables are bound to different parts of the rule if the match succeeds. The IF is instantlated with these bindings to produce a new rule. 
To restrict the application of metarules, additional conditions on the variable bindings may be specified (CSET); these have the same form as the RSET of grammar rules, hut they can mention the variables matched by the MF. Metarules may be classified into three types: I. Introductory metarules, where the MF is empty (=> IF). These metarules introduce a class of grammar rules. 2. Deletion metarules, where the IF is empty (MF =>). These delete any derived grammar rules that they match. 3. Derivation metarules, where both MF and IF are present. These derive new grammar rules from old ones. There are linguistic generalizations that can he captured most perspicuously by each of the three forms. We will focus on derivation metarules here, since they are the most complicated. 6. Matching An important part of the derivation process is the definition of a match between a metarule matching form and a grammar rule. The matching problem is complicated by the presence of RSET and ASET predications in the grammar rules. Thus, it is helpful to define a match in terms of the phrase markers that will be admitted by the grammar rule and the MF. We will say that an MF matches a grammar rule just in case it admits at least those phrase markers admitted by the grammar rule. This definition of a match is sufficient to allow the formulation of matching algorithms for grammar rules complicated by annotations. We divide the matching process into two parts: matching phrase-structures, and matching feature sets. Both parts must succeed in order for the match to succeed. 45 6.1. Matching Phrase-structures For phrase-structures, the definition of i match can be replaced by a direct comparison of the phrase-structures of the MF and grammar rule. Variables in the MF phrase-structure are used to indicate Idofllt care a parts of the grammar rule phrase-structure, while constants must match exactly. SIn|le lower case letters are used for variables that must match single categories of the grammar rule. A typical MF might be: S ->.a VP which matches S -> NP VP with a=NP; S -> SB VP with IBSB; S-> 'IT' VP with aJ'IT'; etC. A variable that appears more than once in an MF must have the same binding for each occurrence for a match to be successful, e.$., VP -> V a a matches VP -> V NP NP with a=NP but not VP -> V NP PP Single letter variables must match a single category in a grammar rule. Double letter variables are used to match a number of consecutive Catllorils (including none) fR the rule. We have: VP -> V uu matching VP -> V with UUm(); VP -> V NP with uu"(NP); VP -> V NP PP with uuu(NP PP); etc. Note that double letter variables are bound to an ordered list of elements fTom ~he matched rule. Because of this characteristic, a~ MF with more thin one double letter variable may match t rule in several different ways: VP -> V uu vv matches VP -> V NP PP with uu'(), vvs(NP Pp); uu=(N P), vvm(PP ); uum(NP VP), vv-(). All of these are considered to be valid, independent matches. Double and single letter variables may be intermixed freely in an MF. While double letter variables match multiple categories In l phrase structure rule, string variables match parts of a category. String variables occur in both double and single letter varieties; as expected, the former match any number of consecutive characters, while the litter match sln|le characters. 
String variables are assumed when an MF category contains i mixture of upper and lower case characters, e.g.: Vt -> V NP~la NPuu matches VP -> V NP~I NP with a=1, uu=(); VP -> V NP/~I NP~2 with aal, uu=(# 2); etc. String variables are most useful for matching category names that may use the ~ convention. 6.2. Feature Matching So far variables have matched only the phrase-structure part of grammar rules, and not the feature annotations. For feature matching, we must return to the original definition of matching based on the admissibility of phrase markers. The RSET of a grammar rule is a closed formula involvlng the feature sees of the phrase marker constructed by the rule; let P stand for this formula. If P is true for a given phrase marker, then that phrase marker is accepted by the rule; if not, It ts rejected. Similarly, the RSET of a matching form is an open formula on the feature sets of the phrase marker; let R(xl,x2...Xn) stand for this formula, where the x I are the variables of the RSET. For the MF;s restrictions to match those of the grammar rule, we must be able to prove the formula: P => tea 1)(EX2)_.(EXn) R(xl,x2,-.Xn) That Is. whenever P admits a phrase marker, there exists some blndin| for R0s free variables that also admits the phrase marker. Now the importance of restricting the form of P and R can be seen. Proving that the above implication holds for general P and R can be a hard problem, requiring, for example, a resolution theorem prover. By restricting P and R to simple conjunctions of equalities, inequalities, and set membership predicates, the match between P and R can be performed by a simple and efficient algorithm. 6.3. Instanttation When a matarule matches a grammar rule, the CSET of the metaruia Is evaluated to see if the metaruie can indeed be applied. For example, the MF: VP-> "BE" xP CSET: x ~t 'V will match any rule for which x is not bound to V. When an MF matches a rule, and the CSET is satisfied, the Instantlatlon form of the metarule is used to produce i new rule. TN~ variables of the IF are instantiated with their values from the match, producing I new rule. In addition, restriction and assignment features that do not conflict with the IF's features are carried over from the rule that matched. This latter is a very handy property of the instanttation, since that is usually what the metarule writer desires. Consider metarule that derives the subject-aux inverted form of a main clause with a finite verb phrase: grammar rule: S -> NP AUX VP RSET: (NBR NP) = (NBR AUX); (FIN VP) = i+; metarule: S-> NP AUX VP S~N>-> AUX NP VP if features were not carried over during an instan.iation, the result of matching and Instantlating the metarule would be: SAI -> AUX NP VP This does not preserve number agreement, nor does it restrict the VP to being finite. Of course, the metarule could be rewritten to have the correct restrictions in the IF, but this would sharply curb the utility of the metarules, and lead to the proliferation of metaruies with slightly different RSETs. 46 7. An Example: Dative Movement and Passive We are now ready to give a short example of two met,rules for dative movement and passive transformations. The predicate/argument structure will be described by the feature PA, whose value is a list: (V NP 1 Np 2 ...) where V is the predicating verb, and the NPs are its arguments. The order of the arguments is significant, since: ("gave" "John" "a book" "Mary") <=> gift of a book by John to Mary 'gave" "John' "Mary m "a book') <=> ?? 
gift of Mary to a hook by John Adding the PA feature, the rule for ditransltlve verbs with prepositional objects becomes: VP -> V NP PP RSET: (TRANS V) = IDI; (PREP V) = (PREP PP); ASET: (PA VP) := '((V VP) (SUBJ VP)(NP VP)(NP PP)) The SUBJ feature is the subject NP passed down by the S rule. 7.1. Dative Movement In dative movement, the prepositional NP becomes a noun phrase next to the verb: 1. John gave a book to Mary => 2. John gave Mary a book The first object NP of (2) fills the same argument role as the prepositional NP of (1). Thus the dative movement met,rule can be formulated as follows: met.rule DATMOVE VP -> V uu PP ASET: (PA VP) := '( a b c (NP PP)) => VP -> V NP#D uu RSET: (DATIVE V) = t+; (PREP V) : NIL; ASET: (PA VP) := '(ab c (NP#D VP)) DATMOVE accepts VPs with a trailing prepositional argument, and moves the NP from that argument to just after the verb. The verb must be marked as accepting dative arguments, hence the DATIVE feature restriction in the RSET of the instantlation form. Also, since there is no longer a prepositional argument, the PREP feature of the VP doesn't have to match it. As for the predicate/argument structure, the NP#D constituent takes the place of the prepositional NP in the PA feature. DATMOVE can be applied to the dltransltlve VP rule to yield the dltransitive dative construction. The variable bindings are: uu = (NP); a : (v vP) b : (SUBJ vp); c : (NP VP}. Instantlating the IF then gives the dative construction: VP -> V NP#D NP RSET: (DATMOVE V) = r+; (TRANS V) = 'Dis ASET: (PA VP) := '(( V VP) (SUBJ VP) (NP VP) (Np~ID VP)) There are other grammar rules that dative movement will apply 47 to, for example, verbs with separable particles: Make up a story for me => Make me up a story. This is the reason the double-letter variable "uu' was used in DATMOVE. As long as the final constituent of a VP rule is a PP, DATMOVE can apply to yield a dative construction. 7.2. Passive In the passive transformation, the NP immediately following the verb is moved to subject position; the original subject moves to an age.rive BY-phrase: (1) John gave a book to Mary => (2) A book was given to Mary by John. A metarule for the passive transformation is: met.rule PASSIVE VP -> V NPuu vv ASET: (PA VP) :: ~(a (SUBJ VP) bb (NPuu VP) cc); => AP -> V PPL vv PP#A RSET: (PREP PP#A) = ~BY; ASET: (PA VP) :: '(a (NP PP#A) bb (SUBJ VP) cc). PASSIVE deletes the NP immediately following the verb, and adds a BY-prepositional phrase at the end. PPL is a past participle suffix for the verb. In the predicate/argum=nt structure, the BY-phrase NP substitutes for the original subject, while the new subject is used in place of the original object NP. Applying PASSIVE to the ditransittve rule yields: AP -> V PPL PP PP#A RSET: (TRANS V) = 'DIs (PREP V) = (PREP PP); ASET: (PA VP) := '((V VP) (NP PP#A) (SUBJ VP) (NP PP)); e.g.. "A book was given to Mary by John" will be analyzed by this rule to have a PA feature of ("givea mJohn~ na book" "Mary"), which is the same predicate/argument structure as the corresponding active sentence. PASSIVE can also apply to the rule generated by DATMOVE to yield the passive form of VpIs with dative objects: AP -> V PPL NP PP#A RSET: (DATMOVE V) = f+; (TRANS V) = 'DIs ASET: (PA VP) := '((V VP) (NP PP#A) {NP VP) (SUBJ VP)); e.g., "Mary was given a book by John". 8. Implementation A system has been designed and implemented to test the validity of this approach. 
It consists of a matcher/instantiator for met,rules, along with an iteration loop that applies all the met.rules on each cycle until no more new rules are generated. Met.rules fur verb subcategorization and finite and non-finite clause structures have been written and input to the system. We were especially concerned: - To check the perspicuity of metarules for describing significant fragments of English using the above representation for grammar rules. - To check that a reasonably small number of new grammar rules were generated by the metarules for these fragments. Both of these considerations are critical for the performance of natural language processing systems. Preliminary tests indicate that the system satisfies both these concerns; indeed, the metarules worked so well that they exposed gaps in a phrase-structure grammar that was painstakingly developed over a five year period and was thought to be reasonably complete for a large subset of English 19]. The number of derived rules generated was encouragingly small: Subcategorizatlon: 1 grammar rule 7 metarules -> 20 derived rules Clauses: 8 grammar rules 5 metarules => 25 derived rules 9. Conclusions Metarules, when adapted to work on an APSG representation, are a very powerful tool for specifying generalizations in the grammar. A great deal of care must be exercised in writing metarutes, because it is easy to state generalizations that do not actually hold. Also, the output of metarutes can be used again aS input to the metarules, and this often produces surprising results. Of course, language is complex, and it is to be expected that describing Its generalizations will also be a difficult task. The success of the metarule formulation in deriving a small number of new rules comes in part from the Increased definitional power of APSGs over ordinary PSGs. For example, number agreement and feature inheritance can be expressed simply by appropriate annotations in an APSG, but require metarules on PSGs. The definitional compactness of APSGs means that fewer metarules are needed, and hence fewer derived rules are generated. 3. 4. 5. 6. 7. 8, 9. 10. REFERENCES W. Woods, 'An Experimental Parsing System for Transition Network Grammars, ~ R. Rustin (ed.), Natural Lan~uase Processins, Prentice-Hall, Englewood Cliffs, New Jersey, 1973. N. Chomsky. Aspects of the Theory of 5.,yntax, MIT Press, Cambridge, Mass., 1965. J. Early, "An Efficient Context Free Parsing Algorithm," CAC_M, Vol. 13 (1970) 94-I02. Gerald Gazdar, 'English as a Context-Free Language" University of Sussex, (unpublished paper, April, 1979). Gerald Gazdar, "Unbounded Dependencies and Coordinate Structure' University of Sussex, (submitted to Inquiry, October, 1979). Kurt Konollge, 'A Framework for a Portable NL Interface to Large Data Bases, m Technical Note 197, Artificial Intelligence Center, SRI International, Menlo Park, California (October 1979). William H. Paxton, 'A Framework for Speech Understanding,' Technical Note 142, Artificial Intelligence Center, $RI international, Menlo Park, California (June 1977}. S.R. Petrtck, 'Automatic Syntactic and Semantic Analysis, e Proceedln|s of the Interdisciplinary Conference on Automated Text Processing, {November 1976). Jane Robinson, 'DIAGRAM: A Grammar for Dialogues.' Technical Note 20$, Artificial Intelligence Center, SRI International, Menlo Park, California {February 1980). B.A. Shell, 'Observations on Context-Free Parsing,' Statistical Methods in Linl|uistics, (1976). 48
Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition Robert Cregar Berwick MIT Artificial Intelligence Laboratory, Cambridge, MA 1. Introduction: Constraints And Language Acquisition A principal goal of modern linguistics is to account for the apparently rapid and uniform acquisition of syntactic knowledge, given the relatively impoverished input that evidently serves as the basis for the induction of that knowledge - the so-called projection problem. At least since Chomsky, the usual response to the projection problem has been to characterize knowledge of language as a grammar, and then proceed by restricting so severely the class of grammars available for acquisition that the induction task is greatly simplified - perhaps trivialized. consistent with our lcnowledge of what language is and of which stages the child passes through in learning it." [2, page 218] In particular, ahhough the final psycholinguistic evidence is not yet in, children do not appear to receive negative evidence as a basis for the induction of syntactic rules. That is, they do not receive direct reinforcement for what is no_..~t a syntactically well-formed sentence (see Brown and Hanlon [3] and Newport, Gleitman, and Gleitman [4] for discussion). Á If syntactic acquisition can proceed using just positive examples, then it would seem completely unnecessary to move to any enrichment of the input data that is as yet unsupported by psycholinguistic evidence. 2 The work reported here describes an implemented LISP program that explicitly reproduces this methodological approach to acquisitio,~ - but in a computational setting. It asks: what constraints on a computational system are required to ensure the acquisition of syntactic knowledge, given relatively plausible restrictions on input examples (only positive data of limited complexity). The linguistic approach requires as the output of acquisition a representation of adult knowledge in the form of a grammar. In this research, an existing parser for English, Marcus' PARSIFAL [1], acts as the grammar. PARSIFAL divides neatly into two parts: an interpreter and the grammar rules that the interpreter executes. The grammar rules unwind the mapping between a surface string and an annotated surface structure representation of that string. In part this unraveling is carried out under the control of a base phrase structure component; the base rules direct some grammar rules to build canonically-ordered structure, while other grammar rules are used to detect deviations from canonical order. We mimic the acquisition process by fixing a stripped-down version of the PARSIFAL interpreter, thereby assuming an initial set of abilities (the basic PARSIFAL data structures, a lexicon, and a pair of context-flee rule schemas). The simple pattern-action grammar rules and the details of the base phrase structure rules are acquired in a rule-by-rule fashion by attempting to parse grammatical sentences with a degree of embedding of two or less. The acquisition process itself is quite straightforward. Presented with a grammatical sentence, the program attempts to parse it. If all goes well, the rules exist to handle the sentence, and nothing happens besides a successful parse. However, suppose that the program reaches a point in its attempt where no currently known grammar rules apply. At this point, an acquisition procedure is invoked that tries to construct a single new rule that does apply. 
If the procedure is successful, the new rule is saved; otherwise" the parse is stopped and the next input sentence read in. Finally, since the program is designed to glean most of its new rules from simple example sentences (of limited embedding), its developmental course is at least broadly comparable to what Pinker [2] calls a "developmental" criterion: simple abilities come first, :rod sophistication with syntax emerges only later. The first rules acquired handle simple" few-word sentences and expand the basic phrase structure for English. Later on, rules to deal with more sophisticated phrase structure, alterations of canonical word order, and embedded sentences can be acquired. If an input datum is too complex for the acquisition program to handle at its current stage of syntactic knowledge, it simply parses what it can, and ignores the rest. 2. Constraints Establish the Program's Success 2. I Current Status of the Acquisition Program To date, the accomplishments of the research are two-fold. First, from an engineering standpoint, the program succeeds admirably; starting with no grammar rules and just two base schema rules, the currently implemented version (dubbed LPARSIFAL) acquires from positive example sentences many of the grammar rules in a "core grammar" of English originally hand-written by .Marcus. The currently acquired rules are sufficient to parse simple declaratives, much of the English auxiliary system including auxiliary verb inversion, simple passives, simple wh.questions (e.g., Who did John kiss.'), imperatives, and negative adverbial preposing. Carrying acquisition one step further, by starting with a relatively restricted set of context-free base rule schemas - the X-bar system of Jackendoff [7] - the program can also easily induce the proper phrase structure rules for the language at hand. Acquired base rules include those for noun phrases, verb phrases, prepositional phrases, and a substantial part of the English auxiliary verb system. The decision to limit the program to restricted sorts of evidence for its acquisition of new rules - that is, positive data of only limited complexity - arises out of a commitment to develop the weakest possible acquisition procedure that can still successfully acquire syntactic rules. This co,nmitment in turn follows from the position (cogently stated by Pinker) that "any plausible theory of language learning will have to meet an unusually rich set of empirical conditions. The theory ... will have to be [. But clfildren might (and seem to) receive negative evidence for what i~ a ,~emantically well-formed ,~entence. See Brown and Hanlon [3]- 2. There is a another rea.,on for rejecting negative examples as inductive evidence: from farina| results first established by Gold [5], it is known that by pairing positive and negative example string.~ with the appropriate labels "grammaticaC and "ungrammatical" one can learn "almost any" language. Thus. enriching the input to admit negative evidence broadens the class of "l~'~ssibly learnable languages" enormously. (Explicit instruction and negative examples are often closely yoked. Compare the necessity for a benign teacher in Wlnston',~ blocks world learning program [6'j.) 49 Of course, many rules lie beyond the current program's reach. PARSIFAL employed dual mechanisms to distinguish Noun Phrase ;rod wh-moveznents: at present, LPARSIFAL has only a single device to handle all constituent movements. 
Lacking a distinguished facility to keep track of wh-movements, LPARSIFAL cannot acqt, ire the rules where these movements might interact with Noun Phrase movements. Current experiments with the system include adding the wh facility back into the domain of acquisition. Also, the present model cannot capture all "knowledge of language" in the sense ;ntended by generative grammarians. For example, since the weakest form of the acquisition procedure does not employ backup, the program cannot re-analyze "garden path" sentences and so deduce that they are grammatically well-formed) In part, this deficit arises because it is not perfectly clear to what extent knowledge of parsing encompasses al_! our knowledge about language. 4 2.2 Constraints and the Acquisition Program However, beyond the simple demonstration of what can and cannot be acquired, there is a second, more important accomplishment of the research. This is the demonstration that constraint is an essential element of the acquisition program's success. To ease the computational burden of acquiring grammar rules it was necessary to place certain constraints on the operation of the model, tightly restricting both the class of h.vpothesizable phrase structure rules and the class of possible gramlnar rules. The constraints on grammar rules fall into two rough groups: consteainrs o,x rule application and constraints on rule form. The constraints on rule application can be formulated as specific /oca/i O, principles that govern the operation of the parser and the acquisition procedure. Recall that in Marcus' PARSIFAL grammar rules consist of simple production rules of the form If <pattern> then <action>, where a pattern is a set of feature predicates that must be true of the current environment of the parse i,~ order for an action to be taken. Actions are the basic tree-building ol~raTions that construct the desired output, a (modified) annotated surface structure tree (in the sense of Fiengo [S] or Chomsky [9]). Adopting the operating principles of the original PARSIFAL, grammar rules can trigger only by successfully matching features of the (finite) local em@onment of the parse, an environment that includes a small, three-cell look-ahead buffer holding • "already-built constituents whose grammatical function is as yet 3. A related issue is that the current procedure do~ not acquire the PARSIFAL "diagnostic" grammar rules that exploit look.ahead. Typically, diagnostic rules us.- the specific features of lexical items far ahead in the Io~k-ahead buffer to decide between alternative courts of action. However. I~y extendih, the acqui~;tion procedure -- allowing it to re-analyze apparently "bad" ~ntences in a careful mode and adding the stipui;Jti,~n that more "specific" rules should take priority over more "general" rules (an c, ften-made assumption for production systems) -- one can begin to aecomodate the acquisition of diagnostic rules, and in fact provide a kind of developmental theory for such rules. Work testing this idea is underway. 4. In mo.,t too<lets, the string-to-structural description mapping implied by the directionality of parsing is not "neutral" with respect speakers and listeners. undecided (e.g., a noun phrase that is not yet known to be the subject of a sentence) or single words. It is Marcus' claim that the addition of the look-ahead buffer enables PARSIFAL to always correctly decide what to do next - at least for English. 
The parser uses the buffer to make discriminations that would otherwise appear to require backtracking. Marcus dubbed this "no bocktracking" stipulation the Determinism Hygothesis. The Determiqism Hypothesis crucially entails that all structure the parser builds is correct - that already-executed grammar rules have performed correctly. This fact provides the key to easy acquisition: if parsing runs into trouble, the difficulty can be pinpointed as the current locus of parsing, and no_._tt with any already-built structure (previously executed grammar rules). In brief, any errors are assumed to be locally and immediately detectable. This constraint on error detectability appears to be a computational analogue of the restrictions on a transformational system advanced by Wexler and his colleagues. (see Culicover ;rod Wexler [I0]) In their independent but related formal mathematical modelling, they have proved that a finite error detectability restrict/on suffices to ensure the learnability of a tr;msformational grammar, a fact that might be taken as independent support for the basic design of LPARSIFAL. Turning now to constraints on rule form, it is easy to see that any such constraints wilt aid acquisition directly, by cutting down the space of rules that can be hypothesized. To introduce the constraints, we simply restrict the set of possible rule <patterns> and <actions>. The trigger patterns for PARSIFAL rules consist of just the items in the look-ahead buffer and a local (two node) portion of the parse tree under construction- five "cells" in all. Thus, patterns for acquired rules can be assumed to incorporate just five cells as well. As for actions, a major effort of this research was to demonstrate that just three or so basic operations are sufficient to construct the annotated surface structure parse tree, thus eliminating many of the grammar rule actions in the original PARSIFAL. Together, the restrictions on rule patterns and actions ensure that the set of rules available for hypothesis by the acquisition program is finite. The restrictions just described constrain the space of available gr:,mmnr rules. However, in the case of phrase structure rules :ldditional strictures are necessary to reduce the acquisitiona[ burden. LPARSIFAL depends heavily on the X.bar theory of phrase structure rules [7] to furnish the necessary constraints. In the X-bar theory, ,all phrase structure rules for human grammars are assu,ned to be expansions of just a few schemas of a rather specific form: for example, XP->...X ..... Here, the "X" stands for an oblig;,tory phrase structure category (such as a Noun, Verb, or Preposition): the ellipses represent slots for possible, but optional "XP" elements or specified grammatical formatives. Actual phrase structure rules ;sre fleshed out by setting the "X" to some known category and settling upon some way to fill out the ellipses. For example, by setting X=N(oun) and allowing some other "XP" to the left of the Noun (call it the category "Determiner") we would get one verson 3f a Noun Phrase rule, NP-->Determiner N . In this case, the problem for the learner must include figuring out what items are permitted to go in the slots on either side of the "N". Note that the XP schema tightly constrains the set of possible phrase structure rules; for instance, no rule of the form, XP-->X X would be admissible, immediately excluding such forms as, Noun Phrase->Noun Noun. 
It is this rich source of constraint that makes the 50 induction of the proper phrase structure from positive examples feasible; section 4 below illustrates how this induction method works in practice. Finally, it should be pointed out that the category names like "N" and "V" are just arbitrary labels for the "X" categories; the standard approach of X-bar theorists is to assume that the names st:md for bundles of distinctive features that do the actual work of classifying tokens into one category bin or another. All important area for future research will be to formulate precise models of how the feature system evolves in interaction with lexical and syntactic acquisition. This research completed so far assumes that the acquisition procedure is initially provided with just the X-bar schema described above along with an ability to categorize lexical items ;is noun.c, ~'erbs, or other. In .addition, the program has an initial schema for a well-formed predicate argument structure, namely, a predicate (verb) along with its "object" arguments. Other phrase structure categories such as Prepositional P/ware are inferred by noticing lexical items of unknown categorization and then insisting upon the constraint that only "XP" items or specified formatives appear before and after the main "X" entry. To take im over-simplified example, given the Noun Phrase the book behind the ~'indow, the presence of the non-Noun, non-Verb behind and the Noun Phrase lhe window immediately after the noun book would force creation of a new "X" category, since possible alternatives such as, NP->NP [the book] NP [behind...] are prohibited by the X-bar ban on directly adjacent, duplicate "X" items. The X-bar acquisition component of the acquisition procedure is still experimental, and so open to change. However, even crude use of the X-bar restrictions has been fruitful. For one thing, it enables the acquisition procedure to start without any pre-conceptions about canonical word order for the language at hand. This would seem essential if one is interested in the acquisition of phrase structure rules for languages whose canonical Subject-Verb-Object ordering is different from that of English. Ill addition, since so much of the acquisition of the category names is tied up with the elaboration of a distinctive feature system for lexical items, adoption of the X-bar theory appears to provide a driving wedge into the difficult problems of lexica[ acquisition and lexical ambiguity. To take but one example, the X-bar theory provides a framework for studying how items of one phrase structure category, e.g., verbs, can be converted into items of another category, e.g., nouns. This line of research is also currently ander investigation. 3. The Acquisition Algorithm is Simple As mentioned, LPARSIFAL proceeds by trying its hand at parsing a series of positive example sentences. Parsing normally operates by executing a series of tree-boilding and token-shifting grammar rule actions. These actions are triggered by matches of rule patterns against features of tokens in a small thtee-ceU constituent look-ahead buffer and the local part of the annotated surface structure tree currently under construction- the lowest, right-most edge of the parse tree. Grammar nile execution is also controlled by reference to base phrase structure rules. To implement this control, each of the parser's grammar rules are linked to one or more of the componeqts of the phrase structure rules. 
Then, grammar rules are defined to be eligible for triggering, or active, only if they are associated with that part of the phrase structure which is the current locus of the parser's attention; otherwise, a grammar rule does not even have the opportunity to trigger against the buffer, and is inactive. This is best illustrated by an example. Suppose there were but a single phrase structure rule for English, Sentence -> NounPhrase VerbPhrase. Flow of control during a parse would travel left-to-right in accordance with the S-NP-VP order of this rule, and could activate and deactivate bundles of grammar rules along the way. For example, if the parser had evidence to enter the S -> NP VP phrase structure rule, pointers would first be set to its "S" and "NP" portions. Then, all the grammar rules associated with "S" and "NP" would have a chance to run and possibly build a Noun Phrase constituent. The parser would eventually advance in order to construct a Verb Phrase, deactivating the Noun Phrase building grammar rules and activating any grammar rules associated with the Verb Phrase (see footnote 5). Together with (1) the items in the buffer and (2) the leading edge of the parse tree under construction, the currently pointed-at portion of the phrase structure forms a triple that is called the current machine state of the parser.

If in the midst of a parse no currently known grammar rules can trigger, acquisition is initiated: LPARSIFAL attempts to construct a single new executable grammar rule. New rule assembly is straightforward. LPARSIFAL simply selects a new pattern and action, utilizing the current machine state triple of the parser at the point of failure as the new pattern and one of four primitive (atomic) operations as the new action. The primitive operations are: attach the item in the left-most buffer cell to the node currently under construction; switch (exchange) the items in the first and second buffer cells; insert one of a finite number of lexical items into the first buffer cell; and insert a trace (an anaphoric-like NP) into the first buffer cell. The actions have turned out to be sufficient and mutually exclusive, so that there is little if any combinatorial problem of choosing among many alternative new grammar rule candidates. As a further constraint on the program's abilities, the acquisition procedure itself cannot be recursively invoked; that is, if in its attempt to build a single new executable grammar rule the program finds that it must acquire still other new rules, the current attempt at acquisition is immediately abandoned. This restriction has the apparently desirable effect of ensuring that the program uses just local context to debug its new rules, as well as ignoring overly complicated example sentences that are beyond its reach.

5. This scheme was first suggested by Marcus [1, page 60]. The actual procedure uses the X-bar schemas instead of explicitly labelled nodes like "VP" or "S".

In a pseudo-algorithmic form, the entire model looks like this:

Step 1. Read in new (grammatical) example sentence.
Step 2. Attempt to parse the sentence, using the modified PARSIFAL parser.
  2.1 Any phrase structure schema rules apply?
    2.1.1 YES: Apply the rule; Go to Step 2.2
    2.1.2 NO: Go to Step 2.2
  2.2 Any grammar rules apply? (<pattern> of rule matches current parser state)
    2.2.1 YES: Apply rule <action>; (continue parse) Go to Step 2.1.
    2.2.2 NO: No known rules apply.
      Parse finished?
        YES: (Get another sentence) Go to Step 1.
        NO: Parse is stuck.
          Acquisition Procedure already invoked?
            YES: (Failure of parse or acquisition) Go to Step 3.4 or 3.2.3-4.
            NO: (Attempt acquisition) Go to Step 3.
Step 3. Acquisition Procedure
  3.1 Mark Acquisition Procedure as invoked.
  3.2 Attempt to construct new grammar rule.
    3.2.2 Try attach.
      Success: (Save new rule) Go to Step 3.3
      Failure: (Try next action) Go to Step 3.2.3
    3.2.3 Try to switch first and second buffer cell items.
      Success: (Save new rule) Go to Step 3.3.
      Failure: (Restore buffer and try next action) Re-switch buffer cells; Go to Step 3.2.4
    3.2.4 Try insert trace.
      Success: (Save new rule) Go to Step 3.3.
      Failure: (End of acquisition) Go to Step 3.4.
  3.3 (Successful acquisition) Store new rule; Go to Step 2.1.
  3.4 (Failure of acquisition)
    3.4.1 (Optional phrase structure rule) Continue parse; Advance past current phrase structure component; Go to Step 2.1.
    3.4.2 (Failure of parse) Stop parse; Go to Step 1.

4. Two Simple Scenarios

4.1 Phrase Structure for Verb Phrases

To see exactly how the X-bar constraints can simplify the phrase structure induction task, suppose that the learner has already acquired the phrase structure rule for sentences, i.e., something like Sentence -> Noun Phrase Verb Phrase, and now requires information to determine the proper expansion of a Verb Phrase, Verb Phrase -> ???. The X-bar theory cuts through the maze of possible expansions for the right-hand side of this rule. Assuming that Noun Phrases are the only other known category type, the X-bar theory tells us that these are the only possible configurations for a Verb Phrase rule:

Verb Phrase -> Noun Phrase Verb
Verb Phrase -> Verb Noun Phrase
Verb Phrase -> Noun Phrase Verb Noun Phrase

If the learner can classify basic word tokens as either nouns or verbs, then by simply matching an example sentence such as John kissed Mary against the possible phrase structure expansions, the correct Verb Phrase rule can be quickly deduced. [The original figure aligns the three candidate expansions against John (N) kissed (V) Mary (N) under S -> NP VP.] Only one possible Verb Phrase rule expansion can successfully be matched against the sample string, Verb Phrase -> Verb (V) Noun Phrase (NP) - exactly the right result for English. Although this is but a simple example, it illustrates how the phrase structure rules can be acquired on the basis of a process akin to "parameter setting"; given a highly constrained initial state, the desired final state can be obtained upon exposure to very simple triggering data.

4.2 Subject-Auxiliary Verb Inversion Rule

Suppose that at a certain point LPARSIFAL has all the grammar rules and phrase structure rules sufficient to build a parse tree for John did kiss Mary. The program now must parse Did John kiss Mary?. No currently known rule can fire, for all the rules in the phrase structure component activated at the beginning of a sentence will have a triggering pattern roughly like [=Noun Phrase?][=Verb?], but the input buffer will hold the pattern [Did: auxverb, verb][John: Noun Phrase], and so thwart all attempts at triggering a grammar rule. A new rule must be written. Acting according to its acquisition procedure, the program first tries to attach the first item in the buffer, did, to the current active node, S(entence), as the Subject Noun Phrase. The attach fails because of category restrictions from the X-bar theory; as a known verb, did can't be attached as a Noun Phrase.
But switch works, because when the first and second buffer positions are interchanged, the buffer now looks like [John][did]. Since the ability to parse declaratives such as John did kiss... was assumed, an NP-attaching rule will now match. Recording its success, the program saves the switch rule along with the current buffer pattern as a trigger for remembering the context of auxiliary inversion. The rest of the sentence can now be parsed as if it were a declarative (the fact that a switch was performed is also permanently recorded at the appropriate place in the parse tree, so that a distinction between declarative and inverted sentence forms can be maintained for later "semantic" use).

5. Summary

A simple procedure for the acquisition of syntactic knowledge has been presented, making crucial use of linguistically- and computationally-motivated constraints. Computationally, the system exploits the local and incremental approach of the Marcus parser to ensure that the search space for hypothesizable new rules is finite and small. Second, rule ordering information need not be explicitly acquired. That is, the system need not learn that, say, Rule A must obligatorily precede Rule B. Extrinsic ordering of this sort appears difficult (if not impossible) to attain under conditions of positive-only evidence. Third, the system acquires its complement of rules via the step-wise hypothesis of new rules. This ability to incrementally refine a set of grammar rules rests upon the incremental properties of the Marcus parser, which in turn might reflect the characteristics of the English language itself.

The constraints on the parser and acquisition procedure also parallel many recent proposals in the linguistic literature, lending considerable support to LPARSIFAL's design. Both the power and range of rule actions match those of constrained transformational systems; in this regard, one should compare the (independently) formalized transformational system of Lasnik and Kupin [11], which agrees almost point-for-point with the restrictions on LPARSIFAL. Turning to other proposals, two of LPARSIFAL's rule actions, attach and switch, correspond to Emonds' [12] categories of structure-preserving and local (minor-movement) rules. A third, insert trace, is analogous to the move alpha rule of Chomsky [13]. Rule application is correspondingly restricted. The Culicover and Wexler Binary Principle (an independently discovered constraint akin to Chomsky's Subjacency Condition; see [10]) can be identified with the restriction of rule pattern-matching to a local radius about the current point of parse tree construction (eliminating rules that directly require unbounded complexity for refinement). The remaining Culicover and Wexler sufficiency conditions for learnability, including their Freezing and Raising Principles, are subsumed by LPARSIFAL's assumption of strictly local operation and no backtracking (eliminating rules that permit the unbounded cascading of errors, and hence unbounded complexity for refinement).

These striking parallels should not be taken - at least not immediately - as a functional, "processing" explanation for the constraints on grammars uncovered by modern linguistics. An explanation of this sort would take computational issues as the basis for an "evaluation metric" of grammars, and then proceed to tell us why constraints are the way they are and not some other way.
But this explanatory result does not necessarily follow from the identity of description between traditional transformational and LPARSIFAL accounts. Rather, LPARSIFAL might simply be translating the transformational constraints into a different medium - a computational one. Even more intriguing would be the finding that the constraints desirable from the standpoint of efficient parsing turn out to be exactly the constraints that ensure efficient acquisition. The current work with LPARSIFAL at least hints that this might be the case. However, at present the trade-off between the various kinds of "computational issues" as they enter into the evaluation metric is unknown ground; we simply do not yet know exactly what "counts" in the computational evaluation of grammars.

ACKNOWLEDGEMENTS

This article describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0643. The author is also deeply indebted to Mitch Marcus. Only by starting with a highly restricted parser could one even begin to consider the problem of acquiring the knowledge that such a parser embodies. The effort aimed at restricting the operation of PARSIFAL flows as much from his thoughts in this direction as from the research into acquisition alone.

REFERENCES

[1] Marcus, M., A Theory of Syntactic Recognition for Natural Language, Cambridge, MA: MIT Press, 1980.
[2] Pinker, S., "Formal Models of Language Acquisition," Cognition, 7, 1979, pp. 217-283.
[3] Brown, R. and Hanlon, C., "Derivational Complexity and Order of Acquisition in Child Speech," in J. R. Hayes, ed., Cognition and the Development of Language, New York: John Wiley and Sons, 1970.
[4] Newport, E., Gleitman, H., and Gleitman, L., "Mother, I'd Rather Do It Myself: Some Effects and Non-effects of Maternal Speech Style," in C. Snow and C. Ferguson, eds., Talking to Children: Language Input and Acquisition, New York: Cambridge University Press, 1977.
[5] Gold, E. M., "Language Identification in the Limit," Information and Control, 10, 1967, pp. 447-474.
[6] Winston, P., "Learning Structural Descriptions from Examples," in P. Winston, ed., The Psychology of Computer Vision, New York: McGraw-Hill, 1975.
[7] Jackendoff, R., X-bar Syntax: A Study of Phrase Structure, Cambridge, MA: MIT Press, 1977.
[8] Fiengo, R., "On Trace Theory," Linguistic Inquiry, 8, no. 1, 1977, pp. 35-61.
[9] Chomsky, N., "Conditions on Transformations," in S. R. Anderson and P. Kiparsky, eds., A Festschrift for Morris Halle, New York: Holt, Rinehart and Winston, 1973.
[10] Culicover, P. and Wexler, K., Formal Models of Language Acquisition, Cambridge, MA: MIT Press, 1980.
[11] Lasnik, H. and Kupin, J., "A Restrictive Theory of Transformational Grammar," Theoretical Linguistics, 4, no. 3, 1977, pp. 173-196.
[12] Emonds, J., A Transformational Approach to English Syntax, New York: Academic Press, 1976.
[13] Chomsky, N., "On Wh-movement," in P. Culicover, T. Wasow, and A. Akmajian, eds., Formal Syntax, New York: Academic Press, 1977, pp. 71-132.
A Linear-time Model of Language Production: some psychological implications (extended abstract) David D. McDonald MIT Artificial Intelligence Laboratory Cambridge, Massachusetts Traditional psycholinguistic studies of language production, using evidence from naturally occurring errors in speech [1][2] and from real-time studies of hesitations and reaction time [3][4] have resulted in models of the levels at which different linguistic units are represented and the constraints on their scope. This kind of evidence by itself, however, can tell us nothing about the character of the process that manipulates these units, as there are many a priori alternative computational devices that are equally capable of implementing the observed behavior. It will be the thesis of this paper that if principled, non- trivial models of the language production process are to be constructed, they must be informed by computationally motivated constraints. In particular. the design underlying the linguistic component I have developed ("MUMBLE .... previously reported in [5][6]) is being investigated as a candidate set of such constraints. Any computational theory of production that is to be interesting as a psycholinguistic model must meet certain minimal criteria: (1) Producing utterances incrementally, in their normal left-to-right order, and with a well- defined "point-of-no-return" since words once said can not be invisibly taken back~ (2) Making the transition from the non- linguistic "message"-level representation to the utterance via a linguistically structured buffer of only" limited size: people are not capable of linguistic precognition and can I. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defence under Office of Naval Research contract N00014-75-C-0643. 55 readily "talk themselves into a corner ''z (3) Grammatical robustness: people make very few grammatical errors as compared with lexical selection or planning errors ("false starts") [7]. Theories which incorporate these properties as an inevitable consequence of independently motivated structural properties will be more highly valued than those which only stipulate them. The design incorporated in MUMBLE has all of these properties~ they follow from two key intertwined stipulations--hypotheses--motivated by intrinsic differences in the kinds of decisions made during language production and by the need for an efficient representation of the information on which the decisions depend (see [8] for elaboration). (i) (~) The execution time of the process is linear in the number of elemenzs in ~he input message, i.e. the realization decision for each element is made only once and may not be revised. The representation for pending realization decisions and planned linguistic actions (the results of earlier decisions) is a surface-level syntactic phrase structure augmented by explicit labelings for its constituent positions (hereafter referred to as the tree). 3 This working-structure is used simultaniously for control (determining what action to take next), for specifying constraints (what choices of actions are Z. 
In addition, one inescapable conclusion of the research on speech-errors is that the linguistic representation(s) used during the production process must be capable of representing positions independently of the units (lexical or phonetic) that occupy them. This is a serious problem for ATN-b~sed theories of production since they have no representation for linguistic structures that is independent front their representation of the state of the process. 3. The leaves of this tree initially contain to-be-realized message elements. These are replaced by syntactic/lexical structures as the tree is refined in a top-down, left-to-right traversaL Words are produced as they are reached at (new) leaves, and grammatical actions are taken as directed by the annotation on the traversed regions. ruled out because of earlier decisions), for the representation of linguistic context, and for the implementation of actions motivated only by grammatical convention (e.g. agreement, word-ordar within the clause, morphological specializations; see [6]). The requirement of linear time rules out any decision-making techniques that would require arbitrary scanning of either message or tree. Its corollary, "Indelibility", 4 requires that message be realized incrementally according to the relative importance of the speaker's intentions. The paper will discuss how as a consequence of these properties decision-making is forced to take place within a kind of blinders: restrictions on the information available for decialon-making and on the possibtUtias for monitoring and for invisible self-repair, all describable in terms of the usual linguistic vocabulary. A further consequence is the adoption of a "lexicalist" position on transformations (see [9]), i.e. once a syntactic construction has been instantiated in the tree, the relative position of its constituents cannot be modified; therefore any "transformations" that apply must do so at the moment the construction is instantiatad and on the basis of only the information available at that time. This is because the tree is not buffer of objects, but a program of scheduled events. Noticed regularities in speech-errors have counter-parts in MUMBLE's design 5 which, to the extent that it is Independently motivated, may provide an explanation for them. One example is the 4. I.e. decisions are not subJeCt to backup-="they are ~rritten in indelible ink". This is also a property of Marcus's "deterministic" parser. It is intriguing to speculate that indelibility may be a key characteristic of psychologically plausible performance theories of natural language. 5. MUMBLE produces text. not speech. Consequently it has no Knowledge of syllable structure or intonation and can make no specific contribution= to the explanation of errors at that level. phenomena of combined-form errors: word-exchange errors where functional morphemes such as plural or tense are "stranded" at their ori~inal positions, e.g. "My locals are more variable than that." Intended- "...variables are more local" "Why don't we Eo to the 24hr. Star Marked and you can see my friend check in E cashes." Intended: "...cashing checks." One of the things to be explained about these errors is why the two classes of morphemes are distinguished-- why does the "exchanging mechanism" effect the one and not the other? The form of the answer to this question is generally agreed upon: two independent representations are being manipulated and the mechanism applies to only one of them. 
MUMBLE already employs two representations of roughly the correct distribution, namely the phrase structure tree (defining positions and grammatical properties) and the message (whose elements occupy the positions and prompt the selection of words). By incorporating specific evidence from speech-errors into MUMBLE's framework (such as whether the quantifier all participates in exchanges), it is possible to perform synthetic experiments to explore the impact of such a hypothesis on other aspects of the design. The interaction with psycholinguistios thus becomes a two-way street. The full paper 6 will develop the notion of a linear-time production process: how it is accomplished and the specific limitations that it imposes, and will explore its implications as a potential explanation for certain classes of speech-errors, certain hesitation and self-correction data. and certain linguistic constra_nts. 6. Regretably, the completion of this paper has been delayed in order for the author to give priority to his dissertatlon. 56 References [I] Garrett. M.F. (1979) "Levels of Processing in Sentence Production", in Butterworth ed. Language Production Volume I, Academic Press. [2] Shattuck Hufnagel, S. (1975) Speech Errors and Sentence Production Ph.D. Dissertation, Department of Psycholog~v, MIT. ['3] Ford. M. & Holmes V.M. (1978) "Planning units and syntax in sentence production", Cognition 6, 35- 63. ['4] Ford M. (1979) "Sentence Planning Units: Implications for the speaker's representation of meaningful relations underlying sentences", Occasional Paper 2, Center for Cognitive Science, MIT. ['5] McDonald, D,D. (1978) "Making subsequent references., syntactic and rhetorical constraints", TINLAP-g. University of Illinois. [6] (1978) "Language generation: Automatic Control of Grammatical Detail", COLING- 78. Bergen. Norway. ['7] Fay, D. (1977) "Transformational Errors". International Congress of Linguistics. Vienna, Austria. [8] McDonald D.D. (in preparation) Natural Language Production as a Process of Decision-making Under ConsU'alnt Ph.D. Dissertation, Department of Electrical Engineering and Computer Science, MIT. [9] Bresnan, J. (1978) "Toward a realistic theory of grammar", in Bresnan. Miller, & Halle ads. Linguistic Theory and Psychological Reality Mrr Press. 57
PROBLEM SOLVING APPLIED TO LANGUAGE GENERATION Douglas I~: Appelt Stanford University, Stanfo,d, Califorlda SR I International 111enlo Park. California This research was supported at SRI htternational by the Defense Advanced Reseat~ch Projects Agency under contract N00039-79-C-0118 ~¢ith the Naval Electronic Systems Commaw t The views and conchtsions contained in this document are those of the author and should not be interpreted as representative of the official policiex either expressed or bnplied, of the Defense Advanced Research Projects Agency, or the U. S. Goverttment. The author is gratefid to Barbara Grosz, Gary ttendrix and Terry Winograd for comments on an earlier draa of this paper. I. Introduction Previous approaches to designing language understanding systems have considered language generation to be tile activity of a highly specialized linguistic facility that is largely indcpendcnt of other cognitive capabilities. All the requisite knowlcdge for gencration is embodicd in a "gcneration module" which, with appropriate modifications to the lexicon, is transportable bctween different domains and applications. Application programs construct "messages" in some internal representation, such as first order predicate calculus or scmantic nctworks, and hand them to the generation module to be translated into aatoral language. The application program decides what to say; the gencration module decides how to say it. In contrast with this previous work. this papcr proposes an approach to designing a language generation systcm that builds on the view of language as action which has cvolvcd from speech act theory (see Austin [2l and Scarle [11]). According to this vicw, linguistic actions are actions planncd to satisfy particular goals of the spcakcr, similar to other actions like moving and looking. Language production is integrated with a spcakcr's problcm solving processes. This approach is fi~unded on the hypothesis that planning and pcrforming linguistic ,actions is an activity that is not substantially different from planning and pcrforming othcr kinds of physical actions. The process of pro/lucing an uttcrance involves, planning actions to satisfy a numbcr of diffcrent kinds of goals, and then el~cicntly coordinating the actions that satisfy these goals. In the resulting framework, dlere is no distinction between deciding what to say and deciding how to say it. This rcsearch has procceded through a simultaneous, intcgrated effort in two areas. The first area of re.arch is the thcoretieal problcm of identifying the goals and actions that occur in human communication and then characterizing them in planning terms. The ~cond is the more applied task of developing machine--based planning methods that are adequate to form plans based on thc characterization dcveloped as part of the work in the first area. The eventual goal is to merge the results of the two areas of effort into a planning system that is capable of producing English sentences. Rather than relying on a specialized generation module, language generation is performed by a general problcm-.-solving system that has a great deal of knowlcdge about language. A planning system, named K^MI' (Knowlcdge and Modalitics Planncr), is currently under development that can take a high-lcvel goal-and plan to achieve it through both linguistic and non-linguistic actions. Means for satisfying multple goals can be integrated into a single utterance. Thi.~ paper examines the goals that arise in a dialog, and what actions satisfy those goals. 
It then discusses an example of a sentcnee which satisfies several goals simultaneously, and how K^MP will be able to produce this and similar utterances. This system represents an extension to Cohen's work on planning speech acts [3]. However, unlikc Cohen's system which plans actions on thc level of informing and requesting, but does not actually generate natural language sentences, KAMP applies general problcm-solving techniqucs to thc entire language gencration process, including the constructiun of the uttcrance. 1I. GoaLs and Actions used in Task Oriented Dialogues The participants in a dialogue have four different major types of goals which may be satisfied, either directly or indirectly, through utterances. Physical goals, involve the physical state of the world. The physical state can only be altered by actions that have physical effects, and so speech acts do not serve directly to achieve these goals. But since physical goals give rise to other types of goals as subgoals, which may in turn be satisfied by speech acts, they are important to a language planning system. Goals that bear directly on the utterances themselves are knowledge slate goals. discourse goals, and social goalx Any goal of a speaker can fit into one of these four categories. However, each category has many sob--categories, with the goals in each sub--category being satisfied by actions related to but different from those satisfying the goals of other sub--categories. Delineating the primary categorizations of goals and actions is one objective of this research. Knowledge state goals involve changes in tile beliefs and wants held by the speaker or the hearer. Thcy may be satisfied by several different kinds of actions. Physical actions affect knowledge, since ,as a minimum the agent knows he has performed the action. There are also actions that affect only knowledge and do not change the state o£ the world -- for example. reading, looking and speech acts. Speech acts are a special case of knowledge-producing actions because they do not produce knowledge directly, like looking at a clock. Instead, the effects of speech acts manifest thcmselves through the recognition of intention. The effect of a speech act, according to Searle. is that the hearer recognizes the speaker's intention to perform the act. The hcarer then knows which spceeh act has been performcd, and because of rules governing the communication processes, such as the Gricean maxims [4]. the hearer makes inferences about thc speaker's beliefs. Thcse inferences all affect the heater's own beliefs. Discourse goals are goals dial involve maintaining or changing the sthte of the discourse. For example, a goal of focusing on a different concept is a type of discourse goal [5, 9, 12]. The utterance Take John. for instance serves to move the participants' focusing from a general subject to a specific example. Utterances of this nature seem to be explainable only in terms of the effects they have, and not in terms of a formal specification of their propositional content Concept activation goals are a particular category of discourse goals. These are goals of bringing a concept of some object, state, or event into the heater's immediate coneiousness so that he understands its role in the utterance. Concept activation is a general goal that subsumes different kinds of speaker reference. 
It is a low-level goal that is not considered until the later stages of the planning process, but it is interesting because of the large number of interactions between it and higher-level goals and the large number of options available by which concept activations can be performed. 59 Social goals also play an important part in the planning of utterances. Thc,:e goals are fimdamentally different from other goals in that freqnently they are not effeCts to be achieved ~a~ much as constraiots on the possible behavior that is acceptable in a given situation. Social goals relate to politeness, and arc reflected in the surface form and content of tile utterance. However, there is no simple "formula" that one can follow to construct polite utterances. Do you know what time it Ls? may ~ a polite way to ask the time, but Do you know your phone number? is not very polite in most situations, but Could you tell me your phone number? is. What is important in this example is the exact propositional content of the utterance. People are expected to know phone numbers, but not necessarily what time it is. Using an indirect speech act is not a sufficient condition for politen¢~. This example illustrates how a social goal can mtluence what is said, as well as how it is expressed. Quite often the knowledge state goals have been ssragned a special priviliged status among all these goals. Conveying a propsition was viewed as the primary reason for planning an utterance, and the task of a language generator was to somehow construct an utterance that would be appropriate in the current context. In contrast, this rosen:oh attempts to take Halliday's claim [7] seriously in the design of a computer system: "We do not. in'fact, first decide what we want to say independcndy of the setting a,ld then dress it up in a garb that is appropriate to it in the context .... The 'content' is part of the total planning that takes place. "lhere is no clear line between the "what' and the 'how'..." The complexity that arises from the interactions of these different types of goals leads to situations where the content of an utterance is dictated by the requirement that it tit into the current context. For example, a speaker may plan to inform a bearer of a particular fact. Tbc context of the discou~ may make it impossible for the speaker to make an abrupt transition from the current topic to the topic that includes that proposition, To make this transition according to the communicative rules may require planning another utterance, Planning this utterance will in turn generate other goals of inforoting, concept activation and focusing. The actions used to satisfy these goals may affect the planning of the utterance that gave rise to the subgoal. In this situation, there is no clear dividing line between "what to say" and "how to say it". IlL An Integrated Approach to Planning Speech Acts A probem--solving system that plans utterances must have lhe ability to describe actions at different levels of abstraction, the ability to speCify a partial ordering among sequences of actions, and the ability to consider a plan globally to discover interactions and constraints among the actions already planned. It must have an intelligent method for maintaining alternatives, and evaluating them comparatively. 
Since reasoning about belief is very important in planning utterance, the planning system must have a knowledge representation that is adequate for representing facts about belief, and a deduction system that is capable of using that representauon efficiently. I Achieve(P) /' KAMI' is a planning system, which is currently beiug implemented, th:K builds on the NOAII planning system of Saccrdoti [10]. ]t uses a possible-worlds semantics approach to reasoning about belief" and the effects that various actions have on belief [8] and represents actions in a data structure called a procedural network. The procedural network consists of nt~es representing actions at somc level of abstraction, along with split nodes, which specify several parually urdercd sequences of actions that can be performed in any order, or perhaps even in parallel, and choice nodes which specify alternate actions, any one of which would achieve the goal. Figure 1 is an examplc of a simple procedural network that represents the following plan: The top--level goal is to achieve P. The downward link from that node m the net points to an expansion of actions and subgoals, which when performcd or achieved, will make P true in the resulting world. The plan consists of a choice betwcen two alternatives. In tile first the agent A does actions At and A2. and no commitment has been made to the ordering of these two parts of thc plan. After both of those parts havc been complctcly planned and executed, thcn action A] is performed in thc r~sulting world. The other alternative is for agent B to perform action A4. It is an important feature of KAMP that it can represent actions at several levels of abstraction. An INFORM action can be considered as a high level action, which is expanded at a lower level of abstraction into concept activation and focusing actions. After each expansion to a lower level of abstraction, ~.^MP invokes a set of procedures called critics that cxa,ninc tile plan globally, considering the interactions bctwccn its parts, resolving conflicts, making the best choice among availab;e alternatives, and noticing redundant acuons or actions that could bc subsumed by minor alterations in another part of the plan. Tile control structure could bc described as a loop that makes a plan, expands it. criticizes thc result, and expands it again, until thc entirc plan consists of cxccutablc actions. The following is an example of the type of problem that KAMP has been tested on: A robot namcd Rob and a man namcd John arc in a room that is adjacent to a hallway containing a clock. Both Rob and John are capable of moving, reading clocks, and talking to each other, and they each know that the other is capable of performing these actions. They both know that they are in the room, and they both know where tile hallway is. Neither Rob nor John knows what time it is. Suppose that Rob knows that the clock is in the I'tall, but John does not. Suppose further that John wants to know what time it is. and Rob knows he does. Furthermore, Rub is helpful, and wants to do what he can to insure that John achieves his goal. Rob's planning system must come up with a plan, perhaps involving actions by both Rob and John. that will result in John knowing what time it is. Rob can devise a plan using KAMP that consists of a choice between two alternalives, First, if John could find out where the clock is. he could go to the clock and read it, and in the resulting state would know the time. So. 
Rob can tell John where the clock is, reasoning that this information is sufficient for John to form and execute a plan that would achieve his goal.

[Figure 1. A Simple Procedural Network]

[Figure 2. A Plan to Remove a Bolt]
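For a concrete picture of the data structure that Figure 1 depicts, here is a minimal sketch of a procedural network with goal, split, and choice nodes. The class and field names are illustrative assumptions for exposition only; KAMP itself builds on the NOAH planning framework and is not implemented this way.

```python
# Minimal illustrative sketch of a procedural network; not KAMP's actual code.
from dataclasses import dataclass, field
from typing import List, Optional

class Node:
    pass

@dataclass
class Action(Node):
    agent: str
    name: str                     # e.g. "A1"

@dataclass
class Split(Node):                # partially ordered subplans: may run in any order
    branches: List[List[Node]] = field(default_factory=list)

@dataclass
class Choice(Node):               # alternative subplans: any one achieves the goal
    alternatives: List[List[Node]] = field(default_factory=list)

@dataclass
class Goal(Node):
    description: str              # e.g. "Achieve(P)"
    expansion: Optional[Node] = None   # filled in when the goal is expanded

# The plan of Figure 1: to achieve P, either agent A does A1 and A2 (unordered)
# and then A3, or agent B does A4.
plan = Goal(
    "Achieve(P)",
    expansion=Choice(alternatives=[
        [Split(branches=[[Action("A", "A1")], [Action("A", "A2")]]),
         Action("A", "A3")],
        [Action("B", "A4")],
    ]),
)
```

Plan construction then proceeds, as described above, by repeatedly replacing unexpanded goal nodes with subnetworks of this kind and letting critics inspect the resulting structure globally.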
Since the plan also requires obtaining a wrench and using it, a goal is also established that tile apprentice knows where the wrench is: hence the goal ^CIllEvE(Know(Apprentice. On(Table. Wr].))). NOAII, these actions were written in SOUP code. In this planner, they are represented in situation-action rules. The conditional of the rule involves tests on the type of action to be performed, the hearer's knowledge, and social goals. The action is to select a particular strategy for expanding the action. In this case, a rule such as /[you are expanding an inform of what an action involving the hearer as agent is. then use an IMPERATIVE syntactic construct to describe the action. The planner then inserts the expansion shown in Figure 4 into the plan. ~ ~Achilve(KnowWhatls(Al~Dr.Lo~m~(Bolt 1 .Wrl ))) I DO( E xoer t.lnformval(A 130r.L0osen(Bo~t I ,Wr 1 ))) "%~Acilieve( KnowWhatis ~ Achieve(Hgs I I I ./ J Ac hieve(Kn°w('~ pot 'On(Table'Wr I ))) I I I O~( Exp.lntor m(A~pr.OnlTahle.Wr Ill I I Figure 3 Planning to Inform Do(Agtor. Get(We I)) I 61 I Dot ExD,int ormV~d(AnDr,Loosen(BoUl .Wrl ))) I ) DolExpert. ,~V( "Loo~n "l) Do(Expert, CACT(AgDf. Wfl)) IN~f Figure 4 Expanding the INFORM Act This sub-plan is marked by a tag indicating that it is to be realized by an Unpcrative. The split specifics which h)wer level acuons arc performed by the utterance of the imperative. At some point, a critic will choose an ordering for the actions. Without further information the scntcncc could be realizcd in any of the following ways, some of which sound strange when spoken in islolation: Loosen Boltl with Wrl. With Wrl loosen BOltl. Boltl loosen with Wrl. The first sentence above sounds natural in isolation. ]'he other two might be chosen if a critic notic~ a need to realize a focnsmg action that has been plauncd. For example, the second sentence shiftS thc focus to the wrench instead of the bolt` and would be useful in organizing a series of instructions around what tools to use. The third would be used in a discourse organized around what object to manipulate aexL Up to this point` the phmning process ilas been quite :;traighdorward, since none of the critics have come into piny. However, since there arc two INFORM actions on two branches of the same split, thc COMBINE-CONCEPT- ACTIVATION critic is invoked. This critic is invoked whenever a plan contains a concept activation on one branch of the split, and an inform of some property of the activated object on the other branch. Sometimes the planner can combine the two informing actions into one by including the property description of one of the intbrmmg actS into the description that is being used for the concept activation. In this particular example, ~ critic would av.,'~h to the Do(Expe~ CACT(Appr.. Wri)) action the copetraint that one of the realizing descriptors must be ON(Wri. Table). and the goal that the apprentice knows the wrench is on the table is marked as already satisfied. Another critic, the REDUNDANT-PATII critic, notices when portions of two brances of a split contain identical actions, and collapses the two branches into one. This critic, when applied to utterance plans will oRen result in a sentence with an and conjunction. The critic is not restricted to apply only m linguistic actions, and may apply to other types of actions as well. Or.her critics know about acuon subsumption, and what kinds of focusing actions can be realized in terms of which linguistic choices. 
One of these action subsumption critics can make a decision about the ordering of the concept activations, and can mark discourse goals as pha,. ")ms. in U is example, there are no spccific discourse goalS, so it is pussibtc to chose the default verb-object°instrument ordering. On the next next expansion cycle, the concept activations must be expanded into uttcrances. This means planning descriptors for the objects. Planning the risht description requires reasoning about what the hearer believes about the object` describing it as economically as possible, and then adding the additional descriptors recommended by the action subsumption critic. The final step is realizing the descriptors in natural language. Some descriptors have straightforward realizations ,as lexical items. Otbers may require planning a prepositional phrnsc or a relative clause. IV. Formally dcfi,ing H);guistic actions If actions are to be planned by a planning system, thcy must be defined formally so they can bc used by the system. This means explicitly stating the preconditions and effects of each action. Physical actions havc received attention in the literature on planning, but one ~pect of physical actions Lhat has been ignored arc thcir cffccts on kuowlcdgc. Moorc [8] suggestS an approach to formalizing, the km)wicdgc cffccL'; of physEal actions, so [ will not pursue Lhat further at this time. A fairly large amount of work has been done on the formal specification of speech acts un the level of informing and requesting, etc. Most of this work has bccn done by Scaric till, and has been incorporatcd into a planning system by Cohen [3]. Not much has been done to formally specify the actions of focusing and concept activation. Sidncr [12] has developed a set of formal rules for detecting focus movement in a discourse, and has suggested that these rules could be translated into an appropriate set of actions that a generation system could use. Since there are a number of well defined strategies that speakers use to focus on different topics. I suggest that the preconditions and effectS of these strategies could be defined precisely and they can bc incorporated as operators in a planning systcm. Reichmann [9J describes a number of focusing strategies and the situations in which they are applicable. The focusing mechanism is driven by the spcakcr's goal that the bearer know what is currently being focused on. Tbis particular type of knowledge state goal is satisfied by a varicty of different actions. These actions have preconditions which depend on what the current state of the discourse is, and what type of shift is taking place. Consider the problem of moving the focus back to the previous topic of discussion after a brief digression onto a diEerent hut related topic. Reichmaon pointS out that several actions arc available. Onc soch action is the utterance of "anyway'* which signals a more or tcss expected focus ~hffL. She claims that the utterance of "but" can achieve a similar effect, but is used where the speaker believes that the hearer believes that a discu~ion on the current topic will continue, and Lhat presupposition needs to be countered. Each of these two actions will be defincd in the planning system as operator. The °'but" operator will have as an additional precondition that the hearer believes that the speaker's next uttorance will be part of the current context. Both operators will hay= the effect that the hearer believes that the speaker is focusing on the prcvious topic of discussion. 
Other operators that are available includc cxplicity labeled shifts. This operator exp. ~ds rata planning an INFORM of a fOCUS shill The previous example of Take John. for instance, is an example of such an action. The prccLsc logical axiomiuzation of focusing and the prccisc definitions of each of these actions is a topic of curre..t research. The point being made here is that these focusing actions can bc spccificd formally, One goal of this research is to formally describe linguistic actions and other knowledge producing actions adequately enough to demonstrate the fcasibility of a language plmming system. V. Current Status The K^MP planner described in this paper is in the early stages of implementation. It can solve interesting problems in finding multiple agent plans, and plans involving acquiring and using knowlcge. It has not bee. applied directly to language yet` but this is the next stcp in research. 62 Focusing actions need to be described formally, and critics have to be defined precisely and implemented. This work is currendy in progress. Although still in its early stages, this approach shows a great deal of promise for developing a computer system that is capable of producing utterances that approach the richness that is apparent in even the simplest human communication. REFERENCES [1] Appelt, Douglas, A Planner for Reasoning about Knowledge mid Belief, Proceedings of the First Conference of the American Association for Artificial Intelligence, 1980. [2] Austin, J., How to Do Things with Words, J. O. Urmson (ed.), Oxford University Pre~ 1962 [3] Cohen, Philip, On Knowing What to Say: Planning Spech Acts, Technical Report #118. University of Toronto. 1.978 [4] Gricc, H. P., Logic and Coversation, in Davidson, cd., The Logic of Grammar., Dickenson Publishing Co., Encino, California, [975. [5] Grosz, Barbara J., Focusing and Description in Natural Language Dialogs, in Elements of Discoursc Understanding: Proccedings of a Workshop on Computational Aspects of Linguistic Structure and Discourse Setting, A. K. Joshi et al. eds., Cambridge University Press. Cambridge. Ealgland. 1980. [6] Halliday, M. A. K., Language Structure and Language Ftmctiol~ in Lyons, cd., Ncw Horizons in Linguistics. [7] Halliday, M. A. K., Language as Social Semiotic, University Park Press, Baltimore, Md., 1978. [8] Moore. Robert C., Reasoning about Knowledge and Action. Ph.D. thesis, Massachusetts Institute of Technology. 1979 [9] Reichman. Rachel. Conversational Coherency. Center for Research in Computing Technology Tochnical Rcport TR-17-78. Harvard University. 1978. [10] Sacerdod, Earl, A Structure for Plans and Behavior. Elsevier North- Holland, Inc.. Amsterdam, The Nedlcriands, 1.977 ['l_l] Searte, John, Speech Acts, Cambridge Univcrsiy Press, 1969 [12] Sidner, Candace L. Toward a Computational Theory of Definite Anaphora Comprehension in English Discourse. Massichusetts Institute of Technology Aritificial Intelligence Laboratory technical note TR-537, 1979. 63
Interactive Discourse: Influence of the Social Context Panel Chair's Introduction Jerry R. Hobbs SRI International Progress on natural language interfaces can perhaps be stimulated or directed by imagining the ideal natural language system of the future. What features (or even design philosophies) should such a system have in order to become an integral part of our work environments? What scaled-down versions of these features might be possible in the near future in "simple service systems" [2]? These issues can be broken down into the following four questions: i. What are the significant features of the environment in which the system will reside? The system will be one participant in an intricate information network, depend- ing on a continually reinforced shared complex of knowl- edge [9]. To be an integral part of this environment, the system must possess some of the shared knowledge and perhaps must participate in its reinforcement, e.g. via explanations, [9], [2]. 2. Investigations of person-person communication sho111d tell us what person-system communication ought to be like. Face-to-face conversation is extraordinarily rich in the information that is conveyed by various means, such as gesture, body position, gaze direction [4], [8]. In addition to conveying propositional content or infor- mstion, what are the principal functions that moves in conversation perform? a. Organization of the interaction, regulation of turns [7], [i]. In the natural language dialog systems of today, each turn consists of a sentence or less. In ex- periments done at SRI on instruction dialogs between people over computer terminals, the instructor's turns usually involve long texts. It was discovered that the student needs a way of interrupting. That is, some sort of turn-taking mechanisms are required, what can we learn from the turn-taking mechanisms people use? b. Orientation of the participants toward each other, including recognition [6], expressions of solidarity and indications of agreement and disagreement [3], meta- comments on the direction of the conversation [8] or the reasons for certain utterances ([9] on discourse expla- nations). c. Maintenance of the channel of cO~unication, implic- it acknowledgment or verification of information con- veyed [2]. Recovery from mistakes and breakdowns in commtunication [8], e.g. via flexibility in parsing and interpretation [2]; via explicit indications of in- comprehension [2] and repairs [5]. In natural language systems of today, when the user makes a mistake and the system fails to interpret the input, the user must usu- ally begin over again. The system cannot use whatever it did get from the mistake to aid in the interpretation of the repair. People are more efficient, what are the principal means of repair that people use, and how can they be carried over to natural language systems? taining one's role, e.g. as a competent, cooperative participant (cf. [8]; [9]; [i] for the role of speech style; [4] for defense of competence). In addition to the system having a model of the user, the user will have a model of the system, determined by the nature of his interaction with it. The system should thus be tailored to convey an accurate image of what the system can do. For example, superficial politeness or fluency ("Good morning, Jerry. What can I do for you today?") is more likely to mislead the user about the system's capabilities than to ease the interaction. 
What the system does, via lexical choice, indirect speech acts, polite forms, etc., to maintain its role in the inter- action should arise out of a coherent view of what the role is. The linguistic competence of the system is an important element of the image it conveys to the user [2]. 3. When we move from face-to-face conversations to dialogs over computer terminals, the communication is purely verbal. The work done non-verbally now has to be realized verbally. How are the realizations of the above functions altered over the change of channels [6]? We know, for example, that there are more utter- ances showing solidarity and asking for opinions, because this is work done non-verbally face-to-face [3]. Some things that occur face-to-face (e.g. tension release, jokes) seem to be expendable over computer terminals, where each utterance costs the speaker more. The messages take longer to produce, are less transi- tory, and can be absorbed more carefully, so there is less asking for orientation, elaboration, and correction [3]. What devices are likely to be borrowed from related but more familiar communication frames [i]? Possible frames are letters or telephone conversations. 4. Should and how can these functions be incorporated into the ideal natural language systems of the far future and the simple service systems of the near future [2], [8]? REFERENCES I. Carey, 3. Interactive television: A frame analysis. From M. MOSS (ed.), Two-WayCable Television: An Evaluation of community Uses in Reading, Pennsylvania. Final report to the National Science Foundation. 1978. 2. Hayes, P. and R. Reddy. An anatomy of graceful interaction in spoken and written man-machine conununica- tion. Computer Science Department, Carnegie-Mellon University. 1979. 3. Hiltz, S. R., K. Johnson, C. Aronovitch, and M. Turoff. Face to face vs. computerized conferences: A controlled experiment. Draft final report for grant with Division of Mathematical and Computer Sciences, National Science Foundation. 1980. d. Building and reinforcing the mutual knowledge base, i.e. the knowledge the participants share and know they share, etc. [9]. Linking new or out-of-the-ordinary information to snared knowledge via explanations [9], [2]. e. Inferring others' goals, knowledge, abilities, focus of attention [8], [2], [4]. The system should have a model of the user and of the cormnunication situation [8]. f. ConTaunicating one's own goals, knowledge, abilities, focus of attention [8], [2]. Establishing and main- 4. Hobbs, J. and D. Evans. Conversation as planned behavior. Technical Note 203. SRI International. 1979. 5. Sacks, H., E. Schegloff and G. Jefferson. A simplest systematics for the organization of turn-taking for conversation. Language, Vol. 50, no. 2, 696-735. 1974. 6. Schegloff, E., G. Jefferson and H. Sacks. The preference for self-correction in the organization of repair in conversation. Language, vol. 53, no. 2, 361-382. 1977. 7. Schegloff, E. Identification and recognition in 65 telephone COnversation openings. In G° Psa~has (ed.), Everyday Language: Studies in EthnometbodoloqY. 23-78. 8. Thomas, J. A design-interpreuation analysis of natural English with applications to man-computer inter- action. International Journal of Man-Machine Studies, Vol. I0, 651-668. 1978. 9. Wynn, E. Office conversation as an informauion medium. Ph.D. Thesis, Department of Anthropology, University of California, Berkeley. 1979. 6B
PARALANGUAGE IN COMPUTER-MEDIATED COMMUNICATION

John Carey
Alternate Media Center
New York University

This paper reports on some of the components of person to person communication mediated by computer conferencing systems. Transcripts from two systems were analysed: the Electronic Information and Exchange System (EIES), based at the New Jersey Institute of Technology; and Planet, based at Infomedia Inc. in Palo Alto, California. The research focused upon the ways in which expressive communication is encoded by users of the medium.(1)

(1) The research was supported by DHEW Grant No. 54-P-71362/2/2-01.

1. INTRODUCTION

The term paralanguage is used broadly in this report. It includes those vocal features outlined by Trager (1964) as well as the prosodic system of Crystal (1969). Both are concerned with the investigation of linguistic phenomena which generally fall outside the boundaries of phonology, morphology and lexical analysis. These phenomena are the voice qualities and tones which communicate expressive feelings, indicate the age, health and sex of a speaker, modify the meanings of words, and help to regulate interaction between speakers. Paralanguage becomes an issue in print communication when individuals attempt to transcribe (and analyse) an oral presentation, or write a script which is to be delivered orally. In addition, paralinguistic analysis can be directed towards forms of print which mimic or contain elements of oral communication. These include comic strips, novels, graffiti, and computer conferencing (see Crystal and Davy 1969).

The research reported here is not concerned with a direct comparison between face-to-face and computer mediated communication. Such a comparison is useful, e.g. it can help us to understand how one form borrows elements from the other (see section 5), or aid in the selection of the medium which is more appropriate for a given task. However, the intent here is simpler: to isolate some of the paralinguistic features which are present in computer mediated communication and to begin to map the patterning of those features.

2. THE FRAME

Computer conferencing may be described as a frame of social activity in Goffman's terms (1974). The computer conferencing frame is characterized by an exchange of print communication between or among individuals. That is, it may involve person to person or person to group communication. The information is typed on a computer terminal, transmitted via a telephone line to a central computer where it is processed and stored until the intended receiver (also using a computer terminal and a telephone line) enters the system. The received information is either printed on paper or displayed on a television screen. The exchange can be in real time, if the users are on the system simultaneously and linked together in a common notepad. More typically, the exchange is asynchronous with several hours or a few days lapse between sending and receiving. In all of the transcripts examined for this study, the composer of the message typed it into the system. Further, the systems were used for many purposes: simple message sending (electronic mail), task related conferencing, and fun (e.g. jokes and conferences on popular topics). Bills for usage were paid by the organizations involved, not the individuals themselves. These elements within the frame may affect the style of interaction.

One concern in frame analysis is to understand differences in a situation which make a difference. Clearly, there is a need to investigate conditions not included in this study in order to gain a broader understanding of paralinguistic usage. Among the conditions which might make a difference are: the presence of a secretary in the flow of information; usage based upon narrow task communications only; and situations where there is a direct cost to the user.

3. FEATURES

The following elements have been isolated within the transcripts and given a preliminary designation as paralinguistic features.

3.1. VOCAL SPELLING

These features include non-standard spellings of words which bring attention to sound qualities. The spelling may serve to mark a regional accent or an idiosyncratic manner of speech. Often, the misspelling involves repetition of a vowel (drawl) or a final consonant (released or held consonant, with final stress). In addition, there are many examples of non-standard contractions. A single contraction in a message appears to bring attention (stress) to the word. A series of contractions in a single message appears to serve as a tempo marker, indicating a quick pace in composing the message.

/biznis/ /weeeeell/ /breakkk/ /y'all/ /Miami Dade Cmty Coll Life Lab Pgm/

Figure 1. Examples of Vocal Spelling

Some of the spellings shown above can occur through a glitch in the system or an unintended error by the composer of the message. Typically, the full context helps the reader to discern if the spelling was intentional.

3.2. LEXICAL SURROGATES

Often, people use words to describe their "tone of voice" in the message. This may be inserted as a parenthetical comment within a sentence, in which case it is likely to mark that sentence alone. Alternatively, it may be located at the beginning or end of a message. In these instances, it often provides a tone for the entire message. In addition, vocal segregates (e.g. uh huh, hmmm, yuk yuk) are written commonly within the body of texts.

/What was decided? I like the idea, but then again, it was mine (she said blushingly)./
/Boo, boo Horror of horrors! ti65 DOESN'T seem to cure all the problems involved in transmitting files./

Figure 2. Examples of Lexical Surrogates

3.3. SPATIAL ARRAYS

Perhaps the most striking feature of computer conferencing is the spatial arrangement of words. While some users borrow a standard letter format, others treat the page space as a canvas on which they paint with words and letters, or an advertisement layout in which they are free to leave space between words, skip lines, and paragraph each new sentence. Some spatial arrays are actual graphics: arrangements of letters to create a picture. Hiltz and Turoff (1978) note the heavy use of graphics at Christmas time, when people send greeting cards through the conferencing system. In day to day messaging, users often leave space between words (indicating pause, or setting off a word or phrase), run words together (quickening of tempo, onomatopoeic effect), skip lines within a paragraph (to set off a word, phrase or sentence), and create paragraphs to lend visual support to the entire message or items within it. In addition, many messages contain headlines, as in newspaper writing.

/One of our units here Just makes an awfulhowling noise./
/OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS/
/$SSSSSS$$$$$SS$$$$S$$$$SSSS$$SSSSS$ When the next bill comes in from EIES/Telanet, you may also be interested/

Figure 3. Examples of Spatial Arrays

3.4. MANIPULATION OF GRAMMATICAL MARKERS
Grammatical markers such as capitalization, periods, commas, quotation marks, and parentheses are manipulated by users to add stress, indicate pause, modify the tone of a lexical item and signal a change of voice by the composer. For example, a user will employ three exclamation marks at the end of a sentence to lend intensity to his point. A word in the middle of a sentence (or one sentence in a message) will be capitalized and thereby receive stress. A series of dashes between syllables of a word can serve to hold the preceding syllable and indicate stress upon it or the succeeding syllable. Parentheses and quotation marks are used commonly to indicate that the words contained within them are to be heard with a different tone than the rest of the message.

A series of periods is used to indicate pause, as well as to indicate internal and terminal junctures. For example, in some messages, composers do not use commas. At points where a comma is appropriate, three periods are employed. At the end of the sentence, several periods (the number can vary from 4 to more than 20) are used. This system indicates to the reader both the grammatical boundary and the length of pause between words.

The Electronic Information and Exchange System employs some of these grammatical marker manipulations in the interface between user and system. For example, they instruct a user to respond with question marks when he does not know what to do at a command point. One question mark indicates "I don't understand what EIES wants here," and will yield a brief explanation from the system. Two question marks indicate "I am very confused" and yield a longer explanation. Three question marks indicate "I am totally lost" and put the user in direct touch with the system monitor.

/Welcome Aboard!::~/
/This background is VERY important, since it makes many people (appropriately, I think) aware about idea./
/THERE IS STILL SOME CONFUSION ON DATES FOR PHILADELPHIA. MIKE AND I ARE PERPLEXED:?/
/At this point, I think we should include a BROAD range of ideas -- even if they look unworkable./
/Paul...three quick points ...... first...the paper/

Figure 4. Manipulation of Grammatical Markers

3.5. MINUS FEATURES

The absence of certain features or expected work in composition may also lend a tone to the message. For example, a user may not correct spelling errors or glitches introduced by the system. Similarly, he may pay no attention to paragraphing or capitalization. The absence of such features, particularly if they are clustered together in a single message, can convey a relaxed tone of familiarity with the receiver or quickness of pacing (e.g. when the sender has a lot of work to do and must compose the message quickly).

4. PATTERNING OF FEATURES

It can be noted, first, that some features mark a short syllabic or polysyllabic segment (e.g. capitalization, contraction, and vocal segregates), while others mark full sentences or the entire message (e.g. a series of exclamation points, letter graphics, or an initial parenthetical comment). Second, it is revealing that many of these features have an analogic structure: in some manner, they are like the tone they represent. For example, a user may employ more or fewer periods, more or fewer question marks to indicate degrees of pause or degrees of perplexity.
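The features catalogued in section 3 are, on their surface, patterns that a program could flag automatically. The sketch below is not part of this study; it is a minimal Python illustration of how a few of the described cues (repeated letters, contrastive capitalization, runs of punctuation, parenthetical tone comments) might be tagged in a message. The feature names and regular expressions are my own assumptions, and, as the surrounding discussion stresses, a real tagger would need context: capitalization counts only contrastively, and system glitches must be ruled out.

```python
import re

# Hypothetical feature tags; the paper proposes no algorithm. These regexes
# only mechanize the surface patterns described in section 3.
PATTERNS = {
    "vocal_spelling":     re.compile(r"\b\w*(\w)\1{2,}\w*\b"),   # weeeeell, breakkk
    "stress_caps":        re.compile(r"\b[A-Z]{2,}\b"),          # VERY, BROAD (also flags acronyms)
    "exclamation_run":    re.compile(r"!{2,}"),                  # !!! for intensity
    "question_run":       re.compile(r"\?{2,}"),                 # ?? / ??? for degrees of perplexity
    "period_pause":       re.compile(r"\.{3,}"),                 # ... as pause / juncture marker
    "tone_parenthetical": re.compile(r"\((?:she|he) said [^)]*\)", re.I),
}

def tag_paralanguage(message: str) -> dict:
    """Return each candidate paralinguistic feature with the spans it matched."""
    found = {}
    for name, pattern in PATTERNS.items():
        hits = [m.group(0) for m in pattern.finditer(message)]
        if hits:
            found[name] = hits
    return found

if __name__ == "__main__":
    print(tag_paralanguage("Weeeeell, the dates are STILL not fixed... are you PERPLEXED??"))
```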
Paralanguage in everyday conversation is highly analogic and represents feelings, moods and states of health which do not (apparently) lend themselves to the digital structure of words. Paralinguistic features in computer conferencing occur, often, at points of change in a message: change of pace, change of topic, change of tone. In addition, many of the features rely upon a contrastive structure to communicate meaning. That is, a message which is typed in all caps does not communicate greater intensity or stress. Capitalization must occur contrastively over one or two words in an otherwise normal sentence or over one or two sentences in a message which contains some normal capitalization.

Most paralinguistic features can have more than one meaning. Reviewed in isolation, a feature might indicate a relaxed tone, an intimate relation with the receiver, or simply sloppiness in composition. Readers must rely upon the surrounding context (both words and other paralinguistic features) to narrow the range of possible meanings. The intended receiver of a message, as well as an outsider who attempts to analyse transcripts, must cope with the interpretation of paralinguistic features. Initially, the reader must distinguish glitches in the system and unintended typing errors from intentional use of repetition, spacing, etc. Subsequently, the reader must examine the immediate context of the feature and compare the usage with similar patterns in the same message, in other messages by the composer, and/or in other messages by the general population of users.

5. DEVELOPMENT OF A CODE

The findings presented in this study are taken from a limited set of contexts. For this reason, they must be regarded as a first approximation of paralinguistic code structure in computer conferencing. Moreover, the findings do not suggest that a clear code exists for the community of users. Rather, the code appears to be in a stage of development and learning.

The study has helped to define some differences among users which appear to make a difference in the paralinguistic features they employ. In the corpus of transcripts examined, usage varied between new and experienced participants, as well as between infrequent and frequent participants. Generally, experienced and frequent participants employed more paralinguistic features. However, idiosyncratic patterns appear to be more important in determining usage. The findings serve more to define questions for subsequent study than to provide answers about user variations.

In addition, it is clear that the characteristics of the computer terminals (TI 745s, primarily), as well as system characteristics, provided many of the components or "bricks" with which paralinguistic features were constructed. For example, the repeat key on the terminal allowed users to create certain forms of graphics. Also, star keys, dollar signs, colons and other available keys were employed to communicate paralinguistic information. System terms to describe a mode of operation (e.g. notepad, scratchpad, message, conference) may also influence development of a code of usage by suggesting a more formal or informal exchange. Finally, it may be noted that early in their usage, some participants appeared to borrow formats from other media with which they were familiar (e.g. business letters, telegrams, and telephone conversations). Over time, patterns of usage converged somewhat. However, idiosyncratic variation remained strong.

6. CONCLUSION

A few conclusions can be drawn from this study.
First, the presence of paralinguistic features in computer conferencing and the effort by users to communicate more information than can be carried by the words themselves suggest that people feel it is important to be able to communicate tonal and expressive information. Second, it is not easy to communicate this information. Users must work in computer conferencing to communicate information about their feelings and state of health which naturally accompanies speech. While there does not appear to be a unified and identifiable code of paralinguistic features within conferencing systems or among users of the systems, the collective behavior of participants may be creating one.

REFERENCES

Crystal, David. Prosodic Systems and Intonation in English. Cambridge: Cambridge University Press, 1969.
Crystal, David and Davy, Derek. Investigating English Style. Bloomington: Indiana University Press, 1969.
Goffman, Erving. Frame Analysis. New York: Harper and Row, 1974.
Hiltz, Starr Roxanne and Turoff, Murray. The Network Nation. Reading, Massachusetts: Addison-Wesley, 1978.
Trager, George. "Paralanguage: A First Approximation," in Dell Hymes (ed.), Language in Culture and Society. New York: Harper and Row, 1964.
Expanding the Horizons of Natural Language Interfaces Phil Hayes Computer Science Department, Carnegie-Mellon University Pittsburgh, P A 15213, USA Abstract Current natural language interfaces have concentrated largely on determining the literal "meaning" of input from their users. While such decoding is an essential underpinning, much recent work suggests that natural language interlaces will never appear cooperative or graceful unless they also incorporate numerous non-literal aspects of communication, such as robust communication procedures. This toaper defends that view. but claims that direct imitation of human performance =s not the best way to =mplement many of these non-literal aspects of communication; that the new technology of powerful personal computers with integral graphics displays offers techniques superior to those of humans for these aspects, while still satistying human communication needs. The paper proposes interfaces based on a judicious mixture of these techniques and the still valuable methods of more traditional natural language interfaces. 1. Introduction Most work so far on natural language communication between man and machine has dealt with its literal aspects. That is. natural language interlaces have implicitly adopted the position that their user's input encodes a request for intormation of; action, and that their job is tO decode the request, retrieve the information, or perform the action, and provide appropriate output back to the user. This is essentially what Thomas [24J cnlls the Encoding-Decoding model of conversation. While literal interpretation is a basic underpinning of communication, much recent work in artificial intelligence, linguistics, and related fields has shown that it is tar from the whole story in human communication. For example, appropriate interpretation of an utterance depends on assumptions about the speaker's intentions, and conversely, the sl.)eaker's goals influence what is said (Hobbs [13J, Thomas [24]). People often make mistakes in speaking and listening, and so have evolvod conventions for affecting regalrs-(Schegloll et el. [20J). There must also be a way of regulating the turns of participants in a conversation (Sacks et el. [10t). This is just a sampling of what we will collectively call non literal ~lspects ol communication. The primary reason for using natural language in man-machine communication is to allow the user to express himsell mtturallyo and without hawng to learn a special language. However, it is becoming clear that providing for n,'ttural expression means dealing will1 tile non-literal well as the literal aspects ol communication; float the ability to interpret natural language literaUy does not in itself give a man-machine interlace the ability to communicate naturally. Some work on incorporating these non-literal aspects of communication into man-machine interfaces has already begun([6, 8, 9, 15, 21, 25]). The position I wish to stress in this paper is that natural language interfaces will never perform acceptably unless they deal with the non-literal as well as the literal aspects of communication: that without the non-literal aspects, they will always appear uncooperative, inflexible, unfriendly, and generally stupid to their users, leading to irritation, frustration, and an unwillingness to continue to be a user. This pos=tion is coming to be held fairly widely. 
However, I wish to go further and suggest that, in building non-literal aspects of communication into natural-language interfaces, we should aim for the most effective type of communication rather than insisting that the interface model human performance as exactly as possible. I believe that these two aims are not necessarily the same. especially given certain new technological trends (.lis(J ti ,'~s£~l below. Most attempts to incorporate non-literal aspects of communication into natural language interlaces have attempted to model human performance as closely as possible. The typical mode of communication in such an interface, in which system and user type alternately on a single scroll of pager (or scrolled display screen), has been used as an analogy to normal spoken human conversation in Wlllcll contmunicallon takes place over a similar half-duplex channel, i.e. a channel that only one party at a time can use witllout danger of confusion. Technology is outdating this model. Tl~e nascent generation of powerful personal computers (e.g. the ALTO ~23} or PERQ [18J) equipped with high-resolution bit-map graphics display screens and pointing devices allow the rapid display of large quantities of information and the maintenance of several independent communication channels for both output (division ol the screen into independent windows, highlighting, and other graphics techniques), and input (direction of keyboard input to different windows, poinling ,~put). I believe that this new technology can provide highly effective, natural language-based, communication between man and machine, but only il the half-duplex style of interaction described above is dropped. Rall~er than trying to imitate human convets~mon d=rectty, it will be more fruitful to use the capabilities of this new technology, whicl~ in some respects exceed those possessed by humans, to achieve the snme ends as the non-literal aspects of normal human conversation. Work by. for instance, Carey [31 and Hiltz 1121 shows how adaptable people aro to new communication situ~.~tlons, and there is every reason Io believe that people will adapt well to an interaction in which their communication ne~,ds are satisfied, even if they are satislied in a dilterent way than in ordinary human conversation. In the remainder of the paper I will sketch some human communication needs, and go on to suggest how they can be satisfied using the technology outlined above. 2. Non-Literal Aspects of Communication In this section we will discuss four human communication needs and tile non-literal aspects of communication they have given rise to: • non-grammatical utterance recognition • contextually determined interpretation • robust communication procedures • channel sharing The account here is based in part on work reported more fully in [8, 9]. Humans must deal with non-grammatical utterances in conversation simply because DePute produce them all the time. They arise from various sources: people may leave out or swallow words; they may start to say one thing, stop in the middle, and substitute something else; they may interrupt themselves to correct something they have just said; or they may simply make errors of tense, agreement, or vocabulary. For a combination of these and other reasons, it is very rare to see three consecutive grammatical sentences in ordinary conversation. Despite the ubiquity of ungrammaticality, it has received very little attention in the literature or from the implementers of natural-language interfaces. 
Exceptions include PARRY {17]. COOP [14], and interfaces produced by the LIFER [11] system. Additional work on parsing ungrammatical input has been done by Weischedel and Black [25], and 71 Kwasny and Sandheimer [15]. AS part of a larger project on user interfaces [ 1 ], we (Hayes and Mouradian [7]) have also developed a parser capable of dealing flexibly with many forms of ungrammaticality. Perhaps part of the reason that flexibility in Darsmg has received so little attent*on in work on natural language interlaces is thai the input is typed, and so the parsers used have been derived from those used to parse written prose. Speech parsers (see for example I101 or 126i) have always been much more Ilexible. Prose is normally quite grammatical simply because the writer has had time to make it grammatical. The typed input to a computer system is. produced in "real time" and is therefore much more likely to contain errors or other ungrammaticalities. The listener al any given turn in a conversation does not merely decode or extract the inherent "meaning" from what the speaker said. Instead. lie =nterprets the speaker's utterance in the light at the total avnilable context (see for example. Hoblo~ [13], Thomas [24J, or Wynn [27]). In cooperative dialogues, and computer interfaces normally operate in a cooperative situation, this contextually determined interpretation allows the participants considerable economies in what they say, substituting pronouns or other anaphonc forms for more complete descriptions, not explicitly requesting actions or information that they really desire, omitting part=cipants from descriphons of events, and leaving unsaid other information that will be "obvious" to the listener because of the Context shared by speaker and listener. In less cooperative situations, the listener's interpretations may be other than the speaker intends, and speakers may compensate for such distortions in the way they construct their utterances. While these problems have been studied extensively in more abstract natural language research (for just a few examples see [4, 5, 16]). little attention has been paid to them in more applied language wOrk. The work of Grosz [6J and Sidner [21] on focus of attention and its relation tO anaphora and ellipsis stand out here. along with work done in the COOP [14] system on checking the presuppositions of questions with 8 negative answer, in general, contextual interpretation covers most of the work in natural language proces~ng, and subsumes numerous currently intractable problems. It is only tractable in natural language interfaceS because at the tight constraints provided by the highly restricted worlds in which they operate. Just as in any other communication across a noisy channel, there is always a basic question in human conversstion of whether the listener has received the speaker's tltterance correctly. Humans have evolved robust communication conventions for performing such checks with considerable, though not complete, reliability, and for correcting errors when they Occur (see Schegloff {20i). Such conventions include: the speaker assuming an utterance has been heard correctly unless the reply contradicts this assumbtion or there is no reply at all: the speaker trying to correct his own errors himself: the listener incorporating h=s assumptions about a doubtful utterance into his reply; the listener asking explicitly for clarification when he is sufficiently unsure. 
This area of robust conimunlcatlon IS porhaps II~e non-literal aspect of commumcat~on mOSt neglected in natural language work. Just a few systems such as LIFEPl ItlJ and COOP [141 have paid even minimal attenhon Io it, Intereshngiy, it ~S perhaps the area in which Ihe new technology mentioned above has the most to oiler as we shall see. Fill[lily. the SllOken Dart of a humlin conversation takes place over what is essenllully a s=ngle shared channel. In oilier words, if more than one person talks at once. no one can understand anything anyone else is saying. There are marginal exceptions to this. bul by and large reasonable conversation can only be conducted if iust one person speaks at a time. Thus people have evolved conventions for channel sharing [19], so that people can take turns to speak. Int~. =.stmgly, if people are put in new communication situations in which the standard turn-taking conventions do not work well. they appear quite able to evolve new conventions [3i. AS noted earlier, computer interfaces have sidestepped this problem by making the interaction take place over a half-duplex channel somewhat analogous to the half-duplex channel inherent m sPeech, i.e. alternate turns at typing on a scroll el paper (or scrolled display screen). However, rather than prowding flexible conventions for changing turns, such =ntertaces typically brook no interrupt=arts while they are typing, and then when they are finished ins=st that the user type a complete input with no feedback (apart from character echoing), at which point the system then takes over the channel again. in the next Section we will examine how the new generation of interface technology can help with some of the problems we have raised. 3. Incorporating Non-Literal Aspects of Communication into User Interfaces If computer interfaces are ever to become cooperative and natural to use, they must incorporate nonoiiteral aspects of communication. My mum point in this section is that there =s no reason they should incorporate them in a way directly im=tative of humans: so long as they are incorporated m a way that humans are comfortable with. direct imitation is not necessary, indeed, direct imitation iS unlikely to produce satislactory mterachon. Given the present state of natural language processing end artificial intelligence in general, there iS no prospect in the forseeable future that interlaces will be able to emulate human performance, since this depends so much on bringing to bear larger quantities of knowledge than current AI techmques are able to handle. Partial success in such emulation zs only likely to ra=se lalse expectations in the mind of the user, and when these expectations are inevitably crushed, frustration will result. However, I believe that by making use of some of the new technology ment=oned earlier, interfaces can provide very adequate substitutes for human techniques for non-literal aspects of commumcation; substitutes that capitalzze on capabilities of computers that are not possessed by humans, bul that nevertheless will result m interaction that feels very natural to a human. Before giving some examples, let tis review the kind of hardware I am assuming. The key item is a bit-map graphics display capable of being tilled with information very quickly. The screen con be divided into independent windows to which the system can direct difterent streams of OUtput independently. Windows can be moved around on the screen, overlapped, and PODDed out from under a pile of other windoWs. 
The user has a pointing device with which he can posit=on a cursor to arbitrary points on the SCreen, plus, of course, a traditional keyboard. Such hardware ex=sts now and will become increasingly available as powerful personal computers such as the PERO [18J or LISP machine [2] come onto the market and start to decrease in price. The examDlas of the use of such hardware which follow are drawn in part from our current experiments m user interface research {1. 7] on similar hardware. Perhaps the aspect of communication Ihal can receive the most benefit from this type of hardware is robust communication. Suppose the user types a non.grammatical input to the system which the system's flexible parser is able to recognize if. say, it inserts a word and makes a spelling correction. Going by human convention the system would either have to ask the user to confirm exDlicdly if its correction was correct, tO cleverly incorDoram ~tS assumption into its next output, or just tO aaaume the correction without comment. Our hypothetical system has another option: it Can alter what the user just typed (possibly highlighting the words that it changed). This achieves the same effect as the second optiert above, but subst=tutes a technological trick for huma intelligencf' Again. if the user names a person, say "Smith", in a context where the system knows about several Smiths with different first names, the human oot=ons are either to incorporate a list of the names into a sentence (which becomes unwmldy when there are many more than three alternatives) or to ask Ior the first name without giving alternatives. A third alternative, possible only in this new technology, is to set up 8 window on the screen 72 with an initial piece of text followed by a list ol alternatives (twenty can be handled quite naturally this way). The user is then free to point at the alternative he intends, a much simpler and more natural alternative than typing the name. although there is no reason why this input mode should not be available as well in case the user prefers it. As mentioned in the previous section, contextually based interpretation is important in human conversation because at the economies of expression it allows. There is no need for such economy in an interface's output, but the human tendency to economy in this matter is somelhing that technology cannot change. The general problem of keeping track of focus of attention in a conversation is a dillicult one (see, for example, Grosz 161 and Sidner [221), but the type ol interface we are discussing can at least provide a helpful framework in which the current locus ol attention can be made explicit. Different loci at attention can be associated with different windows on tile screen, and the system can indicate what it thinks iS Ihe current lOCUS of .nttention by, say, making the border of the corresponding window dilferent from nil the rest. Suppose in the previous example IIlat at the time the system displays the alternative Smiths. the user decides that he needs some other information before he can make a selection. He might ask Ior this information in a typed request, at which point the system would set up a new window, make it the focused window, and display the requested information in it. At this point, the user could input requests to refine the new information, and any anaphora or ellipsis he used would be handled in the appropriate context. 
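Hayes describes this disambiguation window purely in interface terms. Purely as an illustration (not an implementation from the paper), the following Python sketch shows the kind of control flow such a window implies: when a description like "Smith" matches several known referents, the interface lists the candidates and accepts a pointing selection, here simulated by a typed index. All names and data are invented for the example.

```python
# Illustrative sketch only: a window-based disambiguation step of the kind
# proposed in the text, with invented data and a text stand-in for pointing.
PEOPLE = [
    {"first": "Alice", "last": "Smith", "dept": "Accounting"},
    {"first": "Bob",   "last": "Smith", "dept": "Shipping"},
    {"first": "Carol", "last": "Smith", "dept": "Research"},
]

def resolve_reference(last_name):
    """Resolve a possibly ambiguous name, opening a 'window' of choices if needed."""
    candidates = [p for p in PEOPLE if p["last"] == last_name]
    if len(candidates) == 1:
        return candidates[0]                  # unambiguous: no window needed
    if not candidates:
        return None
    # Ambiguous: display the alternatives in their own window and let the
    # user point at one (simulated here by typing an index number).
    print(f"Which {last_name} do you mean?")
    for i, person in enumerate(candidates, 1):
        print(f"  {i}. {person['first']} {person['last']} ({person['dept']})")
    choice = input("point> ")
    if choice.isdigit() and 1 <= int(choice) <= len(candidates):
        return candidates[int(choice) - 1]
    return None
```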
Representing.contexts explicitly with an indication of what the system thinks is the current one can also prevent confusion. The system should try to follow a user's shifts of focus automatically, as in the above example. However, we cannot expect a system of limited understanding always to track focus shifts correctly, and so it is necessary for the system to give explicit feedback on what it thinks the shift was. Naturally, this implies that the user should be able to change focus explicitly as well as implicitly (probably by pointing to the appropriate window). Explicit representation of loci can also be used to bolster a human's limited ability to keep track of several independent contexts. In the example above, it would not have been hard lot the user to remember why he asked for the additional information and to return and make the selection alter he had received that information. With many more than two contexts, however, people quickly lose track of where they are and what they are doing. Explicit representation of all the possibly active tasks or contexts can help a user keep things straight. All the examples of how sophisticated interface hardware can help provide non-literal aspects of communication have depended on the ability of the underlying system to produce pos~bly large volumes of output rapidly at arbitrary points on the screen. In effect, this allows the system multiple output channels independent of the user's typed input, which can still be echoed even while the system is producing other output, Potentially, this frees interaction over such an interface from any turn-taking discipline. In practice, some will probably be needed to avoid confusing the user with too many things going on at once, but it can probably be looser than that found in human conversations. As a final point, I should stress that natural language capability is still extremely valuable for such an interface. While pointing input is extremely fast and natural when the object or operation that the user wishes tO identify is on the screen, it obviously cannot be used when the information is not there. Hierarchical menu systems, in which the selection of one item in a menu results in the display of another more detailed menu, can deal with this problem to some extent, but the descriptive power and conceptual operators ol nalural language (or an artificial language with s=milar characteristics) provide greater flexit)ility and range of expression. II the range oI options =.~ larg~;, t)ul w,dl (tiscr,nm;de(I, il =s (llh.~l easier to specify a selection by description than by pointing, no matter how ctevedy tile options are organized. 4. Conclusion In this paper, 1 have taken the position that natural language interfaces to computer systems will never be truly natural until they include non-literal as web as literal aspects of communication. Further, I claimed that in the light of the new technology of powerful personal computers with integral graphics displays, the best way to incorporate these non-literal aspects was nol to imitate human conversational patterns as closely as possible, but to use the technology in innovative ways to perform the same function as the non-literal aspects of communication found in human conversation. In any case, I believe the old-style natural language interfaces in which the user and system take turns to type on a single scroll of paper (or scrolled display screen) are doomed. 
The new technology can be used, in ways similar to those outlined above, to provide very convenient and attractive interfaces that do not deal with natural language. The advantages of this type ol interface will so dominate those associated with the old-style natural language interfaces that continued work in that area will become ol academic interest only. That is the challenge posed by the new technology for natural language interfaces, but it also holds a promise. The promise is that a combination of natural language techniques with the new technology will result in interfaces that will be truly natural, flexible, and graceful in their interaction. The multiple channels of information flow provided by the new technology can be used to circumvent many of the areas where it is very hard to give computers the intelligence and knowledge to perform as well as humans. In short, the way forward for natural language interfaces is not to strive for closer, but still highly imperfect, imitation of human behaviour, but tO combine the strengths of the new technology with the great human ability to adapt to communication environments which are novel but adequate for their needs. References 1. Ball, J. E. and Hayes, P. J. Representation of Task-independent Knowledge in a Gracefully Interacting User Interface, Tech. Rept., Carnegie-Mellon UniverSity Computer Science Department, 1980. 2. Bawden. A, et al. Lisp Machine Project Report. AIM 444, MIT AI Lab, Cambridge, Mass., August, 1977. 3. Carey, J. "A Primer on Interactive Television." J. University Film Assoc. XXX, 2 (1978), 35-39. 4. Charniak, E. C. Toward a Model of Children's Story Comprehension. TR-266, MIT AI Lab, Cambridge, Mass., 1972. 5. Cullingford. R. Script Application: Computer Understanding of Newspaper Stories. Ph.D. Th., Computer Science Dept., Yale University, 1978. 6. Grosz, B. J. The Representation and Use of Focus in a System for Understanding Dialogues. Proc. Fifth Int. Jr. Conf. on Artificial Intelligence, MIT, 1977, pp. 67-76. 7. Hayes, P. J. and Mouradian, G. V. Flexible Parsing. Proc. of 18th Annual Meeting of the ASSOC. for Comput. Ling., Philadelphia, June, 1980. 8. Hayes, P. J., and Reddy, R. Graceful Interaction in Man-Machine Communication. Proc. Sixth Int. Jr. Conf. on Artificial Intelligence, Tokyo, 1979, pp. 372-374. 9. Hayes, P. J., and Reddy, R. An Anatomy of Graceful Interaction in Man-Machine Communication. Tech. report, Computer Science Department, Carnegie-Mellon University, 1979. 73 10. Hayes-Roth, F., Erman, L. D.. Fox. M., and Mostow, D. J. Syntactic Processing in HEARSAY-H Speech Understanding Systems. Summary Of Results at the Five-Year Research Effort at Carnegie-Mellon University, Carnegie-Mellon Universdy Computer Science Department, 1976. 11. Hendr=x, G. G. Human Engineering for Applied Natural Language Processing Proc. Fifth Int Jr. Conl. on Artificial Intelligence, MIT, 1977, DD. 183-191. 1 2. Hiltz, S. R. Johnson. K.. Aronovitch, C., and Turoft. M. Face to Face vs. Computerized Conterences: A Controlled Experiment. unpublished mss. 13. Hobbs. J. R. ConversuhOn as Planned Behavior. Technical Note 203. Artificial Intelligence Center, SRi International, Menlo Park, Ca.. 1979. 14. KaDlan. S.J. Cooperative Responses Irorn a PortaDie Natural Language Data Base Query System. Ph.D. Th.. Dept. of Computer and. Inlormation Science. Univers, ty o! Pennsylvania. Philadelphia. 1979. 15. Kwasny. S. C. and Sondheimer. N. K. Ungrammaticatity and Extra-GrammatJcality in Natural Language Understanding Systems. Pro¢. 
of 17th Annual Meeting of the Assoc. tot Comgut. Ling.. La Jolla. Ca.. August. 1979. I~P. 19-23. 16. Levin. J. A.. and Moore. J. A. "Dialogue Games: Meta-Commun=cation Structures for Natural Language Understanding." Cognitive Scmnce 1.4 (1977). 395-420. 17. Parkison. R. C.. Colby. K. M.. and Faught. W.S. "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Art#icaal Intelligence 9 (1977). 111-134. 18. PERQ. Three Rivers Computer Corl~.. 160 N. Craig St.. Pittsburgh. PA 15213.. 19. Sacks. H.. Schegloff. E. A.. and Jefferson. G. "A Siml~t Semantics for the Organization of Turn-Taking tar Conversation." Language 50.4 (1974). 696-735. 20. Schegloff. E. A.. Jefferson. G.. and Sacks. H. "The Preference for Self-Correction in the Organization of Repair in Conversation." Language 53.2 (1977). 361-382. 21. Sidner. C. L. A ProgreSS Report on the Discourse and Reference Components of PAL. A. I. Memo. 468. MIT A. I. Lab.. 1978. 22. Sidner. C. L. Towards a Computational Theory of Definite Anaphore Comprehension in English Discourse. TR 537. MIT AI Lab. Cambridge. Mass.. 1979. 23. Thacker~ C.P.. McCreight. E.M. Lamgson. B.W.. Sproull. R.F.. and Boggs. D.R. Alto: A Dersonal computer, in Computer Structures: Readings ancf Examples. McGraw-Hill. 1980. Edited by D. S~ewiorek. C.Go Bell. and A. Newell. second edition, in press. 24. Thomas, J. C. "A Design-Interpretation of Natural English with Applications to Man-Computer In|erection." Int. J. Man.Machine Studies t0 (1978). 651-668. 25. Welschedel. R M. and Black. J. Responding Io Potentially Unparseable Sentences. Tech Rapt. 79/3. Dept. of Computer and Intormatlon Sciences. Universaty o! Delaware. 1979. 26. Woods. W. A.. Bates. M.. Brown. G.. Bruce. B.. Cook. C.. Klovsted. J., Makhoul. J.. Nash-Webber, B.. Schwartz. R.. Wall, J.. and Zue, V. Speech Understanding Systems - Final Technical Report. Tech. Rept. 3438. Bolt, Beranek. and Newman, Inc., 1976. 74
UNDERSTANDING SCENE DESCRIPTIONS AS EVg~NT SIMULATIONS I David L. Waltz University of Illinois at Urbana-Champaign The language of scene descriptions 2 must allow a hearer to build structures of schemas similar (to some level of detail) to those the speaker has built via perceptual processes. The understanding process in general requires a hearer to create and run "event ~ " to check the consistency and plausibility of a "picture" constructed from a speaker's description. A speaker must also run similar event simulations on his own descriptions in order to be able to judge when the hearer has been given sufficient information to construct an appropriate "picture", and to be able to respond appropriately to the heater's questions about or responses to the scene description. In this paper I explore some simple scene, description examples in which a hearer must make judgements involving reasoning about scenes, space, common-sense physics, cause-effect relationships, etc. While I propose some mechanisms for dealing with such scene descriptions, my primary concern at this time is tO flesh out our understanding of just what the mechanisms must accomplish: what information will be available to them and what information must be found or generated to account for the inferences we know are actually made. 1. THE PROBLEM AREA An entity (human or computer) that could be said to fully understand scene descriptions would have to have a broad range of abilities. For example, it would have to be able to make predictions about likely futures; to judge certain scene descriptions to be implausible or impossible; to point to items in a scene, given a description of the scene; and to say whether or not a scene description corresponded to a given scene experienced through other sensory modes. 3 In general, then, the entity would have to have a sensory system that it could use to generate scene representations to be compared with scene representations it had generated on the basis of natural language input. In this paper I concentrate on I) the problems of making appropriate predictions and inferences about described scenes, and 2) the problem of judging when scene descriptions are physically implausible or impossible. I do not consider directly problems that would require a vision system, problems such as deciding whether a linguistic scene description is appropriate for a perceived scene, or generating lingulstic scene descriptions from visual input, or learning scene description lar4uage through experience. I also do not consider speech act aspects of scene descriptions in much detail here. I believe that the principles of speech acts transcend topics of language; I am not convinced that the study of scene descriptions would lead to major insights into speech acts that couldn't be as well gained through the study of language in other domains. IThis work was supported Ln part oy the Office of Naval Research under Contract ONR-NO0014-75-C-0612 with the University of Illinois, and was supported in part by the Advanced Research Projects Agency of the Department of Defense and monitored by ONR under Contract No. N0001~-77-C-O378 with Bolt Beranek and Newman Inc. 2The term "scene" is intended to coyer both static scenes and dynamic scenes (or events) that are bounded in space and time. 3In general ! believe that many of the event simulation procedures ought to involve kinesthetic and tactile information. 
I by no means intend the simulations to be only visual, although we have explored the A1 aspects of vision far more than those of any other senses. I do believe, however, that the study of scene descriptions has a considerable bearing on other areas of language analysis, including syntax, semantics, and pragmatics. For example, consider the following sentences: ($I) I saw the man on the hill with my own eyes. (32) I saw the man on the hill with a telescope. ($3) I saw the man on the hill with a red ski mask. The well-known sentence $2 is truly ambiguous, but $I and $3, while likely to be treated as syntactically similar to $2 by current parsers, are each relatively unambiguous; I would like to be able to explain how a system can choose the appropriate parsings in these cases, as well as how a sequence of sentences can add constraints to a single scene-centered representation, and aid in disamDiguation. For example, if given the pair of sentences: ($2) I saw the man on the hill with a telescope. ($4) I cleaned the lens to get a better view of him. a language understanding system should be able to select the appropriate reading of $2. I would also like to explore mechanisms that would be appropriate for judging that ($5) My dachshund bit our mailman on the ear. requires an explanation (dachshunds could not jump high enough to reach a mailman's ear, and there is no way to choose between possible scenarios which would get the dachsund high enough or the mailman low enough for the biting to take place). The mechanisms must also be able to judge that the sentences: ($6) My doberman bit our mailman on the ear. ($7) My dachshund bit our gardener on the ear. ($8) My dachshund bit our mailman on the leg. do not require explanations. A few words about the importance of explanation are in order here. If a program could judge correctly which scene descriptions were plausible and wnich were no5, but could not explain why it made the judgements it did, I think I would feel profoundly dissatisfied with and suspicious of the program as a model of language comprehension. A program ought to consider the "right options" and decide among them for the "right reasons"a if it is to be taken seriously as a model of cognition. ! will argue that scene descriptions are often most naturally represented by structures which are, at least in part, only awkwardly viewed as propositional; such representations include coordinate systems, trajectories, and event-simulating mechanisms, i.e. procedures w~ich set up models of objects, interactions, and constraints, "set them in motion", and "watch what happens". I suggest that event simulations are supported by mechanisms that model common-sense physics and human behavior I will also argue that there is no way to put limits on the degree of detail which may have to be considered in constructing event simulations; virtually any feature of an object can in the right circumstances become centrally important. 4An explanation need not be in natural language; for example, I probably could be convinced via traces of a program's operation that it had been concerned with the right issues in judging scene plausibility. 2. THE NATURE OF SCENE DESCRIPTIONS I have found it useful to distinguish between static and dynamic scene descriptions. Static scene descriptions express spatial relations or actions in progress, as in: ($9) The pencil is on the desk. ($I0) A helicopter is flying overhead. ($11) My dachshund was biting the mailman. 
Sequences of sentences can also be used to specify a single static scene description, a process I will refer to as "detail addition". As an example of detail addition, consider the following sequence of sentences (taken from Waltz & Boggess [1]):

($12) A goldfish is in a fish bowl.
($13) The fish bowl is on a stand.
($14) The stand is on a desk.
($15) The desk is in a room.

A program written by Boggess [2] is able to build a representation of these sentences by assigning to each object mentioned a size, position, and orientation in a coordinate system, as illustrated in figure 1. I will refer to such representations as "spatial analog models" (in [1] they were called "visual analog models"). Objects in Boggess's program are defined by giving values for their typical values of size, weight, orientation, surfaces capable of supporting other objects, as well as other properties such as "hollow" or "solid", and so on.

Figure 1. A "visual analog model" of $12-$15.

Dynamic scene descriptions can use detail addition also, but more commonly they use either the mechanisms of "successive refinement" [3] or "temporal addition". "Temporal addition" refers to the process of describing events through a series of time-ordered static scene descriptions, as in:

($16) Our mailman fell while running from our dachshund.
($17) The dachshund bit the mailman on the ear.

"Successive refinement" refers to a process where an introductory sentence sets up a more or less prototypical event which is then modified by succeeding sentences, e.g. by listing exceptions to one's ordinary expectations of the prototype, or by providing specific values for optional items in the prototype, or by similar means. The following sentences provide an example of "successive refinement":

($18) A car hit a boy near our house.
($19) The car was speeding eastward on Main Street at the time.
($20) The boy, who was riding a bicycle, was knocked to the ground.

3. THE GOALS OF A SCENE UNDERSTANDING SYSTEM

What should a scene description understanding system do with a linguistic scene description? Basically: 1) verify plausibility, 2) make inferences and predictions, 3) act if action is called for, and 4) remember whatever is important. For the time being, I am only considering 1) and 2) in detail. In order to carry out 1) and 2), I would like my system to turn scene descriptions (static or dynamic) into a time sequence of "expanded spatial analog models", where each expanded spatial analog model represents either 1) a set of spatial relationships (as in $12-$15), or 2) spatial relationships plus models of actions in progress, chosen from a fairly large set of primitive actions (see below), or 3) prototypical actions that can stand for sequences of primitive actions. These prototypical actions would have to be fitted into the current context, and modified according to the dictates of the objects and modifiers that were supplied in the scene description. The action prototype would have associated selection restrictions for objects; if the objects in the scene description matched the selection restrictions, then there would be no need to expand the prototype into primitives, and the "before" and "after" scenes (similar to pre- and post-conditions) of the action prototype could be used safely. If the selection restrictions were violated by objects in the scene, or if modifiers were present, or if the context did not match the preconditions, then it would have to be possible to adapt the action prototype "appropriately".
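To make the notion of a spatial analog model concrete, the sketch below places objects with default sizes into a shared coordinate frame from "X is in Y" / "X is on Y" assertions, in the spirit of Boggess's program. It is my own minimal reconstruction, not the actual program; the default dimensions and the simple containment test are invented. A size-and-hollowness check of this kind is also what section 4.1 relies on to reject the writing-implement reading of "the box is in the pen".

```python
# Minimal sketch (not Boggess's program): objects get default sizes and are
# placed in one coordinate system from "in"/"on" relations.
DEFAULTS = {   # (width, depth, height) in cm, plus whether the object is hollow
    "room":      {"size": (400, 400, 250), "hollow": True},
    "desk":      {"size": (150, 75, 75),   "hollow": False},
    "stand":     {"size": (30, 30, 20),    "hollow": False},
    "fish bowl": {"size": (25, 25, 25),    "hollow": True},
    "goldfish":  {"size": (5, 2, 3),       "hollow": False},
}

def place(scene, obj, relation, ref):
    """Add obj to the scene 'in' or 'on' the already-placed ref object."""
    rx, ry, rz = scene[ref]["position"]
    rsize, osize = DEFAULTS[ref]["size"], DEFAULTS[obj]["size"]
    if relation == "in":
        if not DEFAULTS[ref]["hollow"] or any(o > r for o, r in zip(osize, rsize)):
            raise ValueError(f"implausible: {obj} cannot fit inside {ref}")
        pos = (rx, ry, rz)                    # roughly centred in the container
    elif relation == "on":
        pos = (rx, ry, rz + rsize[2])         # resting on the supporting surface
    else:
        raise ValueError(f"unknown relation: {relation}")
    scene[obj] = {"position": pos, "size": osize}
    return scene

# Build the model for ($12)-($15); relations are processed reference-first so
# that each supporting or containing object is already placed.
scene = {"room": {"position": (0, 0, 0), "size": DEFAULTS["room"]["size"]}}
for obj, rel, ref in [("desk", "in", "room"), ("stand", "on", "desk"),
                      ("fish bowl", "on", "stand"), ("goldfish", "in", "fish bowl")]:
    place(scene, obj, rel, ref)
```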
It would also have to be possible to reason about the action without actually running the event simulation sequence underlying it in its entirety; sections that would have to be modified, plus before and after models, might be the only portions of the simulation actually run. The rest of the prototype could be treated as a kind of "black box" with known input-output characteristics.

I have not yet found a principled way to enumerate the primitives mentioned above, but I believe that there should be many of them, and that they should not necessarily be non-overlapping; what is most important is that they should have precise representations in spatial analog models, and be capable of being used to generate plausible candidates for succeeding spatial analog models. Some examples of primitives I have looked at and expect to include are: break-object-into-parts, mechanically-join-parts, hit, touch, support, translate, fall.

As an example of the expansion of a non-primitive action into primitive actions, consider "bite x y"; its steps are: 1) [set-up] instantiate x (5) as a "biting-thing" -- defaults = mouth, teeth, jaws of an animate entity; 2) instantiate y as "thing-bitten"; 3) [before] x is open and does not touch y and x partially surrounds y (i.e. y is not totally inside x); 4) x is closing on y; 5) [action] x is touching y, preferably in two places on opposite sides of y, and x continues to close; 6) x deforms y; 7) [after] x is moving away from y, and no longer touches y.

Finally, lest it should not be clear from the sketchiness of the comments above, I am by no means satisfied yet with these ideas as an explanation of scene description understanding, although I am confident that this research is headed in the right general direction.

4. PLAUSIBILITY JUDGEMENT

The basic argument I am advancing in this paper is this: it is essential in understanding scene descriptions to set up and run event simulations for the scenes; we judge the plausibility (or possibility), meaningfulness, and completeness of a description on the basis of our experience in attempting to set up and run the simulation. By studying cases where we judge descriptions to be implausible we can gain insight into just what is done routinely during the understanding of scene descriptions, since these cases correspond to failures in setting up or running event simulations.

(5) By "instantiate an X" I mean assign X a physical place, posture, orientation, etc., or retrieve a pointer to such an instantiation, if it is a familiar one. Thus "instantiate a baby" would retrieve a pointer, whereas "instantiate a two-headed dog" would probably have to attempt to generate one on the spot. Note that this process may itself fail, i.e. that an entity may not be able to "imagine" such an object.

As the examples below illustrate, sometimes an event simulation simply cannot be set up because information is missing, or several possible "pictures" are equally plausible, or the objects and actions being described cannot be fitted together for a variety of reasons, or the results of running the simulation do not match our knowledge of the world or the following portions of the scene description, and so on. It is also important to emphasize that our ultimate interest is in being able to succeed in setting up and running event simulations; therefore I have for the most part chosen ambiguous examples where at least one event simulation succeeds.
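To show the shape such an action prototype might take, here is a small Python sketch of how the "bite x y" expansion above could be encoded as data: roles with defaults, a before/after pair, and an ordered body of primitive steps. The representation is my own guess at one possible encoding, not Waltz's; the predicate names simply echo the step descriptions.

```python
# Hypothetical encoding of the "bite x y" prototype sketched above.
# Each step is a predicate over the roles x (biting-thing) and y (thing-bitten).
BITE = {
    "roles": {
        "x": {"default": "mouth/teeth/jaws of an animate entity"},
        "y": {"default": "any physical object"},
    },
    "before": ["open(x)", "not-touching(x, y)", "partially-surrounds(x, y)"],
    "body": [
        "closing-on(x, y)",
        "touching(x, y, two-opposite-sides)",
        "deforms(x, y)",
    ],
    "after": ["moving-away(x, y)", "not-touching(x, y)"],
}

def applicable(prototype, scene_facts):
    """Treat the prototype as a 'black box': check only its before-conditions
    against the current expanded spatial analog model (here, a set of facts)."""
    return all(condition in scene_facts for condition in prototype["before"])

# Example: a scene in which the bite prototype could be used without expansion.
facts = {"open(x)", "not-touching(x, y)", "partially-surrounds(x, y)"}
print(applicable(BITE, facts))   # True
```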
4.1 TRANSLATING AN OLD EXAMPLE INTO NEW MECHANISMS

Consider Bar-Hillel's famous sentence (6) [4]:

($21) The box is in the pen.

Plausibility judgement is necessary to choose the appropriate reading, i.e. that "pen" = playpen. Minor extensions to Boggess's program could allow it to choose the appropriate referent for pen. Pen1 (the writing implement) would be defined as having a relatively fixed size (subject to being overridden by modifiers, as in "tiny pen" or "twelve inch pen"), but the size of pen2 (the enclosure) would be allowed to vary over a range of values (as would the size of box). The program could attempt to model the sentence by instantiating standard (default-sized) models of box, pen1, and pen2, and attempting to assign the objects to positions in a coordinate system such that the box would be in pen1 or pen2. Pen1 could not take part in such a spatial analog model both because of pen1's rigid size and the extreme shrinkage that would be required of box (outside box's allowed range) to make it smaller than the pen1, and also because pen1 is not a container (i.e. hollow object). Pen2 and box prototypes could be fitted together without problems, and could thus be chosen as the most appropriate interpretation.

4.2 A SIMPLE EVENT SIMULATION

Extending Boggess's program to deal with most of the other examples given in this paper so far would be harder, although I believe that $1-$4 could be handled without too much difficulty. Let us look at $2 and $4 in more detail:

($2) I saw the man on the hill with a telescope.
($4) I cleaned the lens to get a better view of him.

After being told $2, a system would either pick one of the possible interpretations as most plausible, or it might be unable to choose between competing interpretations, and keep them both. When it is told $4, the system must first discover that "the lens" is part of the telescope. Having done this, $4 unambiguously forces the placement of the speaker to be close enough to the telescope to touch it. This is because all common interpretations of clean require the agent to be close to the object. At least two possible interpretations still remain: 1) the speaker is distant from the man on the hill, and is using the telescope to view the man; or 2) the speaker, telescope, and man on the hill are all close together. The phrase "to get a better view of him" refers to the actions of the speaker in viewing the man, and thus makes interpretation 1) much more likely, but 2) is still conceivable. The reasoning necessary to choose 1) as most plausible is rather subtle, involving the idea that telescopes are usually used to look at distant objects. In any case, the proposed mechanisms should allow a system to discard an interpretation of $2 and $4 where the man on the hill had a telescope and was distant from the speaker.

(6) A central figure in the machine translation effort of the late 50's and early 60's, Bar-Hillel cited this sentence in explaining why machine translation was impossible. He subsequently quit the field.

4.3 SIMULATING AN IMPLAUSIBLE EVENT

Let us also look again at $5:

($5) My dachshund bit our mailman on the ear.

and be more specific about what an event simulation should involve in this rather complex case. The event simulation set-up procedures I envision would execute the following steps: 1) instantiate a standard mailman and dachshund in default positions (e.g.
both standing on level ground outdoors on a residential street with no special props other than the mailman's uniform and mailbag); 2) analyze the preconditions for "bite" to find that they require the dog's mouth to surround the mailman's ear; 3) see whether the dachshund's mouth can reach the mailman's ear directly (no); 4) see whether the dog can stretch high enough to reach (no; this test would require an articulated model of the dog's skeleton or a prototypical representation of a dog on its hind legs); 5) see whether a dachshund could jump high enough (no; this step is decidedly non-trivial to implement! (7)); 6) see whether the mailman ordinarily gets into any positions where the dog could reach his ear (no); 7) conclude that the mailman could not be bitten as stated unless default sizes or movement ranges are relaxed in some way. Since there is no clearly preferred way to relax the defaults, more information is necessary to make this an "unambiguous" description. I have quoted "unambiguous" because the sentence $5 is not ambiguous in any ordinary sense, lexically or structurally. What is ambiguous are the conditions and actions which could have led up to $5.

Strangely enough, the ordinary actions of mailmen (checked in step 6) seem relevant to the judgement of plausibility in this sentence. As evidence for this analysis, note that the substitution of "gardener" for "mailman" turns ($5) into a sentence that can be simulated without problems. I think that it is significant that such peripheral factors can be influential in judging the plausibility of an event. At the same time, I am aware that the effect in this case is rather weak, that people can accept this sentence without noting any strangeness, so I do not want to draw conclusions that are too strong.

(7) Although one could do it by simply including in the definition of a dog information about how high a dog can jump, e.g. no higher than twice the dog's length. However, I consider this something of a "hack", because it ignores some other problems, for example the timing problem a dog would face in biting a small target like a person's ear at the apex of its highest jump. I would prefer a solution that could, if necessary, perform an event simulation for step 5), rather than trust canned data.

4.4 MAKING INFERENCES ABOUT SCENES

Consider the following passage:

(P1) You are at one end of a vast hall stretching forward out of sight to the west. There are openings to either side. Nearby, a wide stone staircase leads downward. The hall is filled with wisps of white mist swaying to and fro almost as if alive. A cold wind blows up the staircase. There is a passage at the top of the dome behind you. Rough stone steps lead up the dome.

Given this passage (taken from the computer game "Adventure") one can infer that it is possible to move to the west, north, south, or east (up the rough stone steps). Note that this information is buried in the description; in order to infer this information, it would be useful to construct a spatial analog model, with "you" facing west, and the scene features placed appropriately. In playing Adventure, it is also necessary to remember salient features of the scenes described so that one can recognize the same room later, given a passage such as:

(P2) You're in hall of mists. Rough stone steps lead up the dome. There is a threatening little dwarf in the room with you.

Adventure can only accept a very limited class of commands from a player at any given point in the game.
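The seven checks above are easy to imagine as a series of relaxable constraint tests. The fragment below is an illustrative Python sketch (mine, not the paper's) of steps 3)-5) only: it compares a biter's reach, under increasingly generous postures, against the height of the target body part. All of the default measurements, and the crude "often crouching" adjustment standing in for step 6), are invented assumptions.

```python
# Illustrative only: default sizes are invented; a real system would draw them
# from the object definitions in the spatial analog model.
ANIMALS = {"dachshund": {"shoulder_height": 0.22, "body_length": 0.60},
           "doberman":  {"shoulder_height": 0.70, "body_length": 1.10}}
PEOPLE  = {"mailman":  {"ear_height": 1.60, "leg_height": 0.40},
           "gardener": {"ear_height": 1.60, "leg_height": 0.40,
                        "often_crouching": True}}   # kneels while weeding

def can_bite(dog, person, part):
    target = PEOPLE[person][f"{part}_height"]
    if PEOPLE[person].get("often_crouching"):
        target *= 0.4                           # step 6: habitual postures lower the target
    reach   = ANIMALS[dog]["shoulder_height"]   # step 3: direct reach
    rearing = ANIMALS[dog]["body_length"] * 1.3 # step 4: stretching on hind legs
    jumping = ANIMALS[dog]["body_length"] * 2.0 # step 5: the "hack" of footnote 7
    if any(height >= target for height in (reach, rearing, jumping)):
        return True
    return False                                # step 7: implausible without relaxing defaults

for case in [("dachshund", "mailman", "ear"),   # ($5) implausible, needs explanation
             ("doberman", "mailman", "ear"),    # ($6) plausible
             ("dachshund", "gardener", "ear"),  # ($7) plausible
             ("dachshund", "mailman", "leg")]:  # ($8) plausible
    print(case, can_bite(*case))
```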
It is only possible to play the game because one can make reasonable inferences about what actions are possible at a given point, i.e. take an object, move in some direction, throw a knife, open a door, etc. While I am not quite sure what to make of my observations about this example, I think that games such as Adventure are potentially valuable tools for gathering information about the kinds of spatial and other inferences people make about scene descriptions.

4.5 MIRACLES AND WORLD RECORDS

With some sentences there may be no plausible interpretation at all. In many of the examples which follow, it seems unlikely that we actually generate (at least consciously) an event simulation. Rather it seems that we have some shortcuts for recognizing that certain events would have to be termed "miraculous" or difficult to believe.

(S22) My car goes 2000 miles on a tank of gas.
(S23) Mary caught the bullet between her teeth.
(S24) The child fell from the 10th story window to the street below, but wasn't hurt.
(S25) We took the refrigerator home in the trunk of our VW Beetle.
(S26) She had given birth to 25 children by the age of 30.
(S27) The robin picked up the book and flew away with it.
(S28) The child chewed up and swallowed the pair of scissors.

The Guinness Book of World Records is full of examples that defy event simulation. How one is able to judge the plausibility of these (and how we might get a system to do so) remains something of a mystery to me.

The problem of recognizing obviously implausible events rapidly is an important one to consider for dealing with pronouns. Often we choose the appropriate referent for a pronoun because only one of the possible referents could be part of a plausible event if substituted for the pronoun. For example, "it" must refer to "milk", not "baby", in S29:

(S29) I didn't want the baby to get sick from drinking the milk, so I boiled it.

5. THE ROLE OF EVENT SIMULATION IN A FULL THEORY OF LANGUAGE

I suggested in section 3 that a scene description understanding system would have to 1) verify the plausibility of a described scene, 2) make inferences or predictions about the scene, 3) act if action is called for, and 4) remember whatever is important. As pointed out in section 4.5, event simulations may not even be needed for all cases of plausibility judgement. Furthermore, scene descriptions constitute only one of many possible topics of language. Nonetheless, I feel that the study of event simulation is extremely important.

5.1 WHY ARE SIMPLE PHYSICAL SCENES WORTH CONSIDERING?

For a number of reasons, methodological as well as theoretical, I believe that it is not only worthwhile but also important to begin the study of scene descriptions with the world of simple physical objects, events, and physical behaviors with simple goals. 1) Methodologically it is necessary to pick an area of concentration which is restricted in some way. The world of simple physical objects and events is one of the simplest worlds that links language and sensory descriptions. 2) As argued in the work of Piaget [5], it seems likely that we come to comprehend the world by first mastering the sensory/motor world, and then by adapting and building on our schemata from the sensory/motor world to understand progressively more abstract worlds. In the area of language, Jackendoff [6] offers parallel arguments. Thus the world of simple physical objects and behaviors has a privileged position in the development of cognition and language.
3) Few words in English are reserved for describing the abstract world only. Most abstract words also have a physical meaning. In some cases the physical meanings may provide important metaphors for understanding the abstract world, while in other cases the same mechanisms that are used in the interpretation of the physical world may be shared with mechanisms that interpret the abstract world. 4) I would like the representations I develop for linguistic scene descriptions to be compatible with representations I can imagine generating with a vision system. Thus this work does have an indirect bearing on vision research: my representations characterize and put constraints on the types and forms of information I think a vision system ought to be able to supply. 5) Even in the physical domain, we must come to grips with some processes that resemble those involved in the generation and understanding of metaphor: matching, adaptation of schemata, modification of stereotypical items to match actual items, and the interpretation of items from different perspectives.

5.2 SCENE DESCRIPTIONS AND A THEORY OF ACTION

I take it as evident that every scene description, indeed every utterance, is associated with some purpose or goal of a speaker. The speaker's purpose affects the organization and order of the speaker's presentation, the items included and the items omitted, as well as word choice and stress. Any two witnesses of the same event will in general give accounts of it that differ on every level, especially if one or both witnesses were participants or has some special interest in the cause or outcome of the event. For now I have ignored all these factors of scene description understanding; I have not attempted an account of the deciphering of a speaker's goals or biases from a given scene description. I have instead considered only the propositional content of scene description utterances, in particular the issue of whether or not a given scene description could plausibly correspond to a real scene. Until we can give an account of the judgement of plausibility of description meanings, we cannot even say how we recognize blatant lies; from this perspective, understanding why someone might lie or mislead, i.e. understanding the intended effect of an utterance, is a secondary issue.

There seems to me to be a clear need for a "theory of human action", both for purposes of event simulation and, more importantly, to provide a better overall framework for AI research than we currently have. While no one to my knowledge still accepts as plausible the "big switch" theory of intelligent action [7], most AI work seems to proceed on the "big switch" assumptions that it is valid to study intelligent behavior in isolated domains, and that there is no compelling reason at this point to worry about whether (let alone how) the pieces developed in isolation will ultimately fit together.

5.3 ARE THERE MANY WAYS TO SKIN A CAT?

Spatial analog models are certainly not the only possible representation for scene descriptions, but they are convenient and natural in many ways (a rough sketch of such a model follows the list below). Among their advantages are: 1) computational adequacy for representing the locations and motions of objects; 2) the ability to implicitly represent relationships between objects, and to allow easy derivation of these relationships; 3) ease of interaction with a vision system, and ultimately appropriateness for allowing a mobile entity to navigate and locate objects.
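To make the preceding discussion more concrete, here is a small, hypothetical sketch (in Python, which of course is not the language of the program discussed in this paper) of how default-sized prototypes placed in a shared coordinate system might support the containment test of section 4.1 and the reach test of section 4.3. The class names, default sizes, ranges, and thresholds are all invented for illustration; they are not taken from Boggess's program or from any implementation described here.

```python
# A minimal, hypothetical sketch of a spatial analog model.
# All names and numeric values below are invented for illustration.

class Prototype:
    """A default-sized object with an allowed size range and a position."""
    def __init__(self, name, default_size, size_range, is_container=False):
        self.name = name
        self.size = default_size            # rough linear extent, in inches
        self.min_size, self.max_size = size_range
        self.is_container = is_container
        self.position = (0.0, 0.0, 0.0)     # x, y, z in a shared coordinate system

    def can_resize_to(self, size):
        # Stretching or shrinking is allowed only within the prototype's range.
        return self.min_size <= size <= self.max_size


def can_contain(outer, inner):
    """Plausibility test for 'inner is in outer' (cf. S10)."""
    if not outer.is_container:
        return False
    # Keep both objects at plausible sizes, with inner fitting inside outer.
    target = min(outer.size, outer.max_size)
    return inner.can_resize_to(min(inner.size, target * 0.9)) and inner.size <= outer.max_size


def can_reach(agent_height_range, target_height):
    """Crude analog of step 4) in the dachshund example: can the agent
    reach the target height without leaving its allowed range?"""
    lo, hi = agent_height_range
    return lo <= target_height <= hi


if __name__ == "__main__":
    pen1 = Prototype("pen1 (writing implement)", 6, (4, 8), is_container=False)
    pen2 = Prototype("pen2 (enclosure)", 40, (24, 60), is_container=True)
    box = Prototype("box", 18, (6, 48), is_container=True)

    print("box in pen1:", can_contain(pen1, box))          # False: pen1 is rigid and not hollow
    print("box in pen2:", can_contain(pen2, box))          # True: the prototypes fit together
    print("dachshund reaches ear:", can_reach((0, 30), 66))  # False: the ear is too high
```

The point of the sketch is only that plausibility judgements of this kind can fall out of simple geometric tests over adjustable prototypes, rather than from special-purpose rules written for each sentence.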
The main problem with these representations is that scene descriptions are usually underspecified, so that there is a range of possible locations for each object. It thus becomes risky to trust implicit relationships between objects. Event stereotypes are probably important because they specify compactly all the important relationships between objects.

5.4 RELATED WORK

A number of papers related to the topics treated here have appeared in recent years. Many are listed in [8], which also provides some ideas on the generation of scene descriptions. This work has been pervasively influenced by the ideas of Bill Woods on "procedural semantics", especially as presented in [9]. Representations for large-scale space (paths, maps, etc.) were treated in Kuipers' thesis [10]. Novak [11] wrote a program that generated and used diagrams for understanding physics problems. Simmons [12] wrote programs that understood simple scene descriptions involving several known objects. Inferences about the causes and effects of actions and events have been considered by Schank and Abelson [13] and Rieger [14]. Johnson-Laird [15] has investigated problems in understanding scenes with spatial locative prepositions, as has Herskovits [16]. Recent work by Forbus [17] has developed a very interesting paradigm for qualitative reasoning in physics, built on work by de Kleer [18,19] and related to work by Hayes [20,21]. My comments on pronoun resolution are in the same spirit as Hobbs [22], although Hobbs's "predicate interpretation" is quite different from my "analog spatial models". Ideas on the adaptation of prototypes for the representation of 3-D shape were explored in Waltz [23]. An effort toward qualitative mechanics is described in Bundy [24]. Also relevant is the work on mental imagery of Kosslyn & Shwartz [25] and Hinton [26].

I would like to acknowledge especially the helpful comments of Ken Forbus, and also the help I have received from Bill Woods, Candy Sidner, Jeff Gibbons, Rusty Bobrow, David Israel, and Brad Goodman.

6. REFERENCES

[1] Waltz, D.L. and Boggess, L.C. Visual analog representations for natural language understanding. Proc. of IJCAI-79, Tokyo, Japan, Aug. 1979.
[2] Boggess, L.C. Computational interpretation of English spatial prepositions. Unpublished Ph.D. dissertation, Computer Science Dept., University of Illinois, Urbana, 1978.
[3] Chafe, W.L. The flow of thought and the flow of language. In T. Givon (ed.) Discourse and Syntax. Academic Press, New York, 1979.
[4] Bar-Hillel, Y. Language and Information. Addison-Wesley, New York, 1964.
[5] Piaget, J. Six Psychological Studies. Vintage Books, New York, 1967.
[6] Jackendoff, R. Toward an explanatory semantic representation. Linguistic Inquiry 7, 1, 89-150, 1975.
[7] Minsky, M. and Papert, S. Artificial Intelligence. Project MAC report, 1971.
[8] Waltz, D.L. Generating and understanding scene descriptions. In Joshi, Sag, and Webber (eds.) Elements of Discourse Understanding, Cambridge University Press, to appear. Also Working Paper 24, Coordinated Science Lab, Univ. of Illinois, Urbana, Feb. 1980.
[9] Woods, W.A. Procedural semantics as a theory of meaning. In Joshi, Sag, and Webber (eds.) Elements of Discourse Understanding, Cambridge University Press, to appear.
[10] Kuipers, B.J. Representing knowledge of large-scale space. Tech. Rpt. AI-TR-418, MIT AI Lab, Cambridge, MA, 1977.
[11] Novak, G.S. Computer understanding of physics problems stated in natural language. Tech. Rpt. NL-30, Dept. of Computer Science, University of Texas, Austin, 1976.
[12] Simmons, R.F.
The CLOWNS microworld. In Schank and Nash-Webber (eds.) Theoretical Issues in Natural Language Processing, ACL, Arlington, VA, 1975.
[13] Schank, R.C. and Abelson, R. Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.
[14] Rieger, C. The commonsense algorithm as a basis for computer models of human memory, inference, belief and contextual language comprehension. In Schank and Nash-Webber (eds.) Theoretical Issues in Natural Language Processing, ACL, Arlington, VA, 1975.
[15] Johnson-Laird, P.N. Mental models in cognitive science. Cognitive Science 4, 1, 71-115, Jan.-Mar. 1980.
[16] Herskovits, A. On the spatial uses of prepositions. In these proceedings.
[17] Forbus, K.D. A study of qualitative and geometric knowledge in reasoning about motion. MS thesis, MIT AI Lab, Cambridge, MA, Feb. 1980.
[18] de Kleer, J. Multiple representations of knowledge in a mechanics problem-solver. Proc. 5th Intl. Joint Conf. on Artificial Intelligence, MIT, Cambridge, MA, 1977, 299-304.
[19] de Kleer, J. The origin and resolution of ambiguities in causal arguments. Proc. IJCAI-79, Tokyo, Japan, 1979, 197-203.
[20] Hayes, P.J. The naive physics manifesto. Unpublished paper, May 1978.
[21] Hayes, P.J. Naive physics I: Ontology for liquids. Unpublished paper, Aug. 1978.
[22] Hobbs, J.R. Pronoun resolution. Research report, Dept. of Computer Sciences, City College, City University of New York, c. 1976.
[23] Waltz, D.L. Relating images, concepts, and words. Proc. of the NSF Workshop on the Representation of 3-D Objects, University of Pennsylvania, Philadelphia, 1979. Also available as Working Paper 23, Coordinated Science Lab, University of Illinois, Urbana, Feb. 1980.
[24] Bundy, A. Will it reach the top? Prediction in the mechanics world. Artificial Intelligence 10, 2, April 1978.
[25] Kosslyn, S.M. & Shwartz, S.P. A simulation of visual imagery. Cognitive Science 1, 3, July 1977.
[26] Hinton, G. Some demonstrations of the effects of structural descriptions in mental imagery. Cognitive Science 3, 3, July-Sept. 1979.
1980
2
THE PROCESS OF COMMUNICATION IN FACE TO FACE VS. COMPUTERIZED CONFERENCES; A CONTROLLED EXPERIMENT USING BALES INTERACTION PROCESS ANALYSIS

Starr Roxanne Hiltz, Kenneth Johnson, and Ann Marie Rabke
Upsala College

INTRODUCTION

A computerized conference (CC) is a form of communication in which participants type into and read from a computer terminal. The participants may be on line at the same time--termed a "synchronous" conference--or may interact asynchronously. The conversation is stored and mediated by the computer. How does this form of communication change the process and outcome of group discussions, as compared to the "normal" face to face (FtF) medium of group discussion, where participants communicate by talking, listening and observing non-verbal behavior, and where there is no lag between the sending and receipt of communication signals?

This paper briefly summarizes the results of a controlled laboratory experiment designed to quantify the manner in which conversation and group decision making vary between FtF and CC. Those who wish more detail are referred to the literature review which served as the basis for the design of the experiment (Hiltz, 1975) and to the full technical report on the results (Hiltz, Johnson, Aronovitch, and Turoff, 1980). This paper is excerpted from a longer paper on the analysis of communications process in the two media and their correlates (Hiltz, Johnson and Rabke, 1980).

OVERVIEW OF THE EXPERIMENT

The chief independent variable of interest is the impact of computerized conferencing as a communications mode upon the process and outcome of group decision making, as compared to face-to-face discussions. Two different types of tasks were chosen, and group size was set at five persons. The subjects were Upsala College undergraduate, graduate and continuing education students. The communications process or profile was quantified using Bales Interaction Process Analysis (see Bales, 1950).

In computerized conferencing, each participant is physically alone with a computer terminal attached to a telephone. In order to communicate, he or she types entries into the terminal and reads entries sent by the other participants, rather than speaking and listening. Entering input and reading output may be done totally at the pace and time chosen by each individual. Conceivably, for instance, all group members could be entering comments simultaneously. Receipt of messages from others is at the terminal print speed of 30 characters per second.

Even when all five participants are on-line at the same time, there is considerable lag in a computer conference between the time a discussant types in a comment and when a response to that comment is received. First, each of the other participants must finish what they are typing at the time; then they read the waiting item; then they may type in a response; then the author of the original comment must finish his or her typing of a subsequent item and print and read the response. There is thus a definite "asynchronous" quality even to "synchronous" computer conferences. As a result, computer conferences often develop several simultaneous threads of discussion that are being discussed concurrently, whereas face to face discussions tend to focus on one single topic at a time and then move on to subsequent topics. (See Hiltz and Turoff, 1978, for a complete description of CC as a mode of communication.)

A variable of secondary interest is problem type.
Much experimental literature indicates that the nature of the problem has a great deal to do with group performance. One type of problem that we used is the human relations case as developed by Bales. These are medium complex, unsettled problems that have no specific "correct" answer. The second type was a "scientific" ranking problem (requiring no specific expertise), which has a single correct solution plus measurable degrees of how nearly correct a group's answer may be. The ranking problem, "Lost in the Arctic", was adapted for administration over a conferencing system by permission of its originators (see Eady and Lafferty).

The experiments thus had a 2 x 2 factorial design (see Figure 1). The factors were mode of communication (face-to-face vs. computerized conference) and problem type (human relations vs. a more "scientific" ranking problem with a correct answer). These factors constituted the "independent variables." Each problem-mode condition included a total of eight groups.

Figure 1
Design of the Experiment
Two by Two Factorial with Repeated Measures: Blocks of Four

Groups                        Task Type A    Task Type B
Face-to-Face                       4              4
Computerized Conference            4              4

BACKGROUND: THE BALES EXPERIMENTS AND INTERACTION PROCESS ANALYSIS

Working at the Laboratory of Social Relations at Harvard, Bales and his colleagues developed a set of categories and procedures for coding the interaction in small face-to-face decision-making groups which became very widely utilized and generated a great deal of data about the nature of communication and social processes within such groups.

Coding of the communications interaction by Interaction Process Analysis involves noting who makes a statement or non-verbal participation (such as nodding agreement); to whom the action was addressed; and into which of twelve categories the action best fits. These categories are listed in subsequent tables and explained below. The distribution of communications units among the twelve categories constituted one of the main dependent variables for this experiment. We expected significant differences associated with mode of communication. We also expected some differences associated with task type. We did not feel that we had enough information to predict the directions of these differences. For almost every category, we could think of some arguments that would lead to a prediction that the category would be "higher" in CC, and some reasons why it might be lower.

METHOD

The number of Bales units per face to face group was much greater than the number for a CC group. Therefore, each individual and group was transformed to a percentage distribution among the twelve categories. Then statistical tests were performed to determine if there were any significant differences in IPA distributions associated with mode of communication, problem, order of problem, and the interaction among these variables in relation to the percentage distribution for each of the Bales categories.

There are many different ways in which the percentages could be computed. To take full advantage of the design, we computed the percentage distribution for each individual, in each condition. Thus, we actually have the Bales distributions for each of 80 individuals in a face to face conference, and in a computerized conference. The mode of analysis was a two by two factorial nested design.
If there was no significant group effect, then the error terms could be "pooled", meaning we could use the 80 observations as independent observations for statistical test purposes. We also performed a non-parametric test on the data for each Bales category, which gave us similar results.

DIFFERENCES ASSOCIATED WITH COMMUNICATION MODE

Two of the detailed analysis of variance tables on which the summary here is based are included as an Appendix. Note that the analyses were first performed separately for the two problems, using communication mode as the independent variable. For each problem, we tested the significance of mode of communication, order (whether it was the first or second problem solved by the group), and the interaction between mode and order.

Listed in Figures 2 and 3 is a summary of the statistical results of the 24 analyses of variance which examined observed differences between communication modes for each of the two cases. The first two columns show the mean percentage of communications in each category. For example, in the first table, which gives the results for Forest Ranger, the first column shows that on the average less than 1% of an individual's face to face communications were verbally "showing solidarity", but in CC, 3.22% fell into this category. The third column shows that the results for the 16 groups in the nested factorial design were significant at the .005 level, meaning that the probability of the observed differences occurring by chance in a sample this size is one in 200. The fourth column shows the level of significance if the group was not a significant variable and the observations could be pooled, with the 80 individuals treated as independent observations. In this case, group was significant, so the pooled analysis could not be done.

In looking at these data, there is an apparent coding problem. Even for the Forest Ranger problem, face to face, we obtained a somewhat different distribution of coding than did persons coding problem discussions such as this who were directly trained by Bales. (See Bales and Borgatta, 1955, p. 400 for the complete distributions.) Our coding has 20% more of the statements classified as "giving opinions" than Bales and Borgatta code, and correspondingly lower percentages in all of the other categories. This means that our results cannot be directly compared to those of other investigators, since apparently our coders' training led them to interpret many more statements as representing some sort of analysis or opinion than "should" be there, according to the distributions obtained for similar studies by Bales and his colleagues. (Other possible explanations are that Upsala College has produced an unusually opinionated and analytic set of students, or that the effect of pre-experimental training in CC raises opinion giving even in subsequent FtF discussions.)

This does not affect the comparisons among problems and modes for this study, since all of the coders were coding the data with the same guidelines and interpretations. In the majority of cases, the same pair of coders coded both the CC and FtF condition for the same group. In any case, the seven individuals who did the coding had been trained to an acceptable level of reliability.
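As a purely illustrative aside, the percentage transformation described in METHOD can be sketched in a few lines of code. The sketch below is hypothetical: it is not the authors' analysis code, the toy data are invented, and it does not reproduce the 2 x 2 nested factorial ANOVA actually used for significance testing; it only shows how each individual's coded units might be turned into a percentage profile over the twelve Bales categories and averaged by mode.

```python
# Hypothetical sketch of the percentage transformation used in METHOD.
# Toy data and function names are invented for illustration only.

from collections import Counter

BALES_CATEGORIES = list(range(1, 13))   # Bales IPA categories 1-12

def percentage_profile(coded_units):
    """coded_units: list of category numbers (1-12), one per scored act."""
    counts = Counter(coded_units)
    total = len(coded_units)
    return {c: 100.0 * counts.get(c, 0) / total for c in BALES_CATEGORIES}

def mean_profile(profiles):
    """Average a list of individual percentage profiles."""
    n = len(profiles)
    return {c: sum(p[c] for p in profiles) / n for c in BALES_CATEGORIES}

if __name__ == "__main__":
    # Invented toy data: two individuals in each mode.
    ftf_individuals = [[3, 5, 5, 6, 3, 2, 5], [5, 5, 10, 3, 5, 6]]
    cc_individuals = [[5, 5, 1, 8, 5, 6], [5, 4, 5, 5, 8, 1, 5]]

    ftf_mean = mean_profile([percentage_profile(u) for u in ftf_individuals])
    cc_mean = mean_profile([percentage_profile(u) for u in cc_individuals])
    for c in BALES_CATEGORIES:
        print(f"category {c:2d}: FtF {ftf_mean[c]:5.1f}%  CC {cc_mean[c]:5.1f}%")
```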
Figure 2
Summary of IPA Results for Forest Ranger by Mode of Communication and Order

Bales Category                Average            Significance (P)
                            FTF      CC        By Group    Pooled
Shows:
  Solidarity                 .79     3.22       .005        GS
  Tension Release            3.98     .83       .0005       .0005
  Agreement                 13.19    4.79       .0005       .0005
Gives:
  Suggestions                4.70    9.21       .10         .10
  Opinion                   54.21   53.92       X           X
  Orientation               12.81   16.10       .10         .02
Asks for:
  Orientation                3.27    1.58       .05         GS
  Opinion                    2.88    5.36       .01         .01
  Suggestions                 .30     .62       .25         .20
Shows:
  Disagreement               4.85    2.39       .05         .05
  Tension                     .81    2.16       .05         .01
    Problem 1st               .28    1.68
    Problem 2nd              1.33    2.64
  Antagonism                  .75    1.67       X           X

GS = Group significant; cannot pool by individual

Figure 3
Summary of IPA Results for Arctic by Mode of Communication and Order

Bales Category                Average            Significance (P)
                            FTF      CC        By Group    Pooled
Shows:
  Solidarity                 1.66    2.44       .10         .05
  Tension Release            7.70    1.60       .0005       .0005
  Agreement                 13.35    6.82       .01         GS
Gives:
  Suggestions                3.56    4.89       .20         .10
    Problem 1st              2.95    6.17
    Problem 2nd              4.17    3.61
  Opinion                   42.99   57.80       .005        GS
  Orientation               14.58   11.81       .25         GS
Asks for:
  Orientation                3.72    1.62       .025        .0005
  Opinion                    5.15    7.46       .20         GS
  Suggestions                1.14     .58       X           GS
Another possible explanation is that the greater ten- dency towards overt, explicit showing of solidarity is an attempt to compensate for the perceived coldness and impersonality of the medium. 2. "Shows Tension Release, Jokes, laughs, shows satis- faction" This includes expressions of pleasure or happiness, making friendly Jokes or kidding remarks, laughing. There was significantly more tension release overtly expressed in the face to face groups. Much of this was waves of laughter, particularly in the arctic prob- lem. The participants did not put this into words in the conference when typing. Observing them, however, there was much private laughter and verbal expressions showing "tension release", but these do not appear in the transcript. It is part of the private "letting down of face" that occurs but is not communicated thro- ugh the computer. 3. "Agrees, shows passive acceptance, understands, con- curs, complies" This occurs as concurrence in a proposed course of action or carrying out of any activity which has been requested by others. There is significantly more agreement overtly expressed in face to face confer- ences than in computerized conferences. We suspect that this is related to the pressure to conform created by non-verbal behavior and the physical presence of the other group members. In any case, it is undoubtedly related to the greater difficulty of CC groups in reaching total consensus. h. "Gives SUggestion, direction, implying autonom~ for other" Includes giving suggestions about the task or sUgges- ting concrete actions in the near term to attain a group goal. There is a tendency for more suggestions to be given by more people In computerized conferenc- ing. This is part of the equalitarian tendency for more members to actively participate in the task behav- ior of a group in CC. In one of the problems, the d/fference was statistically significant at the .05 le- vel; whereas in the other, it was sizable but did not reach statistical significance. 5. "Gives opinion, evaluation, analysis, expresses feeling, wish" Includes all reasoning or expressions of evaluation or interpretation. This is the most frequent type of co-,~unication for both problems and Both modes. For the Bales problem, there was no difference in its prevalence associated with mode of co~nuaication. For the Arctic problem, however, there ~&s a large and statisticaJ_ly significant difference, with more opinion giving in the CC condi- tion. 6. "Gives Orientation, information, repeats, clarifies, confirms t, This includes statements that are meant to secure the attention of the other, (such as "There are two points I'd like to make..."), restating or reporting the essen- tial content of what the group has read or said; non- inferential, descriptive generalizations or summaries of the sit%latlon facing the group. There are no clear dif- ferences here. Whereas there is a statistically signif- icant difference in the direction of giving more orien- tation in CC for Forest Ranger, for the other problem, the difference is reversed, 7. "Asks for orientation, information, repetition and C On i~I rmat i on '' There is a significant tendency for this to occur more often in face to face discussions. This is probably because of the frequency with which a group member does not hear or understand the pronunciation of a sentence or partial utterance. 
In CC, people are usually more careful to state their thoughts clearly, and the recipi- ent can read it several times rather than asking for repetition if it is not understood the first time or is later forgotten. We have noticedmany CC participants going back and looking at co~nents a second or third tim~ in a face to face discussion, they would probably ask something like: "What was it you said before about x?". 8. "Asks for opinion, evaluation, analysis, expression of feeling" ?7 This occurs more frequently in ccmpuZerized confer- encin~. For one of the problems, the difference reached statistical significance, whereas it did not for the other. ~his tendency to more frequent- ly and explicitly ask for the opinions of all the other group members, as well as to more spontane- ously offer ones own opinions and analyses in C0, does seem to qualitatively be characteristic of the me~i~. 9. "Asks for s~estion, direction, possible ways of action" This includes all over~, explicit requests, such as "What shall we do now?". It is not very preva- lent in either medi,~, and there are no significant differences. i0. "Disagrees, shows passive rejection, formal- ity, witholds resources" This includes all the milder forms of disagreement or refusal to ccaply or reciprocate. This is also an infrequent form of communicntion, but it occurs more in face to face discussions than in CC. ii. "Shows tension, asks for help, withdraws out of field" Includes indications that the subject feels -nYious or frustrated, with no particular other group mem- ber as the focus of these negative feelings. The results on this are rather puzzling. We end up with a statistically significant tendency for there to be more tensions when in CC for the Forest Ran- ger problem, hut in FTF for the Arctic problem. Substantively, the proportion of these communica- tions is very ~m~ll in nny c~e, and therefore, the small differences are not importasz. 12. "Shows antagonism, deflates other's s~atus, de- fends or asserts self" This includes autocratic attempts to control or di- rect others, rejection or refusal of a request, de- riding or criticizing others. This is infrequent in both media and there are no significant differences. EFFECTS OF ORDER For the most par~, it did not matter whether the CO or the FtF discussion was held first. However, more saggestions were offered on the arctic problem if it was discussed in CC as ~e first problem, but more in FTF discussion if the FTF was preceeded by a CC condition. This is consistent with the tendency for CC to promote more giving of sugEestions; apparently, the tendency carries over to a subsequent f~ce ~o face conversation. This raises the interesting possibi'It"/ that the group process and structure can be permanently changed by the experience of interacting through CC, a change that will carry over even to communications in other modes. Other pieces of evidence from other s~udies, including self reports of participants in long term field trials, indicate the same poasibillty. CONCLUSION Our investigation confirms the hypothesia that there are some signiflcan~ differences in the group com- munication process between face to face and compu- ter mediated discussions. 
Such differences seem ~o be associated with other characteristics of the medium, such as the greater tendency for minorlt¥ opinions to be maintained, rather than a total group consensus emerginK, in a fuller analysis (Hiltz, Johnson, Arono~¢ch and Turoff, 1980) we show that the observed differences in interaction profiles are highly correlated w~h the abillty of a group to reach con- sensus and wirer the quali~y of group decision reached. APyzapIX Analyses of Variance Bales Categories by Mode and Problem 9Y~h Hested Factorial Arctic Individual % Data Bales Category 1 - Shows Solidarity MeLns Mode of Crm--unicntion FTF CC Order ist 1.6893 2.4348 2.0620 of Problem 2rid 1.6228 2.4437 2.0333 1.6561 2.4392 Nested Design Source SS ~f MS F A 12.2673 1 12.2673 3.9004 B .0166 1 .0166 .0053 A x B .0285 i .0285 .0091 C/AB 37.7414 12 3.1451 1.3745 S/ABC i46.~430 64 2.2881 Tot. i~6.4967 79 Pooled ANOVA Table Val~es Eor F i and 12 a-e=4.75 12 and 64df-1.90 Source SS df MS F A 12.2673 1 12.2673 5.0618 e B .0166 1 .0166 .0068 A x B .0285 i .0285 .01,17 WG 184.18~4 76 2.4234 Tot. 196.4967 79 Table Value for F 1 and 76 df=3.97 *Significant A = mode B = order C/AB a error term for AB, and A x B S/ABC m error term for C/AR WG = Pooled error term The pooled design yields a significant difference he- ,teen the FTF and CC conditions. The CC conditions show a greater percent .of their cn-~ents in ~he cate- gory of shows solidarity. Order of Problem 9v~vh Nested Factorial Forest Ranger Individual % Data Bales CategoI'y 3 - Agrees Means Mode of Co©mmu~icntion FTF CO lat 14.1900 5.461,5 2nd 12.1921 4.1183 9.8273 8.1552 13.1910 4.7914 78 Source A B AxB C/ABC Sl~C Tot. SS 1411.0740 55.9134 2.1232 515.1580 4056.1449 60hO.4135 df I i i 12 64 79 MS 1411.0740 55.9134 2.1232 42.9298 63.3772 Nested Design F 32.8693* 1.3024 .0h95 .677~ Table Values for F 1 and 12 df=4.75 12 and 64 df=l.90 *Significant Pooled ANOVA The following pooled design is not really necessary since one finds the variables significant as above. Source SS df MS F A Ihli.0740 1 ihli.0740 23.h598" B 55.9134 i 55.913h .9296 A x B 2.1232 i 2.1232 .0353 WG ~571.3029 76 60.1487 Tot. 60~0.4135 79 A=mode B=order C/AB=error term for A, B, A x B S/ABC=error term for C/AB WG=Pooled error term Table Value for F 1 and 76 df=3.97 *Significant The nested design yields a significant difference be- tween the FTF and CC Conditions. The FTF conditions show a greater percent of their comments in category 3- Agrees. REFERENCES Bales, Robert 1950 Interaction Process Analysis; A Method for the Study of Small Groups. Reading, Mass; Addison Wesley. Bales, Robert F. and Edgar F. Borgatta 1955 "Size of Group as a Factor in the Interaction Profile." In A.P. Hare, E. F. Borgatta and R. F. Bales, eds., Small Groups: Studies in Social Inter- action, pp. 396-413. New York: Knopf. Eady, Patrick M. and J. Clayton LafferZy 1975 "The Subarctic Survival Situation." Plymouth, Michigan: Experiential Learning Methods. Hiltz, Starr Roxanne 1975 "Communications and Group Decision Making"; Ex- perimental Evidence on the Potential Impact of Compu- ter Conferencing. Newark, N.J., Computerized Confer- enclng and Communications Center, New Jersey Institute of Technology, Research Report No. 2. Hiltz, Starr Roxanne, Kenneth Johnson, Charles Arono- vitch and Murray Turoff 1980 Face to Face Vs. Computerized Conferences: A Con- trolled Experiment. Hiltz, Starr Roxanne, Kenneth Johnson, and Ann Marie Rabke 1980 Communications Process and Outcome in Face to Face Vs. Computerized Conferences. 
Hiltz, Starr Roxam.ne and Murray Turoff 1978 The Network Nation: Human Commanication via Com- puter. Reading, Mass,: Add/son Wesley Advanced Book Program. ACKNOWLKDG~4ENTS The research reported here is supported by a grant from the Division of Mathematical and Computer Sciences (MCS 78-00519). The findings and opinions reported are solely those of the authors, and do not necessarily re- present those of the National Science Fo~u%dation. Murray Turoff and Charles Aronovitch played a large part in the design and analysis for this project. We are also grateful to Julian Scber and Peter and Trudy John- son-Lenz for their contributions to the design of the experiments; to John Howell and James Whitescarver for their software design and programming support; and to our research assistants for their dedicated efforts in carrying out the experiments and coding questionnaires: Joanne Garofalo, Keith Anderson, Christine Naegle, Ned O'Donnell, Dorothy Preston, Stacy Simon and Karen Win- ters. We would also like to thank Robert Bales and Experimen- tal Learning Methods for their cooperation in providing documentation and permission to use adaptations of prob- lem solving tasks which they originally developed.
1980
20
WHAT TYPE OF INTERACTION IS IT TO BE Emanuel A. Schegloff Department of Sociology, U.C.L.A. For one, like myself, who knows something about human interaction, but next to nothing about computers and human/machine interaction, the most useful role at a meeting such as this is to listen, to hear the troubles of those who work actively in the area, and to respond when some problem comes up for whose solution the prac- tices of human interactants seems relevant. Here, therefore, I will merely mention some areas in which such exchanges may be useful. There appear to be two sorts of status for machine/tech- nology under consideration here. In one, the interac- tants themselves are humans, but the interaction between them is carried by some technology. We have had the tel- ephone for about lO0 years now, and letter writing much longer, so there is a history here; to i t are to be add- ed video technology, as in some of the work reported by John Carey, or computers, as in the "computer conferenc- ing" work reported by Hiltz and her colleagues, among others. In the other sort of concern, one or more of the participants in an interaction is to be a computer. Here the issues seem to be: should this participant be designed to approximate a human interactant? What is required to do this? Is what is required possible? l) If we take as a tentative starting point that person- person interaction should tell us what machine-person in- teraction should be like (as Jerry Hobbs suggests in a useful orienting set of questions he circulated to us), we s t i l l need to determine what type of person-person in- teraction we should consult. It is common to suppose that ordinary conversation is, or should be, the model. But that is but one of a number of "speech-exchange sys- tems" persons use to organize interaction, or to be or- ganized by in it."t~eetings," "debates, .... interviews," and "ceremonies" are vernacular names for other techni- cally specifiable, speech-exchange systems orgainzing person-person interaction. Different types of turn-tak- ing organization are involved in each, and differences in turn-taking organization can have extensive ramifica- tions for the conduct of the interaction, and the sorts of capacities required of the interactants. In the de- sign Qf computer interactants, and in the introduction of technological intermediaries in human-human interac- tion, the issue remains which type of person-person in- teraction is aimed for or achieved. For example, in the Pennsylvania video link-up of senior citizen homes, John Carey asks whether the results look more like conversa- tion or like commercial television. But many of details he reports suggests that the form of technological inter- vention has made what resulted most like a "meeting" speech exchange system. 2) The term "interactive" in "interactive program" or in "person/machine interaction" seems to refer to no more than that provision is made for participation by more than one participant. "Interactive" in this sense is not necessarily "interactional," i.e., the determi- nation of at least some aspects of each party's partic- ipation by collaboration of the parties. For the "talk" part of person-person interaction, a/the major vehicle for this "interactionality" is the sequential organiza- tion of the talk; that is, the construction of units of participation with specific respect to the details of what has preceded, and thereby the sequential position in which a current bit of talk is being done. 
Included among the relevant aspects of "what has preceded" and "current sequential position" is "temporality," or "real time," though not necessarily measured by conventional chronometry. What are, by commonsense standards, quite tiny bits of silence -- two tenths of a second, or less (what we call micro-pauses) -- can, and regularly do, have substantial sequential and interactional conse- quences. The character of the talk after them is regu- larly different, or is subject to different analysis, in- terpretation or inference. Although the telephone deprives interactants of visual access to each other, i t leaves this "real time" tempo- rality largely unaffected, and with it the integrity of sequential organization. Nearly all the technological interventions I have heard about -- whether replacing an interactant, or inserted as a medium between interactants -- impacts on this aspect of the exchange of talk. It is one reason for wondering whether retention of ordinary conversation as the target of this enterprise is appro- priate. For some of the contemplated innovations, like computer conferencing, exchanges of letters may be a more appropriate past model to study, for there too more than one may "speak" at a time, long lapses may intervene between messages, sequential ordering may be puzzling (as in "Did the letters cross in the mail?") etc. 3) Sequential organization has a direct bearing on an issue which must be of continuing concern to workers in this area -- that of understanding and misunderstanding. It is the sequential (including temporal) organization of the talk which, in ordinary conversation, provides running evidence to participants that, and how, they have been understood. The devices by which troubles of under- standing are addressed (what we call "repair," discussed for computers by Phil Hayes in a recent paper) -- re- quests for repetition or clarification and the like -- are only one part of the machinery which is at work. Regularly, in ordinary conversation, a speaker can detect from the produced-to-be-responsive next turn of another s/he has or has been, misunderstood, and can immediately intervene to set matters right. This is a major safe- guard of "intersubjectivity," a retention of a sense that the "sa~ thing" is being understood as what is being spoken of. The requirements on interactants to make this work are substantial, but in ordinary conversation, much of the work is carried as a by-product of ordinary se- quential organization. The anecodotes I have heard about misunderstandings going undetected for long stretches when computers are the medium, and leading to, or past, the verge of nastiness, suggest that these are real prob- lems to be faced. 4) In all the business of person-person interaction there operates what we call "recipient-design" -- the de- sign of the participation by each party by reference to the features (personal and idiosyncratic, or categorial) of the recipient or co-participant. The formal machin- eries of turn-taking, sequential organization, repair, etc. are always conditioned in their realization on par- ticular occasions and moments by this consideration. I don't know how this enters into plans for computerized interactants, and i t remains to be seen how i t will enter into the participation of humans dealing with computers. Persons make all sorts of allowances for children, non- native speakers, animals, the handicapped, etc. 
But there are other allowances they do not make, indeed that don't present themselves as allowances or allowables. What is involved here is a determination of where the ro- bustness is and where the brittleness, in interacting with persons by computers, for in the areas of robustness it may be that many of the issues I've mentioned may be safely ignored; the people "will understand." BI Throughout these notes, we are at a very general tevel of discourse. The real pay-offs, however, will come from discussing specifics. For that, interaction will be need- ed, rather than position papers. 82
1980
21
THE COMPUTER AS AN ACTIVE COMMUNICATION MEDIUM John C. Thomas IBM T. J. Watson Research Center PO Box 218 Yorktown Heights, New York 10598 I. THE NATURE OF COMMUNICATION goals r4imetacomments that direct the conversation[~ Communication is often conceived of in basically the following terms. A person has some idea which he or she wants to communicate to a second person. The first person translates that idea into some symbol system which is transmitted through some medium to the receiver. The receiver receives the transmission and translates it into some internal idea. Communica- tion, in this view, is considered good to the extent that there is an isomorphism between the idea in the head of the sender before sending the message and the idea in the receiver's head after recieving the mes- sage. A good medium of communication, in this view, is one that adds minimal noise to the signal. Mes- sages are considered good partly to the extent that they are unabmiguous. This is, by and large, the view of many of the people concerned with computers and communication. For a moment, consider a quite different view of com- munication. In this view, communication is basically a design-interpretation process. One person has goals that they believe can be aided by communicating. The person therefore designs a message which is intended to facillitate those goals. In most cases, the goal in- cludes changing some cognitive structure in one or more other people's minds. Each receiver of a mes- sage however has his or her own goals in mind and a model of the world (including a model of the sender) and interprets the received message in light of that other world information and relative to the perceived goals of the sender. This view has been articulated further elsewhere !~]. This view originates primarily from putting the rules of language and the basic nature of human beings in perspective. The basic nature of human beings is that we are living organisms and our behavior is goals- directed. The rules of language are convenient but secondary. We can language rules for a purpose break. Communicating in different media produces different behaviors and reactions I-2,3! The interesting first order finding however, is that people ca. communicate using practically any medium that lets any signal through if motivation is high enough. We can, under some circumstances, communicate with people who use different accents, grammars, or even languages. Yet, in other circumstances, people who are ostensibly friends working on a common goal and who have known each other for years end up shouting at each other: 'You're not listening to me. No, you don't un- derstand!' One fundamental aspect of human communication then is that it is terrifically adaptive, and robust, containing a number of sophisticated mechanisms such as expla- nations that simultaneously facillitate social and work and rules for taking turns 6~ To the extent that these mechanisms can be embed- ded in a computer system that is to dialogue with hu- mans, the dialogue will likely tend to be more suc- cessful. However, equally true of human communica- tion is that it is sometimes quite ineffective. Let us examine where, why, and how the computer can help improve communication in those cases. 2. 
FUNDAMENTAL DIFFICULTIES IN COMMUNICA TION The view of communication as a design-interpretation process suggests that since messages are designed and interpretted to achieve goals, the perceived rela- tionship between the goals of the communicators is likely to be a powerful determinant of what happens in communication. Common observation as well experi- mental resutts[l!are consistent with this notion. Peo- ple often view themselves in situations of pure compe-. tition or pure cooperation. In fact, I suggest that ei- ther perception is due to a limited frame. Any two people who view themselves as involved in a zero-sum game are doing so because they have a limited frame of reference. In the widest possible frame of refer- ence, there is at least one state probabilistically influ- enced by their acts (such as the total destruction of human life through nuclear weapons) that both would find undesirable. Therefore, when I am playing tennis, poker, or politics with someone and we say we are in pure competition, we are only doing so in a limited framework. In a wider framework, it is always in our mutual interest to cooperate under certain circum- stances. This does not mean, however, that people perceive this wider framework. Because of the limitations of human working memory, people often forget that there is a framework in which they can cooperate. Indeed, this describes one of the chief situations in which a so-called breakdown of communications occurs. If we are truly in a zero-sum game, communication is only useful to the extent that we mislead, threaten, etc. Conversely, people are only in pure cooperation by limiting their framework. I suggest that it is highly likely, given any two individuals, that they would put a different preference ordering on the set of all possible states of the world which their actions could probabil- istically affect. This gives rise to a second type of breakdown in communication. People appear to be desiring to cooperate but they are only cooperating with respect to some limited framework X. They are competing with respect to some larger framework X plus Y. The most common X plus Y is X, the frame- work of cooperation plus Y, a consideration of whose habits must change for mutually beneficial action in the framework X. 83 For instance, two tennis partners obviously both want to win the game. Yet one is used to playing with both partners attempting to take the net. The other is used to the 'one-up, one-back' strategy. They can get into a real argument. What they are competing about is basically who is going to change, whose opinion is wrong, and similar issues. This then, in a sense, is a second type of breakdown of communication. A third case exists even within the framework of coop- eration. This case of difficult communication exists when the presupposed conceptual frameworks of the communicators is vitally discrepant. A computer pro- grammer really wants to help a business person auto- mate his or her invoicing application and the business person really wants this to happen. However, each party erroneously presumes more shared knowledge and viewpoint than in fact exists. A puzzle still remains however. If people have such sophisticated, graceful, robust communication mecha- nisms, why do they not quite readily and spontaneous- ly overcome these communication blocks? WIDESPREAD ANTI-PRODUCTIVE BELIEFS The biggest stumbling blocks to effective communica- tion are the individual communicator's beliefs. 
People typ~c,,lly hold beliefs which are not empirically based. To some extent, it is impossible not to. In order to sim- plify the world sufficiently to deal with it, we make generalizations. If it turns out on closer inspection that these genralizations are correct, we call it insight while if it turns out that they are incorrect, we call it overgeneralization. There are, however, a number of specific non- empirically based beliefs that people are particularly likely to believe which are anti-productive to commu- nication. Among these are the following: 1. I must be understood; 2. If the other person disagrees with me, they don't understand me; 3. My worth is equal to my performance; 4. Things should be easy; 5. The world must be fair; 6. If I have the feeling of knowing some- thing is true, it must be true; 7. If the other person thinks my idea is wrong, the person thinks little of me; 8. If this person's idea is wrong, the person is worth- less; 9. I don't need to change -- they do; 10. Since I already know I'm right, it is a waste of time to really try to see things from the other person's perspective. 11. If I comprehend something, in the sense that I can rephrase it in a syntactically different way, that means I have processed deeply enough what the other person is saving. 12. I must tell the truth at all times no mat- ter what. 13. If they cannot put it in the form of an equation (or computer program, or complete sen- tences, or English), they don't really Know what they are talking about and so it is not possibly in my inter- est to listen. Each of the above statements, has a correlated, less rigid, less extreme statement that is empirically based. For instance, if we really thought 'When I am wrong, some people will temporarily value me less', that is valid generalization. In contrast, the thought 'When am wrong, people will value me less' is an overgener- alization. Similarly, it is quite reasonable to believe that ex- pressing something mathematically has advantages and that if it is not expressed mathematically it may be more difficult for me to use the ideas; it may even be so difficult that I choose not to bother. It is not empirically based to believe that it is never worth you while to attempt to understand things not expressed in equations. Nearly everyone, even quite psychotic people hold rational as well as irrational beliefs. Very few people when asked whether they have to be perfect in every- thing will say yes. However, very many people reject so completely evidence that they may be fundamental. ly wrong, that they act as though they must be per- fect. It is bitter irony that most people can think and feel much more clearly about the things that are less important to them such as a crossword puzzle than they can about things that are much more important such as their major decisions in work and love. Now let us imagine someone who has done a certain office procedure a certain way for many years. Then someone begins to explain a new procedure that is claimed to work better. There are a number of wholly rational reasons why the experienced office worker can be skeptical. But it is probably quite worthwhile to at least attempt to really understand the other person's ideas before criticizing them. There are many non-empirically based beliefs that may interfer in the communication process. The experienced office worker may, for instance, notice the young age of the systems analyst and believe that no-one so young could really understand what is going on. 
They may believe that if there is a better way, they should have seen it themselves years ago and if they didn't they must be an idiot. Since they didn't see it and they can't be an idiot, there must not be a better way. They may just think to themselves it will be too hard to learn a new way. Very effective individual therapy ~]is based on trying to identify and change an individual's irrational beliefs. The focus of this paper however is on how a computer system could aid com- munication by overcoming or circumventing such irra- tional beliefs in those cases where communication appears to break down. We know that people are capable of changing from a narrow competition framework to a wider cooperative framework in order to communicate. People can re- solve differences about whose behavior needs to change. Normal communication has the mechanisms to do these things; when they fail to happen it is often because of irrational beliefs which prevent people from attempting to see things from the other person's perspective. The t~nnis partner's disagreeing about what strategy to use will tend to resolve the disagreement without detriment to their mutual goal of winning the game, provided their thinking stays fairly close to the empiri- cal level. If, however, one of the participants finds a 84 flaw in the other's thinking and then overgeneralizes and thinks 'What an idiot. That doesn't logically fol- low. How can anyone be so dumb.' But by the token 'dumb', the angry person probably means 'all-around bad.' Now this is an extrememly counter-productive overgeneralization which will tend to color the person's thinking on other issues of the game which are not even within the scope of the argument about what strategy to use, In extremely irrational but not so uncommon cases, the person may even express to the other person verbally or non-verbally that they have a generally low opinion of their partner. If either party becomes angry, they are also likely to mix up their messages about their own internal state with messages about the content of the game. Thus, '1 am angry,' gets mixed with 'A serve to that person's backhand will probably produce a weaker return.' The result may be a statement like 'Why can't you serve to his backhand for a change.' Such a statement is likely to increase the probability of serves to the forehand or double faults to the backhand. Once each person becomes angry with the other, they are almost certainly overgeneralizing to the extent that they are believing that the onty way to improve the situation is for the other person to change their be- havior in some way 'He should apologize to me for being such an idiot.' No active problem solving behav- ior remains directed where it belongs: 'How can I im- prove the situation myself? How can I communicate better?' This is communication breakdown. 4. THE POSSIBLE USES OF AN ACTIVE COMMU- NICATION CHANNEL Now, let's just for the sake of arguement, =,surae or if you like pretend that what I have said so far is a useful perspective. What about the computer? In particular, what about using the power of the computer as a non- transparent ACTIVE medium of communication? The computer has been very successfully used as a way for people to communicate which allows speed/repetition and demands precision. 
Is there also a way for the computer to be used to enhance party-to-party communication in a way that helps defeat or get around the self-defeating beliefs that get in the way of effective communication in situations where participants have similar goals but are working in different frameworks? Can the computer aid in situations where participants have partially similar goals but are concentrating on the differences, or are unable to arrive at conclusions that are in both parties' self-interest because of interference from a set of separate issues where they are in fundamental conflict?

An entire technology equal to the one that has addressed the speed/repetition and precision issues could be built around this task. Clearly I cannot provide this technology myself in fifteen minutes or fifteen years. But let me provide one example of the kind of thing I mean. Suppose that two people were disagreeing and communicating via visual display terminals connected to a computer network. Let us suppose that the computer network imposed a formalism on the communication. Suppose, for example, that the strength and directionality of current emotional state were encoded on a spatially separate channel from content messages. Imagine that the designer of the message had to choose what emotion or emotions they felt and attempt to honestly quantify these. This information would be presented to the other person separately from the content statements. One unfortunate human weakness would be overcome; viz., the tendency to let the emotional statement -- 'I am angry' -- intrude into the content of what is said.

Now, suppose the computer network presented to the interpreter of this message a set of signals labelled as follows: 'The person sending this message to you is currently producing the following emotional states in themselves: Anger +7, Anxiety +4, Hurt +3, Depression +2, Gladness -6.' Note that the attribution has also been shifted squarely to where it belongs -- on the person with the emotional state.

Now suppose further that when a person stated their position, certain key words triggered a request by the system for restatement. For instance, suppose a person typed in 'You always get what you want.' The system may respond with: 'Regarding the word "always", could you be more quantitative? First, in how many instances during the last two weeks would you estimate that there have been occasions when that person would like to have gotten something but could not get that thing?'

Unfortunately, asked just such a question, an angry person would probably become angrier and direct some anger toward the active channel itself. A marriage counselor is often caught in just this sort of bind, but can usually avoid escalating anger via empathy and other natural mechanisms. How a computerized system could avoid increasing anger remains a challenge.

Another possibility would be for the channel to enforce the protocol for conflict resolution suggested by Rappaport and others. For instance, before stating your position, you would have to restate your opponent's position to their satisfaction.

Needless to say, participants using such an active interface would be apprised of the fact and would voluntarily choose to use such an interface for their anticipated mutual benefit, in the same way that labor and management often agree to use a mediator or arbitrator to help them reach an equitable solution.
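As a concrete illustration of the kind of active channel just described, here is a minimal sketch in Python. The message format, the emotion scale, and the list of trigger words are all invustrative assumptions invented for the example; nothing here is drawn from an implemented system.

```python
# A minimal sketch of the "active channel" idea: a separate emotion channel
# plus a keyword-triggered request for quantitative restatement.  All names,
# scales, and trigger words below are hypothetical.

ABSOLUTE_TERMS = {"always", "never", "everyone", "no one", "nothing", "everything"}

def format_emotion_channel(emotions):
    """Render the spatially separate emotion channel, attributing the
    emotional state to the sender rather than to the other party."""
    levels = ", ".join(f"{name} {value:+d}" for name, value in emotions.items())
    return (f"The person sending this message to you is currently producing "
            f"the following emotional states in themselves: {levels}.")

def restatement_request(content):
    """If the content message contains an absolute term, ask the sender to
    restate it in quantitative form before it is delivered."""
    for word in content.lower().replace("?", " ").replace(".", " ").split():
        if word in ABSOLUTE_TERMS:
            return (f"Regarding the word '{word}', could you be more quantitative? "
                    f"In how many instances during the last two weeks would you "
                    f"estimate this actually happened?")
    return None  # no trigger word: deliver the content unchanged

if __name__ == "__main__":
    print(format_emotion_channel({"Anger": 7, "Anxiety": 4, "Gladness": -6}))
    print(restatement_request("You always get what you want."))
```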
Unfortunately, such a choice requires that both the people involved recognize that they are not perfect -- that their communication ability could use an active channel. This in itself presupposes some dismissal of the erroneous belief that their worth EQUALS their performance. Most people are capable of doing this before they become emotionally upset and hence might well agree ahead of time to using such a channel.

5. SUMMARY

In this paper, I reiterate the view that for many purposes, communication is best conceived of as a design-interpretation process rather than a sender-receiver process. Fundamental difficulties in two-person communication occur in certain common situations. The incidence, exacerbation, and failure to solve such communication problems by the parties themselves can largely be traced to the high frequency of strongly held anti-empirical belief systems. Finally, it is suggested that the computer is a medium for humans to communicate with each other VIA. Viewed in this way, possibilities exist for the computer to become an active and selective rather than a passive, transparent medium. This could aid humans in overcoming or circumventing communication-blocking irrational beliefs in order to facilitate cooperative problem solving.

6. REFERENCES

[1] Thomas, J. A Design-Interpretation Analysis of Natural English. International Journal of Man-Machine Studies, 1978, 10, 651-668.
[2] Carey, J. A Primer on Interactive Television. Journal of the University Film Association, 1978, XXX (2), 35-39.
[3] Chapanis, A. Interactive Human Communication: Some Lessons Learned from Laboratory Experiments. Paper presented at the NATO Advanced Study Institute on 'Man-Computer Interaction', Mati, Greece, 1976.
[4] Wynn, E. Office Conversation as an Information Medium. (In preparation).
[5] Thomas, J. A Method for Studying Natural Language Dialogue. IBM Research Report, 1976, RC-5882.
[6] Sacks, H., Schegloff, E., and Jefferson, G. A Simplest Systematics for the Organization of Turn-taking for Conversation. Language, 1974, 50 (4), 696-735.
[7] Ellis, A. Reason and Emotion in Psychotherapy. New York: Lyle Stuart, 1962.
WHAT DISCOURSE FEATURES AREN'T NEEDED IN ON-LINE DIALOGUE Eleanor Wynn Xerox Office Products Division Palo Alto, California It is very interesting as a social observer to track the development of computer scientists involved in AI and natural language-related research in theoretical issues of mutual concern to computer science and the social study of language use. The necessity of writing programs that demonstrate the validity or invalidity of conceptualizations and assumptions has caused computer scientists to cover a lot of theoretical ground in a very short time, or at least to arrive at a problem area, and to see the problem fairly clearly, that is very contemporary in social theory. There is in fact a discrepancy between the level of sophistication exhibited in locating the problem area (forced by the specific constraints of programming work) and in the theorizations concocted to solve the problem. Thus we find computer scientists and students of language use from several disciplines converging in their interest in the mechanics and metaphysics of social interaction and specifically its linguistic realization. Attempts to write natural language programs delivered the reali- zation that even so basic a feature as nominal reference is no simple thing. In order to give an "understander" the wherewithal ~o answer simple questions about a text, one had to provide it with an organized world in which assumptions are inferred, in which exchanges are treated as part of a coherent and minim-fly redundant text, in which things allow for certain actions and relations and not others, and for which it is unclear how to store the information about the world in such a way that it is accessible for all its possible purposes and delivered up in an appropriate way. Some of these were providahle and some weren't. Some AI workers have already moved into the phenomenological perspective, Just from con- fronting these problems -- a long way to go from the assumptions of m~thematics, science, and engineering that they originally brought to the task. Others, in their attempts to deal with issues of repre- sentation and motivation in discourse, have started recreating segments of the history of social theory. This is the history and perspective that students of social interaction bring with them to the problem. They arrive at the problem area either through a theoretical evolutionary process in which they reject the previous stage of theory, and interaction is a good demonstration of the limitations of that theory, or because they are simply intrigued by observing the wealth of social action with which they can identify as members, that the study of naturally-occuring discourse provides. 
In social theory, the ethnomethodological perspective arose as a response to the: i) political implications 2) reifications 3) unexamined assumptions ~) narrow filter on observation presented by structural-functionalist theory, This theory : I) limits and constructs observation fairly strictly 2) Justifies the status quo (whatever exists serves a survival function) 3) posits a macro-organization (well-defined institutions and roles) ~) uses platonic idealizations of the social order 5) is normative 6) doesn't explain change very well 87 Difficulties in this theory were in part an artifact of a general positivist-scien%istic orientation in which there was a motivation to treat the social world as a scientific.object and hence to structure the descript- ion of it in such a way as to make the social world amenable to prediction, testing and control. The ethnomethodological or phenomenological perspective does not give up the scientific pretension but it does drop the engineering motivation. A world whose modus operandi (to avoid saying rules) or practices are con- stantly beir~ created on the spot and which, though following along recognizable tracks, is in a constant state of invention and confirmation, lends itself far less to prediction. In fact it is clearly unpredictable. Language itself provides an analogy, though it is partly the character of language that allows for the constant state of invention in the social world. Language changes constantly by means of several mechanisms, among which are phonological drift, usage requirements, meta- phorization, and social emulation based on values and fashions. For theoretical purposes, one of the most valuable findings in Labor's landmark quantitative studies of phonological variation, was that social values drive the distribution of optional variants from one speech occasion to another accorcling to the per- ceived formality of the occasion. In this manner, values -- what individuals at different social levels consider to be prestigious articulations, drive phono- logical change in general. Linguistic fashions them- selves also change in response to what is currently used, and change with or ngainst the majority according to the kind of identification desired to be made. They cannot be predicted in advance as such changes in value are typically discovered not planned. Very often changes in language use are derivative, based on a secondary or marginal meaning or usnge, or discovered analogy or metaphor of some existing locution. Thus a dynamic of social contrasts and identifications, as well as social mobility and a~p~rations thereto, as well as socially situated invention, are deeply connected to linguistic issues, including language change and the concept of distribution rules, in an empirically observ- able and countable way. These and other social dynamics operate no less for more complex discourse phenomena, and account for large portions of observed discourse strategies. Generally, when a sociolinguist, sociologist, or anthro- pglogist looks at language use, what they attend to are the disclosed social practices. 
Being aware of, and focussing on social context, with a history of social theor2 or an historically developed set of concepts for social action in mind, aleL~s one to many attributes of the occasion for interaction: the possible social identities and relationships of the participants, the perceived outcomes and the social significance of mean- ings generated in the course of the interaction, as well as to structural and habitual features that reflect social requirements (viz. the "recognition" requirement as a prerequisite to interaction *s taking place at all or in the particular form, as discussed by Schegloff). The fact that a background of shared knowledge about the world is assumed emerges from an examination of what is explicitly stated and from the observation that what is explici¢ is in some way "incomplete", partial, not a full itemization of what is communicated and understood. It is also the case that to spell out all the assump- tions would be unbearably time-consumihg, redundant to the purpose, boring, and possibly an infinite regress; and this practice wot Ld moreover fail to accomplish all those conversational _ urposes which require negotiation, building up to a point of mutual orientation and accord, or the "use" of one person by another for a real or imaginary gain. (cf Si~nel) The messiness, potential ambiguity, implicitness, etc. of natural conversation serve many of the purposes that actors have, including the one of intimacy and mutual- ity by less and less explicit surface discourse. Herein lies an important distinction, one that is not well perceived by workers in AI. Purposes can be, and typically are discovered in the course of interaction rather than planned. Purposes are thus emergent from interaction rather than apriori organizing principles of it. Attempting to code, catalogue, regulate, formalize, make explicit in advance those purposes is reminiscent of structuralist, positivist social theory. To this extent, computer scientists are recreating social theory, start- ing from the point that is most amenable to their hopes and needs, and so far lacking the dialectic that con- textualizes other developments in social theory. Ontogeny has not yet fully recapitulated phylogeny. Extending the plans, goals, frames notion into the wider social world (wider than a story understander), con- stitutes a platonic idealization and the ensuing problem of locating those idealizations somewhere, as if there were large programs running in our heads (some of which need debugging), or as if there were some accessible pool of norms from which we draw each time we act. It posits that we act out these idealizations in our every- day behavior, that our behavior constitutes realized instances of this structure. This conflicts with a "process" notion of interaction, which careful discourse analysis reveals, whereby participants are continually trying out and signalling their participation in a mutual world, presumably because this is not from one instance to the next pre-given. The great revelation of discourse analysis in general, if I may he so sweeping, is the ability to observe the process of social action, whereby the social world is essentially built up anew for the purpose at hand, and interactants can be seen sorting out the agreed-on premises from those that need to be established between them. There are two kinds of concerns here that bear upon on- line dialogue research. One is the notion of person, social identity, etc. 
The other is the notion of interaction as a reality testing mechanism that grounds the individual in a chosen point of view frem among the many interpretations available to him for any given "event'. Both of these notions differentiate the com- puter from a person as an interactant. Sorting out dialogue issues that embody these notions, narrows down the field of concerns that are relevant for building "robust" on-line dialogue systems. All social systems, including non-human ones, display social differentiation. This is a central notion that the AI path of evolution does not bring to the study, of discourse. On the contrary, discourse problems are treated as if there were a universality among potential interactants. This fits very nicely w/th a platonic perspective. K_ling and Scacchi have referred to this as the rationalist perspective, and they c°te claims made for simulation and modelling as their illustration of how exponents of this perspective fail to make even gross social distinctions: "Neglecting the obiter dicta claim that modelling and simulation ~-e 'applicable to essentially all problem- solving and d~:ision-m&xing,' presumably including ethical decisions, one is left with an odd account of the problem of modelling. Models are 'far from ubiqui- tous' and 'the trouble is' they are difficult and costly to develop and use. But the appropriateness of modell- ing is not linked by (rational perspectivists) to any discernible social setting or the interests of its participants. (Their) claims are not aimed at policy- making in particular. They could include simulations 88 for engineering design as well as for projecting the costs of new urban development. However, their co-,,ents typify the rational perspective when it is applied to information systems in policy-making; the presumption is that differences in social settings make no difference." Work in socio-linguistics, on the other hand, has focussed on how speech varies by situation, by relation- ship, by purpose and by many other constraints that de- pend upon both a typification of the other from a complex set of loose attributes and the discovery of his unique behavior ~n the situation. The notion of a linguistic "repertoire" expresses people's demonstrated ability and propensity to adjust their speech at almost every analytic level, down to the phonology, to their perception of the situation and the audience. There are variations in people's skill at this, but all do it. To the extent that they don't do it, they risk being in- appropriate and not getting rewards from interaction. (see F. Erickson for a study of the outcomes of inter- active strategies in ethnically mixed interactions.) The structuralist perspective again may be an appealing way for computer scientists to approach the problem of differentiation of persons, as it posits an essentially limited set of "roles" of fairly fixed attributes, and posits as well an ordered hierarchical arrangement of those roles. With this framework in mind it is rela- tively easier to imagine a computer as a viable partici- pant in a social interaction, as it should be possible to construct an identifiable role for it. With this rather flat view of human social perception it is also possible to imagine a person requiring of a computer that it behave appropriately in a conversation , without regard for the fact that a computer 6an only satisfy a very limited set of purposes for that person in inter- action. 
In fact people know perfectly well many of the things computers can't do for them or to them, things which other people can do and hence which need to be taken into account in dealing with other people. And they are able to differentiate for the purpose of inter- action among infinitely many people, and states of mind or situation those people can be in. The other feature of interaction between people, reality- testing, is less well understood than differentiation, which is a veritable solid ground of social understand- ing. However, it can be seen in interactions, even very simple task-oriented ones such as I described in my thesis, that people are also always accessing each other for a view of the world, for agreement, disagreement, and a framework for interpreting. Diffuse explanation mechanisms(Wynn, 1979) also exhibit the tendency of speaker to nail down the audience's perception of him- self to the framework of interpretation desired by him, as an implicit acknowledgement of possible variance. What is often uncertain in an actor's "model" or pro- Jection, or understanding of the other participants or observers, is their view of the actor himself. To this end, he fills in and guides the interpretation with additional context any time he perceives an occasion for misinterpretation, sometimes to the point of logical absurdity (but ~ractical appropriateness if not necessity). since a computer is not an actor in the social world, its interpretations, both of oneself and of "events" perceived social phenomena-- don't really count. A com- puter can provide facts about the world within a well- understood framework, but it cannot provide the kind of context that comes from being a participant in social life, nor a validation of another's perception, except to the extent that matters of "fact" or true-false dis- tinctions allow this. And in these cases, the person supplies this validation himself from the information. This may be a moot point, but I maintain that the search for agreement, confirmation, etc., and the related search for common ground or reality are basic motives for interaction, along with confirmations of member- ship and solidarity etc., as described in the work of Schegloff and of much earlier writers like Malinowski and Si~nnel. Rather than working from careful and detailed observa- tions of the real world, excepting such innovators as Grosz and Robinson, many computer scientists exhibit a tendency to develop their "'models" of interaction by conceptualizing from the perspective of the machine and its capabilities or possible capabilities. Discourse features may be selected for attention and speculation because they offer either a machine analog or a machine contrast. Thus we people are attributed information structures, search procedures and other constructs which are handy metaphors from the realm of computerdom; and it would be especially handy if we were in fact con- structed according to these clean notions, so that our thinking and behavior could he modelled. (In all fair- ness, I know computers have "guys" running around inside them, "going" places, "looking for" stuff, trying out things, getting excited or upset, going nuts, giving up, etc.) Working from the machine perspective can lead to some gross observational oversights, and the authors of the oversight I've picked as an example will hopefully in- dulge me. 
The implicit confirmationhypothesis (Hayes and Reddy) could never have been hypothesized by anyone who studies language behavior from a social perspective, as one of the oldest conversational observations around is the explicit confirmation observation. The phatic communion notion is over 30 years old, and is perhaps the first attention given to those features of inter- action whichwere initially considered to carry little or no observable propositional content or information. Included in these hehaviorsare those discourse "fillers" that signal to the speaker he is being received with no problem, that the listener is still paying attention (even more basic than confirming), and that the listener is a participant in the rhythm of the interaction even though'he is producing little speech at the moment. The "rights" and "~ehhehheh's" of the current natural con- versation transcription conventions are absolutely per- vasive and omnipresent. Nods, "hm's", gaze, prompt questions, frowns, smiles, exclamations of wonder, are all explicit confirmation devices constantly used in conversation, and occur especiallywhennew propositions or details essential to building a story are presented. Speakers are also often tentative and reformulate at any evidence of withheld confirmation, like a "blank stare" or a frown from the audience. Therefore it is by no means ungraceful to explicitly confirm, and on the other hand, it takes very little to do so. But the point is this: even if the implicit con- firmation hypothesis were true (and I pick it because it is an available ex-mple and very easy to reject-- other notions would do a~ well but require a more detailed attack), it would be no reason to exclude this feature from a com~uter dialogue nor to suppose that it would pose people any difficulty in handling a d/alogue with a machine. The discourse supporting activities of natural conversation always address practical concerns, If a new concern should 8/'isebecause of newconstraints-- e.g. that the interactant is a machine--these will be incorporated in the ongoing details of communication. For instance~ when it is obvious someone is having diffi- culty speakin- and understanding English, we unhesita- tingly drop all ellipsis and give full articulation of every sound, even though this produces great redundancy in the message for purposes of communicating with another native speaker, and is moreover extremely unhabitual. 89 In fact, the social role of the computer is perhaps most like that of a foreigner. We assume a foreign individual w~ose English is poor to have an ability to communicate, perhaps a rudimentary ~Ta-..a~ and vocahuis/'y of our language, and a set of customs, some of which overlap with ours. But we can't take the specifics of any of these things for granted. There is very little in the way of a background of practices or assumptions to work with. But here the analogy ends. Presumably, we won't be going to on-line dia)ogue programs to chit-chat. The purposes will be fairly well-defined and circumscribed. 
People will interact with a computer: i) because there is no person available 2) because there is lim/ted social confront in accessing expert information from a computer, so it is available in a metaphorical sense 3) because the computer has specialized abilities and resources not found in a single individual 4) because it coordinates non- local information and 5) is maximally up-to-date -- changes in status and the news of this are concurrently available and 6) the outcome of one's own interaction with the system may be anim~ediately registered action, like reserving a space and hence making one less space available to subsequent users 7) because actual searching (as opposed to the metaphoric kind attributed to our minds by cognitive scientists) of a large database may be required and the computer is much better and faster at this than we are. In other words, our reasons, certainly our most solid an d fulfil!able reasons, for conSUlting compu~ersand engaging in discourse with them will beto find out things relating to a framework we already have. The computer needs to know a few things about us and especially our language, and especially needs to know how to ask usto clarify what we said, even to present menus of in~entions for us to choose from as a response to something unexecutable by it. But more than anything, it needs to be able to make its structure of informa- tion clear to us. In this sense it will satisfy certain "person- properties -- we have working notions of at least the parameters and starting points for negotiation with people. Whereas with computers we have at best an entry strategy for an unfamiliar system, but very little to go on in common knowledge for assessing its informedness or even consistency. So on-line dialogue should not be like person-to-person dialogue in many respects. For instance, being overly explicit with a person is an indication of a Jud~aent we have made about their competence, This Judgment is quite likely to be offensive if it's wrong. (Sehegloff) This is not likely to be a problem with a computer from an experiential social action point of view. Who cares if the computer cannot perceive that we are competent members of some social category defined bya more or less common body of knowledge: We will have no proble~ in telling it what level to address in dealing with us, if it has any such levels of explicitness, nor in gear- ing our own remarks to the appropriate level once we find out what it can digest. On-line dialogue systems therefore have an ongoing task of representing th~ - selves, not the whole interactive world; and designers need not concern themselves so much with providing their systems with models of users, but rather providing users withFlear models of the system they are interacting with. These are the major concerns, obviously. I wish I could now deliver the par~ of the paper that you, d be of most interest: what a dialogue system should contain and how it can m~ke available those contents in order to realize the purposes Just stated. Instead I have addressed myself to what look like common fallacies that I see in attempting to incorpor- porate natural language dialogue issues into computer dialogue issues without access to the social under- stand/rigs embedded in social interaction research. 90
Parsing

W. A. Martin
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

Looking at the Proceedings of last year's Annual Meeting, one sees that the session most closely paralleling this one was entitled Language Structure and Parsing. In a very nice presentation, Martin Kay was able to unite the papers of that session under a single theme. As he stated it, "There has been a shift of emphasis away from highly structured systems of complex rules as the principal repository of information about the syntax of a language towards a view in which the responsibility is distributed among the lexicon, semantic parts of the linguistic description, and a cognitive or strategic component. Concomitantly, interest has shifted from algorithms for syntactic analysis and generation, in which the control structure and the exact sequence of events are paramount, to systems in which a heavier burden is carried by the data structure and in which the order of events is a matter of strategy."

This year, the papers of the session represent a greater diversity of research directions. The paper by Hayes and the paper by Wilensky and Arens are both examples of what Kay had in mind, but the paper by Church, with regard to the question of algorithms, is quite the opposite. He holds that once the full range of constraints describing people's processing behavior has been captured, the best parsing strategies will be rather straightforward, and easily explained as algorithms. Perhaps the seven papers in this year's session can best be introduced by briefly citing some of the achievements and problems reported in the works they reference.

In the late 1960's Woods [Woods70] capped an effort by several people to develop ATN parsing. This well known technique applies a straightforward top-down, left-to-right, depth-first parsing algorithm to a syntactic grammar. Especially in the compiled form produced by Burton [Burton76a], the parser was able to produce the first parse in good time, but without semantic constraints, numerous syntactic analyses could be and sometimes were found, especially in sentences with conjunctions. A strength of the system was the ATN grammar, which can be described as a set of context-free production rules whose right hand sides are finite state machines and whose transition arcs have been augmented with functions able to read and set registers, and also able to block a transition on their arc. Many people have found this a convenient formalism in which to develop grammars of English.

The Woods ATN parser was a great success, and attempts were made to exploit it (a) as a model of human processing and (b) as a tool for writing grammars. At the same time it was recognized to have limitations. It wasn't tolerant of errors, and it couldn't handle unknown words or constructions (there were many syntactic constructions which it didn't know). In addition, the question answering system fed by the parser had a weak notion of word and phrase semantics and it was not always able to handle quantifiers properly. It is not clear these components could have supported a stronger interaction with syntactic parsing, had Woods chosen to attempt it.

On the success side, Kaplan [Kaplan72] was inspired to claim that the ATN parser provided a good model for some aspects of human processing.
Some aspects which might be modeled are:

Linguistic Phenomenon                          ATN Computational Mechanism
Preferred readings of ambiguous sentences      Ordered trying of alternative arcs
Garden path sentences                          Back-tracking
Perceived complexity differences               Hold list costing; counting total transitions
Center embedding bounds                        None

In one study, most people got the a) reading of 1). One can try to explain this

1) They told the girl that Bill liked the story.
1a) They told the girl [that [Bill liked the story]S].
1b) They told [the girl that Bill liked]NP the story.

by ordering the arcs leaving the state where the head noun of an NP has been accepted: a pop arc (terminating the NP) is tried before an arc accepting a modifying relative clause. However, Rich [Rich75] points out that this arc ordering solution would seem to have difficulties with 2). This sentence is often not perceived

2) They told the girl that Bill liked that he would be at the football game.

as requiring backup, yet if the arcs are ordered as for 1), it does require backup. There is no doubt that whatever is going on, the awareness of backup in 3) is so much stronger than in 2) that it seems like a different phenomenon. To resolve this,

3) The horse raced past the barn fell.

one could claim that perceived backup is some function of the length of the actual backup, or maybe of the degree of commitment to the original path (although it isn't clear what this would mean in ATN terms).

In this session, Ferrari and Stock will turn the arc ordering game around and describe, for actual texts, the probability that a given arc is the correct exit arc from a node, given the arc by which the parser arrived at the node. It will be interesting to look at their distributions. In the speech project at IBM Watson Laboratories [Baker75] it was discovered some time ago that, for a given text, the syntactic class of a word could be predicted correctly over 90% of the time given only the syntactic class of the preceding word. Interestingly, the correctness of predictions fell off less than 10% when only the current word was used. One wonders if this same level of skewness holds across texts, or (what we will hear) for the continuation of phrases. These results should be helpful in discussing the whole issue of arc ordering.

Implicit in any arc ordering strategy is the assumption that not all parses of a sentence will be found. Having the "best" path, the parser will stop when it gets an acceptable analysis. Arc ordering helps find that "best" path. Marcus [Marcus78] agreed with the idea of following only a best path, but he claimed that the reason there is no perceived backup in 2) is that the human parser is able to look ahead a few constituents instead of just one state and one constituent in making a transition. He claims this makes a more accurate model of human garden path behavior, but it doesn't address the issue of unlimited stack depth. Here, Church will describe a parser similar in design to Marcus', except that it conserves memory. This allows Church to address psychological facts not addressed by either Marcus or the ATN models. Church claims that exploiting stack size constraints will increase the chances of building a good best path parser.

Besides psychological modeling, there is also an interest in using the ATN formalism for writing and teaching grammars. Paramount here is explanation, both of the grammar and its application to a particular sentence. The paper by Kehler and Woods reports on this.
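Returning to the arc-ordering discussion above, the following minimal sketch shows one way exit arcs could be ranked by statistics of the kind Ferrari and Stock describe. The state name, arc labels, and probabilities are invented for illustration and are not taken from any of the papers under discussion.

```python
# A purely illustrative sketch of arc ordering in an ATN state: exit arcs are
# tried in an order determined by estimated probabilities conditioned on the
# arc by which the state was entered.  All names and numbers are hypothetical.

ARC_STATS = {
    # (entry_arc, exit_arc) -> estimated probability that exit_arc is correct
    ("NP-head", "POP"): 0.7,
    ("NP-head", "REL-CLAUSE"): 0.3,
}

def ordered_exit_arcs(entry_arc, exit_arcs):
    """Sort the exit arcs of a state so the most likely continuation is
    tried first; unseen (entry, exit) pairs get a small default weight."""
    return sorted(exit_arcs,
                  key=lambda arc: ARC_STATS.get((entry_arc, arc), 0.05),
                  reverse=True)

# With these invented numbers, a parser that has just accepted the head noun
# of an NP tries the POP arc before the relative-clause arc, which yields the
# preferred reading 1a) of sentence 1) without backtracking.
print(ordered_exit_arcs("NP-head", ["REL-CLAUSE", "POP"]))  # ['POP', 'REL-CLAUSE']
```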
Weischedel picks a particular problem, responding to an input which the ATN can't handle. He associates a list of diagnostic conditions and actions with each state. When no parse is found, the parser finds the last state on the path which progressed the furthest through the input string and executes its diagnostic conditions and actions.

When a parser uses only syntactic constraints, one expects it to find a lot of parses. Usually the number of parses grows more than linearly with sentence length. Thus, for a fairly complete grammar and moderate to long sentences, one would expect that the case of no parses (handled by Weischedel) would be rare in comparison with the other two cases (not handled) where the set of parses doesn't include the correct one, or where the grammar has been mistakenly written to allow undesired parses. Success of the above efforts to follow only the best path would clearly be relevant here. No doubt Weischedel's procedure can help find a lot of bugs if the test examples are chosen with a little care. But there is still interesting work to be done on grammar and parser explanation, and Weischedel is one of those who intends to explore it.

The remaining three papers stem from three separate traditions which reject the strict syntactic ATN formalism, each for its own reasons. They are:

i) Semantic Grammars -- the Davidson and Kaplan paper
ii) Semantic Structure Driven Parsing -- the Wilensky and Arens paper
iii) Multiple Knowledge Source Parsing -- the Hayes paper

Each of these systems claims some advantage over the more widely known and accepted ATN. The semantic grammar parser can be viewed as a variation of the ATN which attempts to cope with the ATN's lack of semantics. Kaplan's work builds on work started by Burton [Burton76b] and picked up by Hendrix et al. [Hendrix78]. The semantic grammar parser uses semantic instead of syntactic arc categories. This collapses syntax and semantics into a single structure. When an ATN parsing strategy is used the result is actually less flexible than a syntactic ATN, but it is faster because syntactic possibilities are eliminated by the semantics of the domain. The strategy is justified in terms of the performance of actual running systems. Kaplan also calls on a speed criterion in suggesting that when an unknown word is encountered the system assume all possibilities which will let parsing proceed. Then if more than one possibility leads to a successful parse, the system should attempt to resolve the word further by file search or user query. As Kaplan points out, this trick is not limited to semantic grammars, but only to systems having enough constraints. It would be interesting to know how well it would work for systems using Osherson's [Osherson78] predicability criterion, instead of truth, for their semantics. Osherson distinguishes between "green idea", which he says is silly, and "married bachelor", which he says is just false. He notes that "idea is not green" is no better, but "bachelor is not married" is fine. Predicability is a looser constraint than Kaplan uses, and if it would still be enough to limit database search this would be interesting, because predicability is easier to implement across a broad domain.

Wilensky is a former student of Schank's and thus comes from a tradition which emphasizes semantics over syntax. He is right in emphasizing the importance of phrase semantics.
The grammarians Quirk and Greenbaum [Quirk73] point out the syntactic and semantic importance of verb phrases over verbs. In linguistics, Bresnan [Bresnan80] is developing a theory of lexical phrases which accounts, by lexical relations between constituents of a phrase, for many of the phenomena explained by the old transformational grammar. For example, given

4) There were reported to have been lions sighted.

a typical ATN parser would attempt by register manipulations to make "lions" the subject. Using a phrase approach, "there be lions sighted" can be taken as meaning "exist lions sighted," where "lions" is an object and "sighted" an object complement. "There" is related to the "be" in "been" by a series of relationships between the arguments of semantic structures. Wilensky appears to have suppressed syntax into his semantic component, and so it will be interesting to see how he handles the traditional syntactic phenomena of 4), like passive and verb forms.

Finally, the paper by Hayes shows the influence of the speech recognition projects, where bad input gave the Woods ATN great difficulty. Text input is much better than speech input. However, examination of actual input [Malhotra75] does show sentences like:

5) What would have profits have been?

Fortunately, these cases are rare. Much more likely is ellipsis and the omission of syntax when the semantics are clear. For example, the missing commas in

6) Give ratios of manufacturing costs to sales for plants 1 2 3 and 4 for 72 and 73.

Examples like these show that errors and omissions are not random phenomena and that there can be something to the study of errors and how to deal with them.

In summary, it can be seen that while much progress has been made in constructing usable parsers, the basic issues, such as the division of syntax, semantics, and pragmatics both in representation and in order of processing, are still up for grabs. The problem has plenty of structure, so there is good fun to be had.

References

[Baker75] Baker, J.K. "Stochastic Modeling for Automatic Speech Understanding," in Speech Recognition: Invited Papers of the IEEE Symposium, Reddy, D.R. (Ed.), 1975.

[Bresnan80] Bresnan, Joan. "Polyadicity: Part I of a Theory of Lexical Rules and Representations," MIT Department of Linguistics (January 1980).

[Burton76a] Burton, Richard R. and Woods, William A. "A Compiling System for Augmented Transition Networks," COLING 76.

[Burton76b] Burton, Richard R. "Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems," BBN Report 3453, Bolt, Beranek, and Newman, Boston, Ma. (December 1976).

[Hendrix78] Hendrix, Gary G., Sacerdoti, E.D., Sagalowicz, D., and Slocum, J. "Developing a Natural Language Interface to Complex Data," ACM Trans. on Database Systems, vol. 3, no. 2 (June 1978), pp. 105-147.

[Kaplan72] Kaplan, Ronald M. "Augmented Transition Networks as Psychological Models of Sentence Comprehension," Artificial Intelligence, 3 (October 1972), pp. 77-100.

[Malhotra75] Malhotra, Ashok. "Design Criteria for a Knowledge-Based English Language System for Management: An Experimental Analysis," MIT/LCS/TR-146, MIT Laboratory for Computer Science, Cambridge, Ma. (February 1975).

[Marcus78] Marcus, Mitchell. "A Theory of Syntactic Recognition for Natural Languages," Ph.D. thesis, MIT Dept. of Electrical Engineering and Computer Science, Cambridge, Ma. (to be published by MIT Press).

[Osherson78] Osherson, Daniel N. "Three Conditions on Conceptual Naturalness," Cognition, 6 (1978), pp. 263-289.

[Quirk73] Quirk, R. and Greenbaum, S. A Concise Grammar of Contemporary English, Harcourt Brace Jovanovich, New York (1973).

[Rich75] Rich, Charles. "On the Psychological Reality of Augmented Transition Network Models of Sentence Comprehension," unpublished paper, MIT Artificial Intelligence Laboratory, Cambridge, Ma. (July 1975).

[Woods70] Woods, William A. "Transition Network Grammars for Natural Language Analysis," CACM 13, 10 (October 1970), pp. 591-602.
If The Parser Fails" Ralph M. Weischedel University of Delaware and John E. Black" W. L. Gore & Associates, Inc. The unforgiving nature of natural language components when someone uses an unexpected input has recently been a concern of several projects. For instance, Carbonell (1979) discusses inferring the meaning of new words. Hendrix, et.al. (1978) describe a system that provides a means for naive users to define personalized paraphrases and that lists the items expected next at a point where the parser blocks. Weischedel, et.al. (1978) show how to relax both syntactic and semantic constraints such that some classes of ungrammatical or semantically inappropriate input are understood. Kwasny aod Sondheimer (1979) present techniques for understanding several classes of syntactically ill-formed input. Codd, et.al. (1978) and Lebowitz (1979) present alternatives to top-down, left-to-right parsers as a means of dealing with some of these problems. This paper presents heuristics for responding to inputs that cannot be parsed even using the techniques referenced in the last paragraph for relaxing syntactic and semantic constraints. The paper concentrates on the results of an experiment testing our heuristics. We assume only that the parser is written in the ATN formalism. In this method, the parser writer must assign a sequence of condition-action pairs for each state of the ATN. If no parse can be found, the condition-action pairs of the last state of the path that progressed furthest through the input string are used to generate a message about the nature of the problem, the interpretation being followed, and what was expected next. The conditions may refer to any ATN register, the input string, or any computation upon them (even semantic ones). The actions can include any computation (even restarting the parse after altering the unparsed portion) and can generate any responses to the user. These heuristics were tested on a grammar which uses only syntactic information. We constructed test data such that one sentence would block at each of the 39 states of the ATN where blockage could occur. In only 3 of the 39 cases did the parser continue beyond the point that was the true source of the parse failing. From the tests, it was clear that the heuristics frequently pinpointed the exact cause of the block. However, the response did not always convey that precision to the user due to the technical nature of the grammatical cause of the blockage. Even though the heuristics correctly selected one state in the over- whelming majority of cases, frequently there were several possible causes for blocking at a given state. Another aspect of our analysis was the computational and developmental costs for adding these heuristics to a parser. Clearly, only a small fraction of the parsing time and memory usage is needed to record the longest partial parse and generate messages for the last state on it. Significant effort is required of the grammar writer to devise the condition-action pairs. However, such analysis of the grammar certainly adds to the programmer's understanding of the grammar, and the condition-action pairs provide significant documentation "This work was supported by the University of Delaware Research Foundation, Inc. • "This work was performed while John Black was with the Dept. of Computer & Infor~nation Sciences, University of Delaware. of the grammar. Only one page of program code and nine pages of constant character strings for use in messages were added. 
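To make the mechanism concrete, here is a minimal sketch of the kind of failure diagnosis described above: diagnostic condition-action pairs attached to ATN states, consulted for the last state on the path that progressed furthest through the input when no parse is found. The state names, conditions, and messages are hypothetical illustrations, not the authors' code.

```python
# A hypothetical sketch of the failure heuristic: when no parse is found, run
# the condition-action pairs attached to the deepest state reached on the
# longest partial parse.  All state names and messages are invented.

def diagnose(longest_partial_path, remaining_input, diagnostics):
    """Return the first diagnostic message whose condition holds for the
    unparsed remainder of the input."""
    last_state = longest_partial_path[-1]
    for condition, action in diagnostics.get(last_state, []):
        if condition(remaining_input):
            return action(remaining_input)
    return "No parse was found, and no specific diagnosis applies."

DIAGNOSTICS = {
    # Hypothetical state reached after an initial NP has been accepted;
    # a verb is expected next.
    "S/NP": [
        (lambda rest: len(rest) == 0,
         lambda rest: "The sentence seems to end before any verb was found."),
        (lambda rest: True,
         lambda rest: "I expected a verb here but found '%s'." % rest[0]),
    ],
}

print(diagnose(["S/", "S/NP"], ["quickly", "the", "report"], DIAGNOSTICS))
```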
From the experiment we conclude the following: I. The heuristics are powerful for small natural language front ends to an application domain. 2. The heuristics should also be quite effective in a compiler, where parsing is far more deterministic. 3. The heuristics will be more effective in a semantic grammar or in a parser which frequently interacts with a semantic component to guide it. We will be adding condition-action pairs to the states of the RUS parser (Bobrow, 1978) and will add relaxation techniques for both syntactic and semantic constraints as described in Weischedel, et.al. (1978) and Kwasny and Sondheimer (1979). The purpose is to test the effectiveness of paraphrasing partial semantic inter- pretations as a means of explaining the interpretation being followed. Furthermore, Bobrow (1978) indicates that semantic guidance makes the RUS parser significantly more deterministic; we wish to test the effect of this on the ability of our heuristics to pinpoint the nature of a block. References Bobrow, Robert S., "The RUS System," in Research in Natural Language Understanding, B. L. Webber and R. Bobrow (eds.), BB~I Report No. 3878, Bolt Beranek and Newman, Inc., Cambridge, MA, 1978. Carbonell, Jaime G., "Toward a Self-Extending Parser," in Proceedings of the llth Annual Meeting of the Association for Computational Linguistics, San Diego, August, 1979, 3-7. Codd, E. F., R. S. Arnold, J-M. Cadiou, C. L. Chang and N. Roussopoulis, "RENDEZVOUS Version l: An Experimental- Language Query Formulation System for Casual Users of Relational Data Bases," IBM Research Report RJ 2144, San Jose, CA, January, 1978. Hendrix, Gary G., Earl D. Sacerdoti, Daniel Sagalowicz, and Jonathan Slocum, "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems, 3, 2, (1978), I05-147. Kwasny, Stan C. and Norman K. Sondheimer, "Ungrammatica- lity and Extragrammaticality in Natural Language Understanding Systems," in Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, San Diego, August, 1979, 19-23. Lebowitz, Michael, "Reading with a Purpose," in Proceedings of the 17th Annual Meeting of the Association for Computational Linguistics, San Diego, August, 1979, 59-63. Weischedel, Ralph M., Wilfried M. Voge, and Mark James, "An Artificial Intelligence Approach to Language Instruction," Artificial Intelligence, lO, (1978), 225-240. 95
Flexible Parsing

Phil Hayes and George Mouradian
Computer Science Department, Carnegie-Mellon University
Pittsburgh, PA 15213, USA

Abstract

When people use natural language in natural settings, they often use it ungrammatically, missing out or repeating words, breaking off and restarting, speaking in fragments, etc. Their human listeners are usually able to cope with these deviations with little difficulty. If a computer system wishes to accept natural language input from its users on a routine basis, it must display a similar indifference. In this paper, we outline a set of parsing flexibilities that such a system should provide. We go on to describe FlexP, a bottom-up pattern-matching parser that we have designed and implemented to provide these flexibilities for restricted natural language input to a limited-domain computer system.

1. The Importance of Flexible Parsing

When people use natural language in natural conversation, they often do not respect grammatical niceties. Instead of speaking sequences of grammatically well-formed and complete sentences, people often miss out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or use otherwise incorrect grammar. The following example conversation involves a number of these grammatical deviations:

A: I want.. can you send a memo a message to to Smith
B: Is that John or John Smith or Jim Smith
A: Jim

Instead of being unable or refusing to parse such ungrammaticality, human listeners are generally unperturbed by it. Neither participant in the above example, for instance, would have any difficulty in following the conversation. If computers are ever to converse naturally with humans, they must be able to parse their input as flexibly and robustly as humans do. While considerable advances have been made in recent years in applied natural language processing, few of the systems that have been constructed have paid sufficient attention to the kinds of deviation that will inevitably occur in their input if they are used in a natural environment. In many cases, if the user's input does not conform to the system's grammar, an indication of incomprehension followed by a request to rephrase may be the best he can expect. We believe that such inflexibility in parsing severely limits the practicality of natural language computer interfaces, and is a major reason why natural language has yet to find wide acceptance in such applications as database retrieval or interactive command languages.

In this paper, we report on a flexible parser, called FlexP, suitable for use with a restricted natural language interface to a limited-domain computer system. We describe first the kinds of grammatical deviations we are trying to deal with, then the basic design decisions for FlexP with justification for them based on the kinds of problem to be solved, and finally more details of our parsing system with worked examples of its operation. These examples, and most of the others in the paper, represent natural language input to an electronic mail system that we and others [1] are constructing as part of our research on user interfaces. This system employs FlexP to parse its input.

2. Types of Grammatical Deviation

There are a number of distinct types of grammatical deviation, and not all types are found in all types of communication situation. In this section,
we first define the restricted type el communication situation that we will be concerned will1, thai of a limile~-I-domain computer system and its user communicating via a keyboard and (hsplay screen. We then present a taxonomy of grammatical deviations common in this context, and by implication a set el parsing flexibilities needed to dealwith them. 2.1. Communicalio0t withaLimited-DomainSystem In the remainder of this paper, we will focus out a restricted type of canto)unitarian situation, that between a limited-domain system and its user, and on the p:trsing flexibilities neede(f by suuh a system Le ColJe with the user's inevitable grammatical deviations. Examples of the type of system we have in mind are data-b;~e retr0eval systems, electroa)ic mail systems, medical diaunosis systems, or any systems operating in a domain so rE'stricted thai they can COmpkHely understand ;311y relevant input a user might provide, In short, exactly the kind O! system that is normally used for work in applied natural Imtguage processing. There are several points to be made. First. although ,~uch systems can be expected to parse and understand anythi,lg relevant la their domain, their users cannot be expected to confine tllemselves to relevant input. As Bohrow el, al. 121 .ale. users oflcn explain Iltl~ir underlying motivations or olhorwzse jt=nlify their l(~(Itli'.%l,'~ ill l(~llnB ~Itlih~ ilr(!l~v;ilil Ill lh(!' (i()lnain ()fth(: ~yst~in. ]'hit ro,~tlJ| is lhal slJch systems cannot expecl Io parse ;.dl llx~il inlnH .,:vun wdh lhe use of flexible parsirx.j lechniqq.. Secondly. a flexible parser is just purl of the conversational comporient of such ;,I system, ai'id cannot solve all parsi,g problems by itself, For example, il a parser can extract two coherent fragments train an otherwise incomprellensible input, the decisions about what Ihe system should next must be made by another component of the system. A decision on wllether to jump to a conclusion about wllat the user intended, to present him with a set of alternative interpretations, or to profess total confusion, can only be made with information about the Itistory of the conversation, beliefs about the user's goals, and measures of plausibility for any given action by the user. See [7~ for more discusSion o| Ihis broader view of graceful interaction in man-machine communication. Suffice it to say that we assume a flexible parser is iust one component of a larger system, and Ihal any incomprehensions or ambiguities that it finds are passed on to another component of the system with access to higtler-level information, putting it in a better Position to decide what to do next. Finally, we assume that, as usual for such systems, input is typed, rather than spoken as is normal in human conversations. This simplifies low.level processing tremendously because key-strokes unlike speech wave-farms are unambiguous. On the other hand, problems like misSpelling arise, and a flexible parser cannot assume thut segmentation into words by spaces :Slid carriage returns will always be corr~:t. However, such input is stilt one side of a conversation, rather than a polished text in the manner of most written material. As such, it is likely to contain many of the same type of errors normally found in spoken conversations. 2.2. Misspelling Misspelling is perhaps the most common form of grammatical deviation in written language. Accordingly. it is the form of ungrammaticality that has been dealt wdh the most by language processing systems. PARRY J t I J. 
LIFER [8], and numerous other systems have tried to correct misspelled input from their users.

An ability to correct spelling implies the existence of a dictionary of correctly spelled words. An input word not found in the dictionary is assumed to be misspelt and is compared against each of the dictionary words. If a dictionary word comes close enough to the input word according to some criteria of lexical matching, it is used in place of the input word. Spelling correction may be attempted in or out of context. For instance, there is only one reasonable correction for "relavent" or for "seperate", but for an input like "till" some kind of context is typically necessary, as in "I'll see you in April" or "he was shot with the stolen till." In effect, context can be used to reduce the size of the dictionary to be searched for correct words. This both makes the search more efficient and reduces the possibility of multiple matches of the input against the dictionary. The LIFER [8] system uses the strong constraints typically provided by its semantic grammar in this way to reduce the range of possibilities for spelling correction.

A particularly troublesome kind of spelling error results in a valid word different from the one intended, as in "show me on of the messages". Clearly, such an error can only be corrected through comparison against a contextually determined vocabulary.

2.3. Novel Words

Even accomplished users of a language will sometimes encounter words they do not know. Such situations are a test of their language learning skills. If one didn't know the word "fawn", one could at least decide it was a colour from "a fawn coloured sweater". If one just knew the word referred to a young deer, one might conclude that it was being used to mean the colour of a young deer. In general, beyond making direct inferences about the role of unknown words from their immediate context, vocabulary learning can require arbitrary amounts of real-world knowledge and inference, and this is certainly beyond the capabilities of present day artificial intelligence techniques (though see Carbonell [4] for work in this direction). There is, however, a very common special subclass of novel words that is well within the capabilities of present day systems: unknown proper names. Given an appropriate context, either sentential or discourse, it is relatively straightforward to parse unknown words into the names of people, places, etc. Thus in "send copies to Moledeski Chiselov" it is reasonable to conclude from the local context that "Moledeski" is a first name, "Chiselov" is a surname, and together they identify a person (the intended recipient of the copies). Strategies like this were used in the POLITICS [5], FRUMP [6], and PARRY [11] systems.

Since novel words are by definition not in the known vocabulary, how can a parsing system distinguish them from misspellings? In most cases, the novel words will not be close enough to known words to allow successful correction, as in the above example, but this is not always true; an unknown first name of "Al" could easily be corrected to "all". Conversely, it is not safe to assume that unknown words in contexts which allow proper names are really proper names, as in: "send copies to al managers". In this example, "al" probably should be corrected to "all". In order to resolve such cases, it may be necessary to check against a list of referents for proper names, if this is known, or otherwise to consider such factors as whether the initial letters of the words are capitalized. As far as we know, no systems yet constructed have integrated their handling of misspelt words and unknown proper names to the degree outlined above. However, the COOP [9] system allows systematic access to a data base containing proper names without the need for inclusion of the words in the system's parsing vocabulary.
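The following sketch illustrates the context-restricted spelling correction idea discussed in this section: an unrecognized word is compared only against the vocabulary that the current parse context allows. The vocabulary, similarity measure, and threshold are assumptions made for the example; this is not FlexP code.

```python
# A minimal sketch of spelling correction against a contextually restricted
# dictionary.  The context vocabularies and the 0.75 threshold are invented.

from difflib import SequenceMatcher

def best_correction(word, context_vocabulary, threshold=0.75):
    """Return the contextually permitted word closest to the input word,
    or None if nothing is similar enough to count as a misspelling."""
    best, best_score = None, 0.0
    for candidate in context_vocabulary:
        score = SequenceMatcher(None, word.lower(), candidate.lower()).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best if best_score >= threshold else None

# In a context where only a quantifier or determiner can appear, "on" is
# corrected to "one"; in a context expecting a command verb, "shw" becomes "show".
print(best_correction("on", ["one", "all", "every", "the"]))           # one
print(best_correction("shw", ["show", "send", "delete", "forward"]))   # show
```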
"or" probably should be corrected to "all". In order to resolve such cas~. it may be necessary to clleck ;}gainst a list of referents lor proper nameR, if this is known, or otherwis(~ to consider such factors aR whelher tile inlli;ll letters of Iho words are capilalized. AS lar as we know. no systems yet constr,ctc<t have int~jroted their handling of mi.~spclt wortl.q iln(t unknown, proper nanl~"s Io Ihe degree oullined ;.Ifl¢)v~.,. However, It}t~ COOP 19l .~,y,,it{~ln allows sysllHllnlic access In a dat;.i llaSt. • (:Ulllailllll~j |)lOller ii;nnes wllhotll Ihe ni'~L~t Ii)l ilICitlSlOll of Ihe words ,1 Ihe system's ilnrsing vocabulary. 2.4. Erroneous segmenting markers Wntten text is segmented into words by spaces and new lines, and into higher level units by commas, periods and olher punctuation marks. Both classes, especially the second, may be omitted or inserted speciously. Spoken laf~gtJago s a so segmented, but by the Clt,te different markers of stress, interaction and noise words and phrases: we will not cons=der those further here. IncorreCt segmentation ;ll the lexical level results in two or more words being run togetl)er, as in "runtogether". or a single word being split up into two or more segments, ns in "tog ether" or (inconveniently) "to get her". or combinations of these effects as in "runlo geth el". In all cases, it seems natural to deal with such errors by extending the spelling correction mechanism to be able to recognize target words as initial se(jments of unknown words, and vice-versa. AS far as we know. no current systems deal with incorrect segmentation into words. The other type of segmenting error, incorrect punctuation, has a much broader impact on parsing methodology. Current parsers typ;catty work one sentence at a time. and assume that each sentence is terminated by an explicit end of sentence marker. A flexible parser must be able to deal with Ihe potenliai absence of such a marker, and recognize the sentence boundary regardless. It sllould also be able to make use of such punctuation if il is used correctly, and to ignore it if it is used incorrectly. Instead of punCtuation, many interactive systems use carriage-return to il~'Jicale sentence termination. Missing sentence terminators in this case correspond to two sentences on one line. or to the typing of a sentence without the terminating return, while specious terminators correspond tO typing a sentence on more than one line. 2.5. Btokon-OflandRestaHodUtferallcas In spoken language, it is very common to break off and restart all or part of an utterance: I want to -- Could you lell me the name? Was tile man --er-- tile ofliciol here yesterday? Usually. such restarts are sKjnall~l in some way. by "urn" or "er". or more explicitly by "lers back tip" or some si,,Ior phrase. In written language, such restarts do not normnlly occur because they are erase(l by lhe writer bolore the reatler sees Ihenl. interactive COmputer sysle--n~ typically prpvide facilitios for Iheir users tO delete the last cllorocler, word. or ctlrletlI hno as Ihotlgh ii had never been typed, for the very purpose of allowing such restalts. Given these signals, tl~e lustarIs aru ~Jasy Io (letecl anti inlerpr(;I. However. sonle|inlL'bs tIS(~rs I:lll to make use ol Ihese s=gnals. Sometimes. for instance, i~lptlt not containing a carriage-return can be spread over several lines by intermixing of input and output. A flexible parser should be able to make sense out. 
of "obvious" restarts that are not signalled, as in: delete the show me aU the messages from Smith 2.6. Fragmentary and Otherwise Elliptical Input Naturally occurmg language often involves utterances that are not complete sentences. Often the appropriateness of such fragmentary utterances depends oil conversational or physical context as in: A: Do you mean Jim Smith or Fred Smith? B: Jim A: Send a message to Smith B: OK A: with copies to Jones A flexible parser must be able to parse such fragments given the appropriate context. There is a question here of what such fragments should be parsed into. Parsing systems which have dealt with the problem have typically assumed tl it such inputs are ellipses of complete sentences, and that their parsing involves finding that complete sentence, and pursing it. Thus the sentence corresponding to "Jim" in the example above would be "I moon Jim". Essenhally this view has been taken by the LIFER [81 and GUS [2l systems. An alternative view =s that such fragments are not ellipses of more complete sentences, but are themselves complete 98 utterances given tile context in which they occur, and sholdd be parsc<l as such. We have taken this view in our approach to flexihto parsing, as we will explain more fully below. Carbonoll (personal communication) suggests a third view appropriale for some fragments: that of an extended case frame, hi tile second examt.lle above, for instance. A's 'with copies fo Jones" forms a natural pint ul the c=ts~.' Irame est~.lblish~t fly "Self(| a message to .~;mith" Yet :molh~.,r approach to Ir~lgmnnt l)ar:;iflq is taken in the PLANES system ~ 12[ which always parses in terms el major fragments rather than Complete utterances. This technique relies on there I~ing only one way to combine Ihe fragments thus obtained, whicll may he a reasonable aSs|lnlptJon tar ill;.iny limited clara;rot systenls. Ellipses call ulna occur without regard Io context. A type Ihal inleract=ve .';yshtms are paHK:uhtrly likely 1o I:.lce is cryl)licness in which ;irhcles :tnd fdh(~r nOll-e~.~.%enlJ;iJ words are entitled ;is ill ":;how nleSS;.IgOS alter June 17" inste.;p.I ol the m¢lre complete ".,;how me all mesnacles dat(.~l after June 17" Again, tiler(: is a question of whether to consider Ihe cryptic tnl)LII cunlpluh~, which would me~fn inodJlying file system's urzmmmr, or whether to consider il ellil}tical, and cnmplele it by using Ilexlble techniques te parse if against the comply.re versioll as it exisls in Ihe standard gr;Inlnlar. Other cam;non forms of ellipses are associated with conjunction as in: John got up and [John] brushed his teeth. Mary saw Bill and BIll {sawl Mary. Fred recognized [Ihe buildingl and [Fred[ walked towards the building. Since conjunctions can support such a wide range of ellipsis, it is generally impractical to recognize such utterances by appropriate grammar exlensions. Efforts to deal with conhlnctJon have Iherefore depended on general mecllanisms which supplement the basic parsing strategy, as in fhe LUNAR system [fSl, or wilich modify the grammar temporarily, as ill the work el Kwasny and Sondheimer I IOI. We have not attempted 1o deal wilh tills type of ellipsis in our parsing system, and will not discuss further the type at flexibility it requires. 2.7. 
2.7. Interjected Phrases, Omission, and Substitution

Sometimes people interject noise or other qualifying phrases into what is otherwise a normal grammatical flow, as in:

    I want the message dated I think June 17

Such interjections can be inserted at almost any point in an utterance, and so must be dealt with as they arise by flexible techniques. It is relatively straightforward for a system of limited comprehension to screen out and ignore standard noise phrases such as "I think" or "as far as I can tell". More troublesome are interjections that cannot be recognized by the system, as might for instance be the case in:

    Display [just to refresh my memory] the message dated June 17.
    I want to see the message [as I forgot what it said] dated June 17.

where the unrecognized interjections are bracketed. A flexible parser should be able to ignore such interjections. There is always the chance that the unrecognized part was an important part of what the user was trying to say, but clearly, the problems that arise from this cannot be handled by a parser.

Omissions of words (or phrases) from the input are closely related to cryptic input as discussed above, and one way of dealing with cryptic input is to treat it as a set of omissions. However, in cryptic input only inessential information is missed out, while it is conceivable that one could also omit essential information, as in:

    Display the message June 17

Here it is unclear whether the speaker means a message dated on June 17, before June 17, or after June 17 (we assume that the system addressed can display things immediately, or not at all). If an omission can be narrowed down in this way, the parser should be able to generate all the alternatives (for contextual resolution of the ambiguity, or as the basis of a question to the user). If the omission can be narrowed down to one alternative, then the input was merely cryptic.

Besides omitting words and phrases, people sometimes substitute incorrect or unintended ones. Often such substitutions are spelling errors and should be caught by the spelling correction mechanism, but sometimes they are inadvertent substitutions or uses of equivalent vocabulary not known to the system. This type of substitution is just like an omission except that there is an unrecognized word or phrase in the place where the omitted input should have been. For instance, in "the message over June 17", "over" takes the place of "dated" or "sent after" or whatever else is appropriate at that point. If the substitution is of vocabulary which is appropriate but unknown to the system, parsing of substituted words can provide the basis of vocabulary extension.

2.8. Agreement Failure

It is not uncommon for people to fail to make the appropriate agreement between the various parts of a noun or verb phrase, as in:

    I wants to send a messages to Jim Smith.

The appropriate action is to ignore the lack of agreement, and Weischedel and Black [13] describe a method for relaxing the predicates in an ATN which typically check for such agreements. However, it is generally not possible to conclude locally which value of the marker (number or person) for which the clash occurs is actually intended. We considered examples in which the disagreement involves more than inflections (as in "the message over June 17") in the section on substitutions.
2.9. Idioms

Idioms are phrases whose interpretation is not what would be obtained by parsing and interpreting them constructively in the normal way. They may also not adhere to the standard syntactic rules. Idioms must thus be parsed as a whole in a pattern matching kind of mode. Parsers based purely on pattern matching, like that of PARRY [11], are thus able to parse idioms naturally, while others must either add a preprocessing phase of pattern matching, as in the LUNAR system [15], or mix specific patterns in with more general rules, as in the work of Kwasny and Sondheimer [10]. Semantic grammars [3, 8] provide a relatively natural way of mixing idiomatic and more general patterns.

2.10. User Supplied Changes

In normal human conversation, once something is said, it is said and cannot be changed, except indirectly by more words which refer back to the original ones. In interactively typed input, there is always the possibility that a user may notice an error he has made and go back and correct it himself, without waiting for the system to pursue its own, possibly slow and ineffective, methods of correction. With appropriate editing facilities, the user may do this without erasing intervening words, and, if the system is processing his input on a word by word basis, may thus alter a word that the system has already processed. A flexible parser must be able to take advantage of such user provided corrections to unknown words, and to prefer them over its own corrections. It must also be prepared to change its parse if the user changes a valid word to another different but equally valid word.

3. An Approach to Flexible Parsing

Most current parsing systems are unable to cope with most of the kinds of grammatical deviation outlined above. This is because typical parsing systems attempt to apply their grammar to their input in a rigid way, and since deviant input, by definition, does not conform to the grammar, they are unable to produce any kind of parse for it at all. Attempts to parse more flexibly have typically involved parsing strategies to be used after a top-down parse using an ATN [14] or similar transition net has failed. Such efforts include the ellipsis and paraphrase mechanisms of LIFER [8], the predicate relaxation techniques of Weischedel and Black [13], and several of the devices for extending ATNs proposed by Kwasny and Sondheimer [10].

We have constructed a parser, FlexP, which can apply its grammar to its input flexibly, and thus deal with the grammatical deviations discussed in the previous section. We should emphasize, however, that FlexP is designed to be used in the interface to a restricted-domain system. As such, it is intended to work from a domain-specific semantic grammar, rather than one suitable for broader classes of input. FlexP thus does not embody a solution for flexible parsing of natural language in general. In describing FlexP, we will note those of its techniques that seem unlikely to scale up to use with more complex grammars with wider coverage.

We have adopted in FlexP an approach to flexible parsing based not on ATNs, but closer to the pattern-matching parser of the PARRY system [11], possibly the most robust parser yet constructed. Our approach is based on several design decisions:

• bottom-up rather than top-down parsing: this aids in the parsing of fragmentary utterances, and in the recognition of interjections and restarts.
• pattern matching: this is essential for idioms, and also aids in the detection of omissions and substitutions in non-idiomatic phrases.

• parse suspension and continuation: the ability to suspend a parse and later resume its processing is important for interjections, restarts, and non-explicit terminations.

In the remainder of this section we examine and justify these design decisions in more detail.

3.1. Bottom-Up Parsing

Our choice of a bottom-up strategy is based on our need to recognize isolated sentence fragments. If an utterance which would normally be considered only a fragment of a complete sentence is to be recognized top-down, there are two approaches to take. First, the grammar can be altered so that the fragment is recognized as a complete utterance in its own right. This is undesirable because it can cause enormous expansion of the grammar, and because it becomes difficult to decide whether a fragment appears in isolation or as part of a larger utterance, especially if the possibility of missing end of sentence markers also exists. The second option is for the parser to infer from the conversational context what grammatical sub-category (or sequence of sub-categories) the fragment might fit into, and then to do a top-down parse from that sub-category. This essentially is the tactic used in the GUS [2] and LIFER [8] systems. This strategy is clearly better than the first one, but has two problems: first, of predicting all possible sub-categories which might come next, and secondly, of inefficiency if a large number are predicted. Kwasny and Sondheimer [10] use a combination of the two strategies by temporarily modifying an ATN grammar to accept fragment categories as complete utterances at the times they are contextually predicted.

Bottom-up parsing avoids the problem of predicting what sub-categories may occur. If a fragment filling a given sub-category does occur, it is parsed as such whatever the context. However, if a given input can be parsed as more than one sub-category, the bottom-up approach would have to produce them all, even if only one would be predicted top-down. In a system of limited comprehension, fragmentary recognition is sometimes necessary because not all of an input can be recognized, rather than because of intentional ellipsis. Here, it is probably impossible to make predictions, and bottom-up parsing is the only method that is likely to work. As described below, bottom-up strategies, coupled with suspended parses, are also helpful in recognizing interjections and restarts.
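To make the bottom-up organization concrete, the following sketch (in Python; FlexP itself is not described at this level of implementation detail, and the rule set, category names, and data structures here are our own invention for illustration) shows how linear pattern rules of the kind introduced in the next subsection might be indexed by every constituent they mention, so that an isolated fragment retrieves the same candidate rules regardless of the context in which it appears.

    # A rule pairs a linear pattern of constituent categories with the
    # category of phrase it builds (cf. Sections 3.2 and 4.1).  "?" marks an
    # optional element and "*" a repeatable one.
    RULES = [
        {"pattern": ["Display", "MessageDescription"], "result": "OperationRequest"},
        {"pattern": ["?Det", "*MsgAdj", "MsgHead", "*MsgCase"], "result": "MessageDescription"},
        {"pattern": ["On", "Date"], "result": "MsgCase"},
    ]

    def category(element):
        """Strip the optional (?) and repeatable (*) markers from a pattern element."""
        return element.lstrip("?*")

    # Bottom-up indexing: every category points at the rules that mention it,
    # so a recognized word or phrase indexes the same rules whether or not a
    # larger context (such as a verb) has been seen.
    INDEX = {}
    for rule in RULES:
        for element in rule["pattern"]:
            INDEX.setdefault(category(element), []).append(rule)

    def rules_for(constituent_category):
        """Return candidate rules a newly recognized constituent might extend."""
        return INDEX.get(constituent_category, [])

    # Recognizing a MsgHead ("messages") retrieves the MessageDescription rule
    # even when it occurs as an isolated fragment.
    print([r["result"] for r in rules_for("MsgHead")])   # -> ['MessageDescription']

Because every rule is reachable from any of its constituents, no contextual prediction of sub-categories is needed; the price, as noted above, is that an ambiguous fragment indexes all of its readings.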
3.2. Pattern Matching

We have chosen to use a grammar of linear patterns rather than a transition network because pattern-matching meshes well with bottom-up parsing, because it facilitates recognition of utterances with omissions and substitutions, and because it is necessary anyway for the recognition of idiomatic phrases. The grammar of the parser is a set of rewrite or production rules whose left hand side is a linear pattern of constituents (lexical or higher level) and whose right hand side defines a result constituent. Elements of the pattern may be labelled optional or allow for repeated matches. We make the assumption, certainly true for the grammar we are presently working with, that the grammar will be semantic rather than syntactic, with patterns corresponding to idiomatic phrases or to object and event descriptions meaningful in some limited domain, rather than to general syntactic structures.

Linear patterns fit well with bottom-up parsing because they can be indexed by any of their components, and because, once indexed, it is straightforward to confirm whether a pattern matches input already processed in a way consistent with the way the pattern was indexed. Patterns help with the detection of omissions and substitutions because in either case the relevant pattern can still be indexed by the remaining elements that appear correctly in the input, and thus the pattern as a whole can be recognized even if some of its elements are missing or incorrect. In the case of substitutions, such a technique can actually help focus the spelling correction, proper name recognition, or vocabulary learning techniques, whichever is appropriate, by isolating the substituted input and the pattern constituent which it should have matched. In effect, this allows the normally bottom-up parsing strategy to go top-down to resolve such substitutions.

In normal left to right processing, it is not necessary to activate all the patterns indexed by every new word as it is considered. If a new word is accounted for by a pattern that has already been partially matched by previous input, it is likely that no other patterns need to be indexed and matched for that input. This heuristic allows FlexP's parsing algorithm to limit the number of patterns it tries to match. We should emphasize, however, that it is a heuristic, and while it has caused us no trouble with the limited-domain grammar we have been using, it is unclear how well it would transfer to a more complex grammar. FlexP's algorithm does, however, carry along multiple partial parses in other ambiguous cases, removing the need for any backtracking.

3.3. Parse Suspension and Continuation

FlexP employs the technique of suspending a parse with the possibility of later continuation to help with the recognition of interjections, restarts, and implicit terminations. The parsing algorithm works left to right in a breadth-first manner. It maintains a set of partial parses, each of which accounts for the input already processed but not yet accounted for by a completed parse. The parser attempts to incorporate each new input into each of the partial parses. If this is successful, the partial parses are extended and may increase or decrease in number. If no partial parse can be extended, the entire set is saved as a suspended parse. There are several possible explanations for input mismatch, i.e. the failure of the next input to extend a parse.

• The input could be an implicit termination, i.e. the start of a new top-level utterance, and the previous utterance should be assumed complete.

• The input could be a restart, in which case the active parse should be abandoned and a new parse started from that point.

• The input could be the start of an interjection, in which case the active parse should be temporarily suspended, and a new parse started for the interjection.

It is not possible, in general, to distinguish between these cases at the time the mismatch occurs. If the active parse is not at a possible termination point, then input mismatch cannot indicate implicit termination, but may indicate either restart or interjection.
It is necessary to suspend the active parse and wait to see if it is continued at the next input mismatch. On the other hand, if the active parse is at a possible termination point, input mismatch does not rule out interjection or even restart. In this situation, our algorithm tentatively assumes that there has been an implicit termination, but suspends the active parse anyway for subsequent potential continuation. Note also that the possibility of implicit termination provides justification for the strategy of interpreting each input immediately it is received. If the input signals an implicit termination, then the user may well expect the system to respond immediately to the input thus terminated.

4. Details of FlexP

This section describes how FlexP achieves the flexibilities discussed earlier. The implementation described is being used as the parser for an intelligent interface to a multi-media message system [1]. The intelligence in this interface is concentrated in a User Agent which mediates between the user and the underlying tool system. The Agent ensures that the interaction goes smoothly by, among other things, checking that the user specifies the operations he wants performed and their parameters correctly and unambiguously, conducting a dialogue with the user if problems arise. The role of FlexP as the Agent's parser is to transform the user's input into the internal representations employed by the Agent. Usually this input is a request for action by the tool or a description of objects known to the tool. Our examples are drawn from that context.

4.1. Preliminary Example

Suppose the user types

    display new messages

Interpretation begins as soon as any input is available. The first word is used as an index into the store of rewrite rules. Each rule gives a pattern and a structure to be produced when the pattern is matched. The components of the structure are built from the structures or words which match the elements of the pattern. The word "display" indexes the rule:

    (pattern: (Display MessageDescription)
     result:  [StructureType: OperationRequest
               Operation: Display
               Message: (Filler MessageDescription)])

Using this rule the parser constructs the partial parse tree

    (Display MessageDescription)
     |
     display

We call the partially-instantiated pattern which labels the upper node a hypothesis. It represents a possible interpretation for a segment of input. The next word "new" does not directly match the hypothesis, but since "new" is a MsgAdj (an adjective which can modify a description of a message), it indexes the rule:

    (pattern: (?Det *MsgAdj MsgHead *MsgCase)
     result:  [StructureType: MessageDescription
               Components: ............])

Here, "?" means optional, and "*" means repeatable. For the sake of clarity, we have omitted other prefixes which distinguish between terminal and non-terminal pattern elements. The result of this rule fits the current hypothesis, so it extends the parse as follows:

    (Display MessageDescription)
     |            |
     |            (?Det *MsgAdj MsgHead *MsgCase)
     |                     |
     display               new

The hypothesis is not yet fully confirmed even though all the elements are matched: its second element matches another lower level hypothesis which is only incompletely matched. This lower pattern becomes the current hypothesis because it predicts what should come next in the input stream.
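Continuing in the same illustrative vein (again hypothetical Python, not the notation FlexP actually processes), a partially matched pattern of this kind can be summarized by the position of the next element to consider; the optional and repeatable markers then determine which categories are eligible to match the next input, which is exactly the prediction the current hypothesis supplies.

    PATTERN = ["?Det", "*MsgAdj", "MsgHead", "*MsgCase"]

    def eligible(pattern, position):
        """Categories that may legally match the next input item, starting from
        the given position; a repeatable element that has just matched keeps
        its own position eligible."""
        out = []
        for element in pattern[position:]:
            out.append(element.lstrip("?*"))
            if not element.startswith(("?", "*")):   # a required element blocks
                break                                 # everything to its right
        return out

    # At the start of the pattern a Det, a MsgAdj, or the MsgHead could come next:
    print(eligible(PATTERN, 0))   # -> ['Det', 'MsgAdj', 'MsgHead']
    # After "new" has matched *MsgAdj, another MsgAdj or the MsgHead is expected,
    # but a MsgCase ("on June 17") is not yet eligible under strict matching:
    print(eligible(PATTERN, 1))   # -> ['MsgAdj', 'MsgHead']

The same eligibility computation reappears below: it supplies the candidate vocabularies for spelling correction (Section 4.2) and defines the strict matching that flexible matching relaxes (Section 4.4).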
The third input matches the category MsgHead (head noun of a message description) and so fits the current hypothesis. This match fills the last non-optional slot in that pattern. By doing so it makes the current hypothesis and its parent pattern potentially complete. When the parser finds a potentially complete phrase whose result is of interest to the Agent (and the parent phrase in this example is in that category), the result is constructed and sent. However, since the parser has not seen a termination signal, this parse is kept active; the input seen so far may be only a prefix of some longer utterance such as "display new messages about ADA". In this case "about ADA" would be recognized as a match for MsgCase (a prepositional phrase that can be part of a message description), the parse would be extended, and a revision of the previous structure sent to the Agent.

4.2. Unrecognized Words

When an input word cannot be found in the dictionary, spelling correction is attempted in a background process which runs at lower priority than the parser. The input word and a list of possibilities derived from the current hypothesis are passed as arguments. For example:

    display the new messaegs

produces the partial parse

    (Display MessageDescription)
     |            |
     |            (?Det *MsgAdj MsgHead *MsgCase)
     |              |      |
     display       the    new

The lower pattern is the current hypothesis and has two elements eligible to match the next input. Another MsgAdj could be matched. A match for MsgHead would also fit. Both elements have associated lists of keywords known to occur in phrases which match them. The one for MsgHead includes the word "messages", and the spelling corrector passes this back to the parser as the most likely interpretation.

In some cases the spelling corrector produces several likely alternatives. The parser handles such ambiguous words using the same mechanisms which accommodate phrases with ambiguous interpretations. That is, alternative interpretations are carried along until there is enough input to discriminate those which are plausible from those which are not. The details are given in the next section.

The user may also correct the input text himself. These changes are handled in much the same way as those proposed by the spelling corrector. Of course, these user-supplied changes are given priority, and parses built using the former version must be modified or discarded.

Spelling correction is run as a separate, lower priority process because a reasonable parse may be produced even without a proper interpretation for the unknown word. Since spelling correction can involve rather time-consuming searches, this work is best done when the parser has no better alternatives to explore.

4.3. Ambiguous Input

In the first example there was only one hypothesis about the structure of the input. More generally, there may be several hypotheses which provide competing interpretations about what has already been seen and what will appear next. Until these partial parses are found to be inconsistent with the actual input, they are carried along as part of the active parse. Therefore the active parse is a set of partial parse trees, each with a top-level hypothesis about the overall structure of the input so far and a current hypothesis concerning the next input. The actual implementation allows sharing of common structure among competing hypotheses and so is more efficient than this description suggests. The input

    were there any messages on

could be completed by giving a date ("...on Tuesday") or a topic ("...on ADA"). Consequently, the sub-phrase "any messages on" results in two partial parses:

    (?Det *MsgAdj MsgHead *MsgCase)
      |              |        |
     any          messages   (On Date)
                              |
                              on

and

    (?Det *MsgAdj MsgHead *MsgCase)
      |              |        |
     any          messages   (On Topic)
                              |
                              on

If the next input were "Tuesday" it would be consistent with the first parse, but not the second. Since one of the alternatives does account for the input, those that do not may be discarded. On the other hand, if all the partial parses fail to match the input, other action is taken. We consider such situations in the section on suspended parses. As a general strategy, we carry several possible interpretations only as long as there is no clear best alternative. In particular, no flexible parsing techniques are used to support parses for which there are plausible alternatives under normal parsing. This heuristic helps achieve the efficiency required for real-time response, but could conceivably fail to find appropriate parses. We have not encountered such circumstances with the small domain-specific semantic grammar we have been using.
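As an illustration of how the current hypothesis can restrict spelling correction (Section 4.2), the following fragment is a hedged sketch of our own: the keyword lists, the category names, and the use of Python's difflib as the closeness measure are assumptions for the example, not FlexP's actual mechanism.

    import difflib

    # Hypothetical keyword lists associated with pattern elements; in FlexP each
    # element eligible to match next supplies the words to correct against.
    KEYWORDS = {
        "MsgAdj": ["new", "old", "urgent"],
        "MsgHead": ["message", "messages", "mail"],
    }

    def correct(unknown, eligible_elements, cutoff=0.6):
        """Propose a correction for an unknown word, restricted to the vocabulary
        of the elements the current hypothesis predicts next."""
        candidates = [w for e in eligible_elements for w in KEYWORDS.get(e, [])]
        return difflib.get_close_matches(unknown, candidates, n=1, cutoff=cutoff)

    # "display the new messaegs": the hypothesis predicts MsgAdj or MsgHead next,
    # so only those vocabularies are searched.
    print(correct("messaegs", ["MsgAdj", "MsgHead"]))   # -> ['messages']

If the closeness measure returns several candidates above the cutoff, each would simply be carried as a separate partial parse, exactly as other ambiguous input is handled above.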
4.4. Flexible Matching

The only flexibility described so far is that allowed by the optional elements of patterns. If omissions can be anticipated, allowances may be built into the grammar. In this section we show how other omissions may be handled and other flexibilities achieved by allowing additional freedom in the way an item is allowed to match a pattern. There are two ways in which the matching criteria may be relaxed, namely

• relax consistency constraints, e.g. number agreement

• allow out of order matches

Consistency constraints are predicates which are attached to rules. They assert relationships which must hold among the items which fill the pattern. These constraints allow context-sensitive constructions in the grammar. Such predicates are commonly used for similar purposes by ATN parsers [14], and the flexibility achieved by relaxing these constraints has been explored before [13]. The technique fits smoothly into FlexP but has not actually been needed or used in our current application.

On the other hand, out of order matching is essential for the parser's approach to errors of omission, transposition, and substitution. Even when strictly interpreted, several elements of a pattern may be eligible to match the next input item. For example, in the pattern for a MessageDescription

    (?Det *MsgAdj MsgHead *MsgCase)

each of the first three elements is initially eligible but the last is not. On the other hand, once MsgHead has been matched, only the last element is eligible under the strict interpretation of the pattern. Consider the input

    display new about ADA

The first two words parse normally to produce

    (Display MessageDescription)
     |            |
     |            (?Det *MsgAdj MsgHead *MsgCase)
     |                     |
     display               new

The next word does not fit that hypothesis. The two eligible elements predict either another message adjective or a MsgHead. The word "about" does not match either of these, nor can the parser construct any path to them using intermediate hypotheses.
Since there are no other partial parses available to account for this input, and since normal matching fails, flexible matching is tried. First, previously skipped elements are compared to the input. In this example, the element ?Det is considered but does not match. Next, elements to the right of the eligible elements are considered. Thus MsgCase is considered even though the non-optional element MsgHead has not been matched. This succeeds and allows the partial parse to be extended to

    (Display MessageDescription)
     |            |
     |            (?Det *MsgAdj MsgHead *MsgCase)
     |                     |               |
     |                     |              (About Topic)
     |                     |               |
     display               new            about

which correctly predicts the final input item.

Unrecognizable substitutions are also handled by this mechanism. In the phrase

    display the new stuff about ADA

the word "stuff" is not found in the dictionary, so spelling correction is tried but does not produce any plausible alternatives. While spelling correction is underway, the remaining words can be parsed by simply omitting "stuff" and using the flexible matching procedure. Transpositions are handled through one application of flexible matching if one element of the transposed pair is optional, two applications if not.

4.5. Suspended Parses

Interjections are more common in spoken than in written language, but do occur in typed input sometimes. To deal with such input, our design allows for blocked parses to be suspended rather than merely discarded. Users, especially novices, may embellish their input with words and phrases that do not provide essential information and cannot be specifically anticipated. Consider two examples:

    display please messages dated June 17
    display for me messages dated June 17

In the first case, the interjected word "please" could be recognized as a common noise phrase which means nothing to the Agent except possibly to suggest that the user is a novice. The second example is more difficult. Both words of the interjected phrase can appear in a number of legitimate and meaningful constructions; they cannot be ignored so easily.

For the latter example, parse suspension works as follows. After the first word, the active parse contains a single partial parse:

    (Display MessageDescription)
     |
     display

The next word does not fit this hypothesis, so it is suspended. In its place, a new active parse is constructed. It contains several partial parses including

    (For Person)        and        (For TimeInterval)
     |                              |
     for                            for

The next word confirms the first of these, but the fourth word "messages" does not. When the parser finds that it cannot extend the active parse, it considers the suspended parse. Since "messages" fits, the active and suspended parses are exchanged and the remainder of the input processed normally, so that the parser recognizes "display messages dated June 17" as if it had never contained "for me".
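The suspend-and-swap bookkeeping just described can be rendered schematically as follows. This is a minimal sketch under our own simplifying assumptions (partial parses are reduced to objects that merely accept or reject the next word, and the word that triggers a new parse is assumed to belong to it); it is not FlexP's implementation.

    class Parse:
        def __init__(self, name, accepts):
            self.name = name
            self.accepts = accepts          # predicate: word -> bool

    def process(words, start_parse, new_parse_for):
        """Extend the active parse word by word; on mismatch, either resume a
        suspended parse (swapping it with the active one) or suspend the active
        parse and start a fresh one for the mismatching word."""
        active, suspended = start_parse, []
        for word in words:
            if active.accepts(word):
                continue                    # normal extension of the active parse
            for i, old in enumerate(suspended):
                if old.accepts(word):       # a suspended parse continues: swap
                    suspended[i], active = active, old
                    break
            else:
                suspended.append(active)    # suspend, and start a new parse here
                active = new_parse_for(word)
        return active, suspended

    # "display for me messages dated June 17": the display request is suspended
    # at "for", resumed at "messages", and the interjection ends up suspended.
    display = Parse("display-request",
                    lambda w: w in {"display", "messages", "dated", "June", "17"})
    final, shelved = process("display for me messages dated June 17".split(),
                             display,
                             lambda w: Parse("interjection", lambda x: x in {"for", "me"}))
    print(final.name, [p.name for p in shelved])   # -> display-request ['interjection']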
5. Conclusion

When people use language naturally, they make mistakes and employ economies of expression that often result in language which is ungrammatical by strict standards. In particular, such grammatical deviations will inevitably occur in the input of a computer system which allows its user to employ natural language. Such a computer system must, therefore, be prepared to parse its input flexibly if it is to avoid frustration for its user. In this paper, we have attempted to outline the main kinds of flexibility a natural language parser intended for natural use should provide. We also described a bottom-up pattern-matching parser, FlexP, which exhibits these flexibilities, and which is suitable for restricted natural language input to a limited-domain system.

References

1. Ball, J. E. and Hayes, P. J. Representation of Task-Independent Knowledge in a Gracefully Interacting User Interface. Tech. Rept., Carnegie-Mellon University Computer Science Department, 1980.
2. Bobrow, D. G., Kaplan, R. M., Kay, M., Norman, D. A., Thompson, H., and Winograd, T. "GUS: a Frame-Driven Dialogue System." Artificial Intelligence 8 (1977), 155-173.
3. Burton, R. R. Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems. BBN Report 3453, Bolt, Beranek, and Newman, Inc., December 1976.
4. Carbonell, J. G. Towards a Self-Extending Parser. Proc. of 17th Annual Meeting of the Assoc. for Comput. Ling., La Jolla, Ca., August 1979, pp. 3-7.
5. Carbonell, J. G. Subjective Understanding: Computer Models of Belief Systems. Ph.D. Th., Yale University, 1979.
6. DeJong, G. Skimming Stories in Real-Time. Ph.D. Th., Computer Science Dept., Yale University, 1979.
7. Hayes, P. J. and Reddy, R. Graceful Interaction in Man-Machine Communication. Proc. Sixth Int. Jt. Conf. on Artificial Intelligence, Tokyo, 1979, pp. 372-374.
8. Hendrix, G. G. Human Engineering for Applied Natural Language Processing. Proc. Fifth Int. Jt. Conf. on Artificial Intelligence, MIT, 1977, pp. 183-191.
9. Kaplan, S. J. Cooperative Responses from a Portable Natural Language Data Base Query System. Ph.D. Th., Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, 1979.
10. Kwasny, S. C. and Sondheimer, N. K. Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems. Proc. of 17th Annual Meeting of the Assoc. for Comput. Ling., La Jolla, Ca., August 1979, pp. 19-23.
11. Parkison, R. C., Colby, K. M., and Faught, W. S. "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Artificial Intelligence 9 (1977), 111-134.
12. Waltz, D. L. "An English Language Question Answering System for a Large Relational Data Base." Comm. ACM 21, 7 (1978), 526-539.
13. Weischedel, R. M. and Black, J. Responding to Potentially Unparseable Sentences. Tech. Rept. 79/3, Dept. of Computer and Information Sciences, University of Delaware, 1979.
14. Woods, W. A. "Transition Network Grammars for Natural Language Analysis." Comm. ACM 13, 10 (October 1970), 591-606.
15. Woods, W. A., Kaplan, R. M., and Nash-Webber, B. The Lunar Sciences Natural Language System: Final Report. Tech. Rept. 2378, Bolt, Beranek, and Newman, Inc., 1972.
Parsing in the Absence of a Complete Lexicon

Jim Davidson and S. Jerrold Kaplan
Computer Science Department
Stanford University
Stanford, CA 94305

I. Introduction

It is impractical for natural language parsers which serve as front ends to large or changing databases to maintain a complete in-core lexicon of words and meanings. This note discusses a practical approach to using alternative sources of lexical knowledge by postponing word categorization decisions until the parse is complete, and resolving remaining lexical ambiguities using a variety of information available at that time.

II. The Problem

A natural language parser working with a database query system (e.g. PLANES [Waltz et al, 1976], LADDER [Hendrix, 1977], ROBOT [Harris, 1977], CO-OP [Kaplan, 1979]) encounters lexical difficulties not present in simpler applications. In particular, the description of the domain of discourse may be quite large (millions of words), and varies as the underlying database changes. This precludes reliance upon an explicit, fixed lexicon--a dictionary which records all the terms known to the system--because of:

(a) redundancy: Keeping the same information in two places (the lexicon and the database) leads to problems of integrity. Updating is more difficult if it must occur simultaneously in two places.

(b) size: A database of, say, 30,000 entries cannot be duplicated in primary memory.

For example, it may be impractical for a system dealing with a database of ships to store the names of all the ships in a separate in-core lexicon. If not all allowable lexical entries are explicitly encoded, there will be terms encountered by the parser about which nothing is known. The problem is to assign these terms to a particular class, in the absence of a specific lexical entry. Thus, given the sentence "Where is the Fox docked?", the parser would have to decide, in the absence of any prior information about "Fox", that it was the name of a ship, and not, say, a port.

III. Previous Approaches

There are several methods by which unknown terms can be immediately assigned to a category: the parser can check the database to see if the unknown term is there (as in [Harris, 1977]); the user may be interactively queried (in the style of RENDEZVOUS [Codd et al., 1978]); the parser might simply make an assumption based on the immediate context, and proceed (as in [Kaplan, 1979]). (We call these extended-lexicon methods.) However, these methods have the associated costs of time, inconvenience, and inaccuracy, and so constitute imperfect solutions.

Note in particular that simply using the database itself as a lexicon will not work in the general case. If the database is not fully indexed, the time required to search various fields to identify an unknown lexical item will tend to be prohibitive if this requires multiple disk accesses. In addition, as noted in [Kaplan, Mays, and Joshi, 1979], the query may reasonably contain unknown terms that are not in the database ("Is John Smith an employee?" should be answerable even if "John Smith" is not in the database).

IV. An Approach--Delay the Decision, then Compare Classification Methods

Our approach is to defer any lexical decision as long as possible, and then to apply the extended-lexicon methods identified above, in order of increasing cost. Specifically, all possible parses are collected, using a semantic grammar (see below), by allowing the unknown term to satisfy any category required to complete the parse.
The result is a list of categories for unknown terms, each of which is syntactically valid as a classification for the item. Consequently, interpretations that do not result in complete parses are eliminated. Since a semantic grammar tightly restricts the class of allowable sentences, this technique can substantially reduce the complexity of the remaining disambiguation process.

The category assignments leading to successful parses are then ordered by a procedure which estimates the cost of checking them. This ordering currently assumes an underlying cost model in which accessing the database on indexed or hashed fields is the least expensive, a single remaining interpretation warrants an assumption of correctness, and lastly, remaining ambiguities are resolved by asking the user.

A disambiguated lexical item is added temporarily to the in-core lexicon, so that future queries involving that term will not require repetition of the disambiguation process. After the item has not been referenced for some period of time (determined empirically) the term is dropped from the lexicon.

V. Example

This approach has been implemented in the parser for the Knowledge Base Management Systems (KBMS) project testbed [Wiederhold, 1978]. (The KBMS project is concerned with the application of artificial intelligence techniques to the design and use of database systems. Among other components, it contains a natural language front end for a CODASYL database in the merchant shipping domain.) The KBMS parser is implemented using the LIFER package, a semantic grammar based system designed at SRI [Hendrix, 1977]. Semantic grammars have the property that the metasymbols correspond to objects and actions in the domain, rather than to abstract grammatical concepts. For example, the KBMS parser has classes called SHIPS and PORTS.

The KBMS parser starts with a moderate-size in-core lexicon (400 words); however, none of the larger database categories (SHIPS, PORTS, SHIPCLASSES, CARGOES) are stored in the in-core lexicon. Following is a transcript from a run of the KBMS parser. The input to the parser is in italics; annotations are in braces.

    is izmir in italy?
    {"Italy" is known, from the in-core lexicon, to be a country. "Izmir" is unknown.}
    > UNKNOWN TERM IZMIR
    > POSSIBLE CATEGORIES: SHIPS, PORTS, CARGOES
    {At the point where the word IZMIR is encountered, any category which admits a name is possible. These include ships, ports, and cargoes.}
    > FINISHING PARSE
    > POSSIBLE CATEGORIES FOR IZMIR, LEADING TO VALID PARSE: SHIPS, PORTS
    {When the parse is complete, the category "cargoes" has been eliminated, since it did not lead to a valid parse. So, the remaining two categories are considered.}
    > CHECKING SHIPS FILE IN DATABASE
    > IZMIR NOT THERE
    > ASSUME THAT IZMIR IS A PORT.
    {Of the two remaining categories, SHIPS is indexed in the database by name while PORTS is not and would therefore be very expensive to check. So, the SHIPS file is examined first. Since IZMIR is not in the database as a shipname, only PORTS remains. At this point, the parser assumes that IZMIR is a port since this is the only remaining plausible interpretation. This assumption will be presented to the user, and will ultimately be verified in the database query.}
    > FINAL QUERY:
    > For the PORTS with Portname equal to 'IZMIR',
    > is the Portcountry equal to 'IT'?
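The control structure behind the transcript can be sketched as follows. This is a toy rendering of the strategy of Section IV under assumptions of our own (the hard-wired stand-in for the parser, the cost figures, and the tiny index are all invented); it is not the KBMS or LIFER code.

    GRAMMAR_CATEGORIES = ["SHIPS", "PORTS", "CARGOES"]

    def categories_completing_parse(sentence, unknown):
        """Stand-in for the semantic-grammar parser: return the categories that,
        if assigned to the unknown word, yield a complete parse."""
        if sentence.startswith("is") and " in " in sentence:
            return ["SHIPS", "PORTS"]           # as for "is izmir in italy?"
        return GRAMMAR_CATEGORIES

    LOOKUP_COST = {"SHIPS": 1, "PORTS": 100, "CARGOES": 100}  # SHIPS is indexed by name
    INDEXED_FILES = {"SHIPS": {"fox", "maru"}}                 # toy database index

    def resolve(sentence, unknown):
        """Order surviving categories by estimated cost, verify the cheap ones
        against the database, assume a lone survivor, otherwise ask the user."""
        candidates = sorted(categories_completing_parse(sentence, unknown),
                            key=lambda c: LOOKUP_COST[c])
        remaining = []
        for cat in candidates:
            if cat in INDEXED_FILES:                 # cheap, indexed lookup
                if unknown in INDEXED_FILES[cat]:
                    return cat                       # verified in the database
            else:
                remaining.append(cat)                # too expensive to check now
        if len(remaining) == 1:
            return remaining[0]                      # single plausible reading: assume it
        return "ask the user: " + " or ".join(remaining)

    print(resolve("is izmir in italy?", "izmir"))    # -> PORTS

The point of the sketch is only the ordering: categories eliminated by the parse never reach the database, the indexed file is consulted before any expensive search, and the user is bothered only when more than one uncheckable reading survives.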
A simple English generation system (written by Earl Sacerdoti), illustrated above, has been used to provide the user with a simplified natural language paraphrase of the query. Thus, invalid assumptions or interpretations made by the parser are easily detected. In a normal run, the information about lexical processing would not be printed.

In the example above, the unknown term happened to consist of a single word. In the general case, of course, it could be several words long (as is often the case with the names of ships or people).

Items recognized by extended-lexicon methods are added to the in-core lexicon for a period of time. The time at which they are dropped from the in-core lexicon is determined by consideration of the time of last reference, and comparison of the (known) cost of recognizing the items again with the cost in space of keeping them in core.

VI. Applications of this Method

The method of delaying a categorization decision until the parse is completed has some possible extensions. At the time a check is made of the database for classification purposes, it is known which query will be returned if the lookup is successful. For simple queries, therefore, it is possible not only to verify the classification of the unknown term, but also to fetch the answer to the query during the check of the database. For example, with the query "What cargo is the Fox carrying?", the system could retrieve the answer at the same time that it verified that the "Fox" is a ship. Thus, the phases of parsing and query-processing can be combined. This 'pre-fetching' is possible only because the classification decision has been postponed until the parse is complete.

The technique of collecting all parses before attempting verification can also provide the user with information. Since all possible categories for the unknown term have been considered, the user will have a better idea, in the event that the parse eventually fails, whether an additional grammar rule is needed, an item is missing from the database, or a lexicon entry has been omitted.

VII. Limitations of this Method

In its simplest form, this method is restricted to operating with semantic grammars. Specifically, the files in the database must correspond to categories in the grammar. With a syntactic grammar, the method is still applicable, but more complicated; semantic compatibility checks are necessary at various points. Moreover, the set of acceptable sentences is not as tightly constrained as with a semantic grammar, so there is less information to be gained from the grammar itself.

This method (and all extended-lexicon methods) prevents use of an INTERLISP-type spelling corrector. Such a spelling corrector relies on having a complete in-core lexicon against which to compare words; the thrust of the extended-lexicon methods is the absence of such a lexicon.

If the unknown term already has a meaning to the system which leads to a valid parse, the extended-lexicon methods won't even be invoked. For example, in the KBMS system, the question "Where is the City of Istanbul?" is interpreted as referring to the city, rather than the ship named 'City of Istanbul'. This difficulty is mitigated somewhat by the fact that the semantic grammar restricts the number of possible interpretations, so that the number of genuinely ambiguous cases like this is comparatively small. For instance, the query "What is the speed of the City of Istanbul?" would be parsed correctly as referring to a ship, since 'City of Istanbul' cannot meaningfully refer to the city in this case.
VIII. Conclusion

The technique discussed here could be implemented in practically any application that uses a semantic grammar--it does not require any particular parsing strategy or system. In the KBMS testbed, the work was done without any access to the internal mechanisms of LIFER. The only requirement was the ability to call user supplied functions at appropriate times during the parse, such as would be provided by any comparable parsing system.

This method was developed with the assumption that the costs of extended-lexicon operations such as database access, asking the user, etc., are significantly greater than the costs of parsing. Thus these operations were avoided where possible. Different cost models might result in different, more complex, strategies. Note also that the cost model, by using information in the database catalogue and database schema, can automatically reflect many aspects of the database implementation, thus providing a certain degree of domain-independence. Changes such as implementation of a new index will be picked up by the cost model, and thus be transparent to the design of the rest of the parser.

For natural language systems to provide practical access for database users, they must be capable of handling realistic databases. Such databases are often quite large, and may be subject to frequent update. Both of these characteristics render impractical the encoding and maintenance of a fixed, in-core lexicon. Existing systems have incorporated a variety of strategies for coping with these problems. This note has described a technique for reducing the number of lexical ambiguities for unknown terms by deferring lexical decisions as long as possible, and using a simple cost model to select an appropriate method for resolving remaining ambiguities.

Acknowledgments

This work was performed under ARPA contract #N00039-80-G-0132. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of DARPA or the U.S. Government. The authors would like to thank Daniel Sagalowicz, Norman Haas, Gary Hendrix and Earl Sacerdoti of SRI International for their invaluable assistance and for making their programs available to us. We would also like to thank Sheldon Finkelstein, Doug Appelt, and Jonathan King for proofreading the final draft.

References

[1] Codd, E. F., et al., Rendezvous Version 1: An Experimental English-Language Query Formulation System for Casual Users of Relational Data Bases, IBM Research Report RJ2144(29407), IBM Research Laboratory, San Jose, CA, 1978.
[2] Harris, L., Natural Language Data Base Query: Using the database itself as the definition of world knowledge and as an extension of the dictionary, Technical Report 77-2, Mathematics Dept., Dartmouth College, Hanover, NH, 1977.
[3] Hendrix, G. G., The LIFER Manual: A Guide to Building Practical Natural Language Interfaces, Technical Note 138, Artificial Intelligence Center, SRI International, 1977.
[4] Kaplan, S. J., Cooperative Responses from a Portable Natural Language Data Base Query System, Ph.D. dissertation, U. of Pennsylvania, available as HPP-79-19, Computer Science Department, Stanford University, Stanford, CA, 1979.
[5] Kaplan, S. J., E. Mays, and A. K. Joshi, A Technique for Managing the Lexicon in a Natural Language Interface to a Changing Data Base, Proc. Sixth International Joint Conference on Artificial Intelligence, Tokyo, 1979, pp. 463-465.
[6] Sacerdoti, E. D., Language Access to Distributed Data with Error Recovery, Proc. Fifth International Joint Conference on Artificial Intelligence, Cambridge, MA, 1977, pp. 196-202.
[7] Waltz, D. L., An English Language Question Answering System for a Large Relational Database, Communications of the ACM, 21, 7, July 1978.
[8] Wiederhold, Gio, Management of Semantic Information for Databases, Third USA-Japan Computer Conference Proceedings, San Francisco, 1978, pp. 192-197.
On Parsing Strategies and Closure[1]

Kenneth Church
MIT
Cambridge, MA 02139

This paper proposes a welcome hypothesis: a computationally simple device[2] is sufficient for processing natural language. Traditionally it has been argued that processing natural language syntax requires very powerful machinery. Many engineers have come to this rather grim conclusion; almost all working parsers are actually Turing Machines (TM). For example, Woods believed that a parser should have TM complexity and specifically designed his Augmented Transition Networks (ATNs) to be Turing Equivalent.

(1) "It is well known (cf. [Chomsky64]) that the strict context-free grammar model is not an adequate mechanism for characterizing the subtleties of natural languages." [Woods70]

If the problem is really as hard as it appears, then the only solution is to grin and bear it. Our own position is that parsing acceptable sentences is simpler because there are constraints on human performance that drastically reduce the computational complexity. Although Woods correctly observes that competence models are very complex, this observation may not apply directly to a performance problem such as parsing.[3] The claim is that performance limitations actually reduce parsing complexity. This suggests two interesting questions: (a) How is the performance model constrained so as to reduce its complexity, and (b) How can the constrained performance model naturally approximate competence idealizations?

1. The FS Hypothesis

We assume a severe processing limitation on available short term memory (STM), as commonly suggested in the psycholinguistic literature ([Frazier79], [Frazier and Fodor79], [Cowper76], [Kimball73, 75]). Technically a machine with limited memory is a finite state machine (FSM), which has very good complexity bounds compared to a TM. How does this assumption interact with competence? It is plausible for there to be a rule of competence (call it Ccomplex) which cannot be processed with limited memory. What does this say about the psychological reality of Ccomplex? What does this imply about the FS hypothesis?

When discussing certain performance issues (e.g. center-embedding)[4] it will be most useful to view the processor as a FSM; on the other hand, competence phenomena (e.g. subjacency) suggest a more abstract point of view. It will be assumed that there is ultimately a single processing machine with its multiple characterizations (the ideal and the real components). The processor does not literally apply ideal rules of competence for lack of ideal TM resources, but rather, it resorts to more realistic approximations. Exactly where the idealizations call for inordinate resources, we should expect to find empirical discrepancies between competence and performance.

A FS processor is unable to parse complex sentences even though they may be grammatical. We claim these complex sentences are unacceptable. Which constructions are in principle beyond the capabilities of a finite state machine? Chomsky and Bar-Hillel independently showed that (arbitrarily deep) center-embedded structures require unbounded memory [Chomsky59a, b] [Bar-Hillel61] [Langendoen75]. As predicted, arbitrarily center-embedded sentences are unacceptable, even at relatively shallow depths.

(2) #[The man [who the boy [who the students recognized] pointed out] is a friend of mine.]
(3) #[The rat [the cat [the dog chased] bit] ate the cheese.]

A memory limitation provides a very attractive account of the center-embedding phenomena (in the limit).[5]
(4) "This fact [that deeply center-embedded sentences are unacceptable], and this alone, follows from the assumption of finiteness of memory (which no one, surely, has ever questioned)." [Chomsky61, pp. 127]

What other phenomena follow from a memory limitation? Center-embedding is the most striking example, but it is not unique. There have been many refutations of FS competence models; each one illustrates the point: computationally complex structures are unacceptable.

Footnotes:

1. I would like to thank Peter Szolovits, Mitch Marcus, Bill Martin, Bob Berwick, Joan Bresnan, Jon Allen, Ramesh Patil, Bill Swartout, Jay Keyser, Ken Wexler, Howard Lasnik, Dave McDonald, Per-Kristian Halvorsen, and countless others for many useful comments.

2. Throughout this work, the complexity notion will be used in its computational sense as a measure of time and space resources required by an optimal processor. The term will not be used in the linguistic sense (the size of the grammar itself). In general, one can trade one off for the other, which leads to considerable confusion. The size of a program (linguistic complexity) is typically inversely related to the power of the interpreter (computational complexity).

3. A hash mark (#) is used to indicate that a sentence is unacceptable; an asterisk (*) is used in the traditional fashion to denote ungrammaticality. Grammaticality is associated with competence (post-theoretic), whereas acceptability is a matter of performance (empirical).

4. A center-embedded sentence contains an embedded clause surrounded by lexical material from the higher clause: [s x [s ...] y], where both x and y contain lexical material.

5. A complexity argument of this sort does not distinguish between a depth of three or a depth of four. It would require considerable psychological experimentation to discover the precise limitations.
We claim that center-embedding demands unbounded resources whereas movement has a bounded cost (in the wont case). 6 It is possible for a machine to process unbounded movement with very limited resources. 7 This shows that movement phenomena (unlike center-embedding) can be implemented in a performance model without approximation. (9) There seems likely to seem likely ... to be a problem. (10) What did Bob say that Bill said that ... John liked? It is a positive result when performance and competence happen to converge, as in the movement case. Convergence enables performance to apply competence rules without approximation. However. there is no logical necessity that performance and 6. The claim is that movement will never consume more than a bounded cost: the cost is independent of the length of the sentence. Some movement .~entences may be ea.'~ier than others (subject vs. object relatives). See (Church80] for more di~ussion. 7. In fact, the human processor may not be optimal The functional argument ob~erve~ that an optimal proce~r could process unbounded movement with bounded resources. This should encourage further investigation, but it alone is not sufficient evidence that the human procesr.or has optimal properties. competence will ultimately converge in every area. The FS hypothesis, if correct, would necessitate compromising many competence idealizations. 2. The Proposed Model: YAP Most psycholinguists believe there is a natural mapping from the complex competence model onto the finite performance world. This hypothesis is intuitively attractive, even though there is no logical reason that it need be the case. s Unfortunately, the ~ychoiinguistic literature does not precisely describe the mapping. We have implemented a parser (YAP) which behaves like a complex competence model on acceptable 9 cases, but fails to pane more difficult unacceptable sentences. This performance model looks very similar to the more complex competence machine on acceptable sentences even though it "happens" to run in severely limited memory. Since it is a minimal augmentation of existing psychological and linguistic work, it will hopefully preserve 1heir accomplishments, and in addition, achieve computational advantages. The basic design of YAP is similar to Marcus' Parsifal [Marcus79], with the additional limitation on memory. His parser, like most stack machine parsers, will occasionally fill the stack with structures it no longer needs, consuming unbounded memory. To achieve the finite memory limitation, it must be guaranteed that this never happens on acceptable structures. That is, there must be a procedure (like a garbage collector) for cleaning out the stack so that acceptable sentences can be parsed without causing a stack overflow. Everything on the stack should be there for a reason; in Marcus' machine it is possible to have something on the stack which cannot be referenced again. Equipped with its garbage collector, YAP runs on a bounded stack even though it is approximating a much more complicated machine (e.g. a PDA). l° The claim is that YAP can parse acceptable sentences with limited memory, although there may be certain unacceptable sentences that will cause YAP to overflow its stack. 3. Marcus' Determinism Hypothesis The memory constraint becomes particularly interesting when it is combined with a control constraint such as Marcus' Detfrminism Hvvothesis [Marcus79]. 
The Determinism Hypothesis claims that once the processor is committed to a particular path, it is extremely difficult to select an alternative. For example, most readers will misinterpret the underlined portions of (11)-(135 and then have considerable difficulty continuing. ]=or this reason, these unacceptable sentences are often called Qarden Paths (GP). The memory limitation alone fails to predict the unacceptability of (115-(I 3) since GPs don't 8. Chomsky and Lasnik (per~naI communication) have each suggested that the competence model might generate a non-computable ,..eL If this were indeed the c&~e, it would seem unlikely that there could be a mapping onto tile finite performance world. 9. Acceptability is a formal term: see footnote 3. 10. A push down automata (PDA) is a formalization of stack machines. 108 center-embed very deeply. Determinism offers an additional constraint on memory allocation which provides an account for the data. (11) ~T_.~h horse raced past the barn fell. (12) ~John .lifted a hundred pound bags. (1 3) HI told the boy the doR bit Sue would help him. At first we believed the memory constraint alone would subsume Marcus' hypothesis as well as providing an explanation of the center-embedding phenomena. Since all FSMs have a deterministic realization, tl it was originally supposed that the memory limitation guaranteed that the parser is deterministic (or equivalent to one that is). Although the argument is theoretically sound, it is mistaken) ~ The deterministic realization may have many more states than the corresponding non-deterministic FSM. These extra states would enable the machine to parse GPs by delaying the critical decision) 3 In spirit, Marcus' Determinism Hypothesis excludes encoding non-determinism by exploding the state space in this way. This amounts to an exponential reduction in the size of the state space, which is an interesting claim, not subsumed by FS (which only requires the state space to be finite). By assumption, the garbage collection procedure must act "deterministically"; it cannot backup or undo previous decisions. Consequently, the machine will not only reject deeply center-embedded sentences but it will also reject sentences such as (14) where the heuristic garbage collector makes a mistake (takes a garden path). (14) .if:Harold heard [that John told the teacher [that Bill said that Sam thought that Mike threw the first punch] yesterday]. YAP is essentially a stack machine parser like Marcus' Parsifal with the additional bound on stack depth. There will be a garbage collector to remove finished phrases from the stack so the space can be recycled. The garbage collector will have to decide when a phrase is finished (closed). 4. Closure Specifications Assume that the stack depth should be correlated to the depth of center-embedding. It is up to the garbage collector to close phrases and remove them from the stack, so only center-embedded phrases will be left on the stack. The garbage collector could err in either of two directions; it could be overly uthless, cleaning out a node (phrase) which will later turn out to be useful, or it could be overly conservative, allowing its limited memory to be congested with unnecessary information. In either case. the parser will run into trouble, finding the , I. A non-deterministic FSM with n states is equivalent to another deterministic FSM with 2 a states. 12. l am indebted to Ken Wexier for pointing this out. 13. The exploded states encode disjunctive alternatives. 
Intuitively, GPs mgge.~t that it im't possible to delay the critical decision: the machine has to decide which way to proceed. sentence unacceptable. We have defined the two types of errors below. (15) Premature Closure: The garbage collector prematurely removes phrases that turn out to be necessary. (16) Ineffective Closure: The garbage collector does not remove enough phrases, eventually overflowing the limited memory. There are two garbage collection (closure) procedures mentioned in the psycholinguistic literature: KimbaU's early closure [Kimball73. 75] and Frazier's late closure [Frazier79]. We will argue that Kimball's procedure is too ruthless, closing phrases too soon, whereas Frazier's procedure is too conservative, wasting memory. Admittedly it is easier to criticize than to offer constructive solutions. We will develop some tests for evaluating solutions, and then propose our own somewhat ad hoc compromise which should perform better than either of the two extremes, early closure and late closure, but it will hardly be the final word. The closure puzzle is extremely difficult, but also crucial to understanding the seemingly idiosyncratic parsing behavior that people exhibit. 5. Kimball's Early Closure The bracketed interpretations of (17)-(19) are unacceptable even though they are grammatical. Presumably, the root matrix"* was "closed off" before the final phrase, so that the alternative attachment was never considered. (17) ~:Joe figured [that Susan wanted to take the train to New York] out. (18) HI met [the boy whom Sam took to the park]'s friend. (19) ~The girl i applied for the jobs [that was attractive]i. Closure blocks high attachments in sentences like (17)-(19) by removing the root node from memory long before the last phrase is parsed. For example, it would close the root clause just before that in (21) and who in (22) because the nodes [comp that] and [comp who] are not immediate constituents of the root. And hence, it shouldn't be possible to attach anything directly to the root after that and who. js (20) Kimball's Early Closure: A phrase is closed as soon as possible, i.e., unless the next node parsed is an immediate constituent of that phrase. [Kimball73] (21) [s Tom said is- that Bill had taken the cleaning out ... (22) [s Joe looked the friend is- who had smashed his new car ... up 14. A matrix is roughly equivalent to a phra.,e or a clause. A matrix is a frame wifl~ slots for a mother and several daughters. The root matrix is the highest clause. [5, Kimbali's closure is premature in these examples since it is po~ibie to interpret yesterday attaching high as in: Tom said[that Bill had taken the c/caning out] yesterday. 109 This model inherently assumes that memory is costly and presumably fairly limited. Otherwise. there wouldn't be a motivation for closing off phrases. Although Kimball's strategy strongly supports our own position. it isn't completely correct. The general idea that phrases are unavailable is probably right, but the precise formulation makes an incorrect prediction. If the upper matrix is really closed off, then it shouldn't be possible to attach anything to it. Yet (23)-(24) form a minimal pair where the final constituent attaches low in one case. as Kimball would predict, but high in the other, thus providing a counter-example to Kimball's strategy. (23) I called [the guy who smashed my brand new car up]. (low attachment) (24) I called [the guy who smashed my brand new car] a rotten driver. 
(high attachment) Kimball would probably not interpret his closure strategy as literally as we have. Unfortunately computer modeh are brutally literal. Although there is considerable content to Kimball's proposal (closing before memory overflow,), the precise formulation has some flaws. We will reformulate the basic notion along with some ideas proposed by Frazier. 6. Frazier's Late Closure Suppose that the upper matrix is not closed off. as Kimball suggested, but rather, temporarily out of view. Imagine that only the lowest matrix is available at any given moment, and that the higher matrices are stacked up. The decision then becomes whether to attach to the current matrix or to c.l.gse it off. making the next higher matrix available. The strategy attaches as low as possible; it will attach high if all the lower attachments are impossible. Kimhall's strategy, on the other hand. prevents higher attachments by closing off the higher matrices as soon as possible. In (23). according to Frazier's late closure, up can attach t~ to the lower matrix, so it does; whereas in (24). a rotten driver cannot attach low. so the lower matrix is closed off. allowing the next higher attachment. Frazier calls this strategy late cto~ure because lower nodes (matrices) are closed as late as possible, after all the lower attachments have been tried. She contrasts her approach with Kimball's early closure, where :~e higher matrices are closed very early, before the lower matrices are done. j7 (25) Late Closure: When possible, attach incoming material into the clause or phrase currently being parsed. Unfortunately. it seems that Frazier's late closure is too conservative, allowing nodes to remain open too long. congesting valuable stack space. Without any form of early closure, right branching structures such as (26) and (27) are a real problem; the machine will eventually flU up with unfinished matrices, unable to close anything because it hasn't reached the bottom right-most clause. Perhaps Kimball's suggestion is premature, but Frazier's is ineffective. Our compromise will augment Frazier's strategy to enable higher clause, to close earlier under marked conditions (which cover the right branching case). (26) This is the dog that chased the cat that ran after the rat that ate the cheese that you left in the trap that Mary bought at the store that ... (27) I consider every candidate likely to be considered capable of being considered somewhat less than honest toward the people who ... Our argument is like all complexity arguments; it coasiden the limiting behavior as the number of clauses increase. Certainly there are numerous other factors which decide borderline cares (3-deep center.embedded clauses for example), some of which Frazier and Fodor have discussed. We have specifically avoided borderline cases because judgments are so difficult and variable; the limiting behavior is much sharper. In these limiting case,, though, there can be no doubt that memory limitations are relevant to parsing strategies. In particular, alternatives cannot explain why there are no acceptable sentences with 20 deep center-embedded clauses. The only reason is that memory is limited; see [Chomsky59a.b]. [Bar-Hillel6l] and [Langendnen75] for the mathematical argument. 7. A Compromise After criticizing early closure for being too early and late closure for being too late. we promised that ~e would provide yet another "improvement". 
Our suggestion is similar to late closure, except that we allow one case of early closure (the A-over-A early closure principle), to clear out stack space in the right recursive case. I~ The A-over-A early closure principle is similar to Kimball's early closure principle except that it wait, for two nodes, not just one. For example in (28). our principle would close [I that Bill raid $2] just before the that in S 3 whereas Kimball's scheme would close it just before the that in S 2 . 16. Deczding whether a node ca__nq or cannot attach is a difficult question which must be addressed. YAP uses the functional .~tructure [Bre.'man (to appear)] and the phrase structure rules. For now we will have to appeai to the reader's intuitions. |7, Frazier'.s strategy will attach to the lower matrix even when the final particle is required by the higher ciau.,.e &, in: ?! looked the guy who smashed my car ,40. or ?Put the block which is on the box on the tabl¢~ ig. Earl)' closure is similar to a compil" optimization called tail recursion, which converts right recursive exp,'essions into iterative ones, thus optimizing stack u~ge. Compilers would perform the optimization only when the structure is known to be right recursive: the A..over-A clo.,,ure principle is somewhat heuristic since the structure may turn out to be center-embedded. 110 (28) John said [I that Bill said [2 that Sam said [3 that • Jack ... (29) The A-over-A early closure principle: Given two phrases in the same category (noun phrase, verb phrase, clause, etc.), the higher closes when both are eligible for Kimball closure. That is. (1) both nodes are in ~he same category, (2) the next node parsed is not an immediate constituent of either phrase, and (3) the mother and all obligatory daughters have been attached to both nodes. This principle, which is more aggressive th.qn late closure, enables the parser to process unbounded right recursion within a bounded stack by constantly closing off. However, it is not nearly as ruthless as Kimball's early closure, because it waits for two nodes, not just one. which will hopefully alleviate the problems that Frazier observed with Kimball's strategy. There are some questions about the borderline cases where judgments are extremely variable. Although the A-over.A closure principle makes very sharp distinctions, the borderline are often questionable, l~ See [Cowper76] for an amazing collection of subtle judgments that confound every proposal yet made. However, we think that the A-over-A notion is a step in the right direction: it has the desired limiting behavior, although the borderline cases are not yet understood. We are still experimenting with the YAP system, looking for a more complete solution to the closure puzzle. In conclusion, we have argued that a memory limitation is critical to reducing performance model complexity. Although it is difficult to discover the exact memory allocation procedure, it seems that the closure phenomenon offers an interesting set of evidence. There are basically two extreme closure models in the literature. Kimball's early and Frazier's late closure. We have argued for a compromise position: Kimball's position is too restrictive (rejects too many sentences) and Frazier's position is too expensive (requires too much memory for right branching). We have propo~d our own compromise, the A-over-A closure principle, which shares many advantages of both previous proposals without some of the attendant disadvantages. 
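To make the statement of (29) concrete, here is a minimal sketch (ours, with invented field names and a deliberately simplified notion of "immediate constituent") of the garbage-collection test a YAP-like parser might run over its stack of open nodes:

from dataclasses import dataclass

@dataclass
class OpenNode:
    category: str            # e.g. "S", "NP"
    complete: bool           # mother and all obligatory daughters attached
    expects_incoming: bool   # next parsed node is an immediate constituent of it

def a_over_a_close(stack):
    # stack is ordered highest clause first; return the index of the node to
    # close, or None.  The higher of two same-category nodes closes when both
    # satisfy conditions (1)-(3) of the A-over-A early closure principle.
    for hi in range(len(stack)):
        for lo in range(hi + 1, len(stack)):
            a, b = stack[hi], stack[lo]
            if (a.category == b.category                       # (1)
                    and not a.expects_incoming
                    and not b.expects_incoming                 # (2)
                    and a.complete and b.complete):            # (3)
                return hi
    return None

# "John said [1 that Bill said [2 that Sam said ...": just before the third
# "that", S1 and S2 are both closable, so the higher one (S1) is popped,
# keeping the stack bounded on right-recursive structures.
s1 = OpenNode("S", complete=True, expects_incoming=False)
s2 = OpenNode("S", complete=True, expects_incoming=False)
print(a_over_a_close([s1, s2]))   # -> 0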
Our principle is not without its own problems; it seems that there is considerable work to be done. By incorporating this compromise, YAP is able to cover a wider range of phenomena :° than Parsifal while adhering to a finite state memory constraint. YAP provides empirical evidence that it is possible to build a FS performance device which approximates a more complicated competence model in the easy acceptable cases, but fails on certain unacceptable constructions such as closure violations and deeply center embedded sentences. In short, a finite state memory limitation simplifies the parsing task. 8. References Bar-Hillel. Perles, M., and Shamir, E., On Formal Properties of Simple Phrase Structure Grammars, reprinted in Readings in Mathematical Psychology, 1961. Chomsky. Three models for the description of language, I.R.E. Transactions on Information Theory. voL IT-2, Proceedings of the symposium on information theory. 1956. Chomsky. On Certain Formal Properties of Grammars, Information and Control, vol 2. pp. 137-167. 1959a. Chomsky, A Arose on Phrase Structure Grammars, Information and Control, vol 2, pp. 393-395, 1959b. Chomsky. On the Notion "Rule of Grammar'; (1961 ), reprinted in J. Fodor and J. Katz. ads., pp 119-136, 19~. Chomsky. A Transformational Approach to Syntax, in Fodor and Katz. eds., 1964. Cowper. Elizabeth A.. Constraints on Sentence Complexity: A Model for Syntactic Processing. PhD Thesis, Brown University, 1976. Church, Kenneth W.. On Memory Limitations in Natural Language Processing. Masters Thesis in progress, 1980. Frazier. Lyn, On Comprehending Sentences: Syntactic Parsing Strategies. PhD Thesis. University of Massachusetts, Indiana University Linguistics Club, 1979. Frazier, Lyn & Fodor. Janet D.. The Sausage machine: A New Two-Stage Parsing Model Cognition. 1979. Kimball. John. Seven Principles of Surface Structure Parsing in Natural Language. Cognition 2:1, pp 15-47, 1973. Kimball. Predictive Analysis and Over-the-Top Parsing, in Syntax arrd Symantics IV, Kimball editor, 1975. Langendoen. Finite-State Parsing of Phrase-Structure Languages and the Status of Readjustment Rules in Grammar, Linguistic Inquiry Volume VI Number 4, Fall 1975. Lasnik. H.. Remarks on Co-reference, Linguistic Analysis. Volume 2. Number 1. 1976. Marcus. Mitchell. A Theory of Syntactic Recognition for Natural Language, MIT Press, 1979. Woods, William, Transition Network Grammars for Natural Language Analysis. CACM. Oct. 1970. 19. [n particular, the A-over-A ear|y closure principle does not account for preferences in sentences like: [ said that you did it yesterday because there are only two clau.~es. Our principle only addresses the limhing cases. We believe there is another related mechanism (like Frazier's Minimal Attachment) to account for the preferred low attachments. See [Church80]. 20. T~e A-over-A principle is useful for thinking about conjunction. 111
STRATEGY SELECTION FOR AN ATN SYNTACTIC PARSER
Giacomo Ferrari and Oliviero Stock
Istituto di Linguistica Computazionale - CNR, Pisa

Performance evaluation in the field of natural language processing is generally recognised as being extremely complex. There are, so far, no pre-established criteria to deal with this problem.
1. It is impossible to measure the merits of a grammar, seen as the component of an analyser, in absolute terms. An "ad hoc" grammar, constructed for a limited set of sentences is, without doubt, more efficient in dealing with those particular sentences than a grammar constructed for a larger set. Therefore, the first rudimentary criterion, when evaluating the relationship between a grammar and a set of sentences, should be to establish whether this grammar is capable of analysing these sentences. This is the determination of linguistic coverage, and necessitates the definition of the linguistic phenomena, independently of the linguistic theory which has been adopted to recognise these phenomena.
2. In addition to its ability to recognise and coherently describe linguistic phenomena, a grammar should be judged by its capacity to resolve ambiguity, to bypass irrelevant errors in the text being analysed, and so on. This aspect of a grammar could be regarded as its "robustness" [P.Hayes, R.Reddy 1979].
3. Examining other aspects of the problem, in the analysis that we propose we will assume a grammar which is capable of dealing with the texts which we will submit to it. Let an ATN grammar with n nodes be of this type; n will be maintained constant for the following discussion. By text we intend a series of sentences, or of utterances by one of the speakers in a dialogue. When analysing such a text, once a constant n has been assumed, it is likely that, in addition to the content (the argument of the discourse), indications will appear on the grammatical choices made by the author of the text (or the speaker) when expressing himself on that argument (how the argument is expressed). When these indications have been adequately quantified, they can be used to correctly select the perceptive strategies (as defined in [Kaplan 72]) to be adopted in order to achieve greater efficiency in the analysis of the following part of the text.
4. For our experiments we have used ATNSYS [Stock 76], and an Italian grammar with n = 50 (127 arcs) [Cappelli et al. 77]. In this system, search is depth-first and the parser interacts with a heuristic mechanism which orders the arcs according to a probability evaluation. This probability evaluation is dependent on the path which led to the current node and is also a function of the statistical data accumulated during previous analyses of a "coherent" text. The mechanism can be divided into two stages. The first stage consists of the acquisition of statistical data, i.e. the frequency, for each arc exiting from a node, of the passages across that arc, in relation to the arc of arrival: for each arriving arc there are as many counters as there are exiting arcs.
[Fig. 1: a state S with arriving arcs a and b and exiting arcs 1 and 2; each exiting arc carries one frequency counter per arriving arc, e.g. f(a)=x, f(b)=y on arc 1 and f(a)=w, ... on arc 2.]
In this way, in Fig. 1 arc 1 has been crossed x times coming from a and y times coming from b. In the second stage, during parsing, in state S, if coming from a and w > x, arc 2 is tried first.
4.1 Thus, a first evaluation of the linguistic choices made is provided by the set of probability values associated to each arc. These figures can to some extent describe the "style" of any "coherent" text analysed. (For this one should also take into account the different linguistic significance of each arc. In fact a CAT or PUSH arc directly corresponds to a certain linguistic component, while a JUMP or VIRT arc occurs in relation to the technique by which the network has been built, the linguistic theory adopted, and other variables.)
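In code, the two-stage mechanism amounts to little more than a table of counters; the sketch below is our reconstruction (state and arc names are invented, and the system's path-dependent probability evaluation is reduced here to a per-state lookup keyed by the arc of arrival):

from collections import defaultdict

class ArcReorderer:
    def __init__(self):
        # counts[state][arriving_arc][exiting_arc] -> crossing frequency
        self.counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

    def record(self, state, arriving_arc, exiting_arc):
        # Stage 1: statistics acquisition during previous analyses
        self.counts[state][arriving_arc][exiting_arc] += 1

    def order_arcs(self, state, arriving_arc, exiting_arcs):
        # Stage 2: during parsing, try the exiting arcs of `state` in
        # decreasing order of frequency relative to the arc of arrival
        freq = self.counts[state][arriving_arc]
        return sorted(exiting_arcs, key=lambda arc: freq[arc], reverse=True)

# Mirroring Fig. 1: coming from arc a, arc 1 was crossed x=3 times and
# arc 2 w=5 times, so arc 2 is tried first when we arrive via a.
reorderer = ArcReorderer()
for _ in range(3):
    reorderer.record("S", "a", "arc1")
for _ in range(5):
    reorderer.record("S", "a", "arc2")
print(reorderer.order_arcs("S", "a", ["arc1", "arc2"]))   # ['arc2', 'arc1']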
4.2 The second part of the mechanism, the dynamic reordering of the arcs, coincides with a reordering of the comprehension strategies. In this way, a matrix can be associated to each node, giving the order of the strategies for each arc in arrival. For each text T, there is a set of strategies S_T ordered as described above. While the analysis of the probability values for distinct texts T and T' can give global indications of their linguistic characteristics, if we focus on the comprehension of the sentence, it is more meaningful to give evaluations in relation to the sets of strategies, S_T and S_T', which are selected. Fig. 2 shows, for some nodes, a comparison between the orders of the arcs for the first 11 sentences from two texts, a science fiction novel (SFN, upper boxes) and a handbook of food chemistry (FC, lower boxes). The arc numbers are referred to the order in the original network. The figures which appear after the - in the heading indicate the number of parses for each sentence. An empty box indicates the same order as that shown in the previous box.
[Fig. 2: for a sample of network states, the arc orderings induced by the first 11 sentences of each of the two texts (SFN, upper boxes; FC, lower boxes).]
5.1 It is to be expected that this mechanism, in so far as it introduces a heuristic, will increase the efficiency of the system used for the linguistic analysis. The results of our experiments so far confirm this. This improved efficiency can be measured in three ways:
a) locally, in terms of the computational load, due to non-determinism, which is saved in each node. In fact, by some experiments, it is possible to quantify the computational load of each type of arc. The computational load of a node is then a linear combination of these values and one can compare it with the actual load determined by the sequence of arcs attempted in that point after the reordering.
b) in terms of an overall reduction in computing time;
c) in terms of penetrance, i.e. the ratio between the number of choices which actually lead to a solution and the total number of choices made.
5.2 If T is a text containing r sentences, the average penetrance will be P_T = (1/r) * sum over s of P(s), where s stands for each of the sentences in T. If T is analysed using the set of strategies chosen for a different text, T', then the penetrance is, on average, no greater than with S_T. In our experiments, for instance, the average penetrance for the first text (SFN) parsed with its own strategies (S_SFN) is P(S_SFN, SFN) = 0.52, while parsed with the strategies of the second text (S_FC) it is P(S_FC, SFN) = 0.39. We have attempted to evaluate experimentally the relationship between the difference of the average penetrances, which we call discrepancy, and the distance between two sets of strategies. However we think we need more experimentation before formalizing this relationship. Returning to our science fiction novel, the discrepancy between its own set of strategies and the one inferred from the food chemistry text is 0.52 - 0.39 = 0.13.
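The bookkeeping behind these figures is straightforward; the following sketch is ours (the per-sentence choice counts would come from the parser's trace):

def penetrance(successful_choices, total_choices):
    # ratio of choices that actually lead to a solution to all choices made
    return successful_choices / total_choices

def average_penetrance(per_sentence_choices):
    # per_sentence_choices: one (successful, total) pair per sentence of T
    return sum(penetrance(s, t) for s, t in per_sentence_choices) / len(per_sentence_choices)

def discrepancy(avg_with_own_strategies, avg_with_other_strategies):
    # difference of the average penetrances obtained with the two strategy sets
    return avg_with_own_strategies - avg_with_other_strategies

# With the figures reported above for the science fiction novel:
print(round(discrepancy(0.52, 0.39), 2))   # 0.13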
In addition to the definition of a heuristic mechanism which is capable of in~rovinE the efficiency of natural language processing, and which can be evaluated as described above, our research aims at providing a means to chsracterise a text by evaluating the ~ramr~atical choices made by the author while expressing his argument. We are also attemptin~ to tako into account the expectations of the listener. In our opinion, the listener's expectations are not limited to the argument of the discourse but are also related to the way in which the argument is expressed; this is the equivalent of the choice of a sdb-grammar [Kittredge 7~] We intend to verify the existence of such expectations not only in literature or x~hen listening to long speeches, but also in dialogue. References I. Cappelll A., Ferrsri G., Horetti L., Prodanof I., S~ock 0.= "An Experimental ATN Parser for Italian Texts" Technical Report. LLC-CNR. Pisa 1977. 2. Kaplsn R.- "Augmented Transition Networks as Psychological t*~dels of Sentence Comprehension" Artificial Intelligence 3 1972. Amsterdam - flew York - Oxford. 3. 8ayes P., Reddy R. - "An anatomy of Graceful Interaction in Spoken and written ~n~chine Communication', C~-CS-79-144, Pittsburgh PA, 1979. 4. Kittredge g.- *Textual Cohesion Within Sublanguage.s: Implications for Automatic Analysis and Synthesis*, COLIN~ 78, ~ergen, 1978. 5. Stevens A., Rumelhart D.- "Errors in ReadlnR:An Analysis Using an Augmented Transition Network Hodel of Grammar" in Horman D., Rumelhart D. eds., Explorations in Cognition, Freeman. S.Francisco, 1975, pp. 136-155. 6. Stock o. - "ATN~YS: Un sisteme per l*analisi grammaticale automatics delle lingue naturali', NI-R76-29, IEI, Pisa, 1976. 115
ON THE EXISTENCE OF PRIMITIVE MEANING UNITS Sharon C. Salveter Computer Science Department SUNY Stony Brook Stony Brook, N.Y. 11794 ABSTRACT Knowledge representation schemes are either based on a set of primitives or not. The decision of whether or not to have a primitive-based scheme is crucial since it affects the knowledge that is stored and how that knowledge may be processed. We suggest that a knowledge representation scheme may not initially have primitives, but may evolve into a prlmltive-based scheme by inferring a set of primitive meaning units based on previous experience. We describe a program that infers its own primitive set and discuss how the inferred primitives may affect the organization of existing information and the subsequent incorporation of new information. i. DECIDING HOW TO REPRESENT KNOWLEDGE A crucial decision in the design of a knowledge repre- sentation is whether to base it on primitives. A prim- itive-based scheme postulates a pre-defined set of mean- ing structures, combination rules and procedures. The primitives may combine according to the rules into more complex representational structures, the procedures interpret what those structures mean. A primltive-free scheme, on the other hand, does not build complex struc- tures from standard building blocks; instead, informa- tion is gathered from any available source, such as input and information in previously built meaning structures. A hybrid approach postulates a small set of pro-defined meaning units that may be used if applicable and con- venient, but is not limited to those units. Such a representation scheme is not truly prlmitive-based since the word "primitive" implies a complete set of pre-deflned meaning units that are the onl 7 ones avail- able for construction. However, we will call this hy- brid approach a primitive-based scheme, since it does postulate some pro-defined meaning units that are used in the same manner as primitives. 2. WHAT IS A PRIMITIVE? All representation systems must have primitives of some sort, and we can see different types of primitives at different levels. Some primitives are purely structural and have little inherent associated semantics. That is, the primitives are at such a low level that there are no semantics pre-deflned for the primitives other than how they may combine. We call these primitives struc- tural primitives. On the other hand, semantic primi- tives have both structural and semantic components. The structures are defined on a higher level and come with pre-attached procedures (their semantics) that indicate what they "mean," that is, how they are to be meaningfully processed. What makes primitives semantic is this association of procedures with structures, since the procedures operating on the structures give them meaning. In a primitive-based scheme, we design both a set of structures and their semantics to describe a specific environment. There are two problems with pre-defining primitives. First, the choice of primitives may be structurally inadequate. That is, they may limit what can be repre- sented. For example, if we have a set of rectilinear primitives, it is difficult to represent objects in a sphere world. The second problem may arise even if we have a structurally adequate set of primitives. I_n this case the primitives may be defined on too low a level to be useful. For example, we may define atoms as our primitives and specify how atoms interact as their semantics. 
Now we may adequately describe a rubber ball structurally, hut we will have great difficulty describ- ing the action of a rolling ball. We would like a set of semantic primitives at a level both structurally and semantically appropriate to the world we are describing. 3. INFERRING AN APPROPRIATE PRIMITIVE SET Schank [1972] has proposed a powerful primitive-based knowledge representation scheme called conceptual dependency. Several natural language understanding programs have been written that use conceptual depend- ency as their underlying method of knowledge represen- tation. These programs are among the most successful at natural language understanding. Although Schank does not claim that his primitives constitute the only possible set, he does claim that some set of primitives is necessary in a general knowledge representation scheme. Our claim is that any advanced, sophisticated or rich memory is likely to be decomposable into primitives, since they seem to be a reasonable and efficient method for storing knowledge. However, this set of after-the- fact primitives need not be pre-defined or innate to a representation scheme; the primitives may be learned and therefore vary depending on early experiences. We really have two problems: inferring from early experiences a set of structural primitives at an appro- priate descriptive level and learning the semantics to associate with these structural primitives. In this paper we shall only address the first problem. Even though we will not address the semantics attachment task, we will describe a method that yields the minimal structural units with which we will want to associate semantics. We feel that since the inferred structural primitives will be appropriate for describing a par- titular environment, they will have appropriate seman- tics and that unlike pro-defined primitives, these learned primitives are guaranteed to be at the appro- priate level for a given descriptive task. Identify- ing the structural primitives is the first step (prob- ably a parallel step) in identifylng semantic primi- tives, which are composed of structural units and associated procedures that 81ve the structures meaning. This thesis developed while investigating learning strategies. Moran [Salveter 1979] is a program that learns frame-like structures that represent verb mean- ings. We chose a simple representative frame-like knowledge representation for Moran to learn. We chose a primitive-free scheme in order not to determine the level of detail at which the world must be described. As Moran learned, its knowledge base, the verb world, evolved from nothing to a rich interconnection of frame structures that represent various senses of different root verbs. When the verb world was "rich enough" (a heuristic decision), Moran detected substructures, which we call building blocks, that were frequently used in the representations of many verb senses across root verb boundaries. These building blocks can be used as after-the-fact primitives. The knowledge representation scheme thus evolves from a primitive- free state to a hybrid state. Importantly, the build- ing blocks are at the level of description appropriate 13 Co how the world was described to Moran. Now Mor~ may reorganize the interconnected frames that make up the verb world with respect co the building blocks. This reorganizaclon renulcs in a uniform identification of the co--alleles and differences of the various meanings of different root: verbs. 
As learning continues, the new knowledge incorporated into the verb world will also be stored, as much as possible, with respect to the building blocks; when processing subsequent input, Moran first tries to use a combination of the building blocks to represent the meaning of each new situation it encounters. A set of building blocks, once inferred, need not be fixed forever; the search for more building blocks may continue as the knowledge base becomes richer. A different, "better," set of building blocks may be inferred later from the richer knowledge and all knowledge reorganized with respect to them. If we can assume that initial inputs are representative of future inputs, subsequent processing will approach that of primitive-based systems.

4. AN OVERVIEW OF MORAN
Moran is able to "view" a world that is a room; the room contains people and objects. Moran has pre-defined knowledge of the contents of the room. For example, it knows that lamps, tables and chairs are all types of furniture, Figaro is a male, Ristin is a female, Ristin and Figaro are human. As input to a learning trial, Moran is presented with: 1) a snapshot of the room just before an action occurs, 2) a snapshot of the room just after the action is completed, and 3) a parsed sentence that describes the action that occurred in the two-snapshot sequence. The learning task is to associate a frame-like structure, called a Conceptual Meaning Structure (CMS), with each root verb it encounters. A CMS is a directed acyclic graph that represents the types of entities that participate in an action and the changes the entities undergo during the action.
The CMSs are organized so that the similarities among various senses of a given root verb are explicitly represented by sharing nodes in a graph. A CMS is organized into two parts: an arguments graph and an effects graph. The arguments graph stores cases and case slot restrictions; the effects graph stores a description of what happens to the entities described in the arguments graph when an action "takes place." A simplified example of a possible CMS for the verb "throw" is shown in Figure 1. Sense 1, composed of argument and effect nodes labelled A, W and X, can represent "Mary throws the ball." It shows that during sense 1 of the action "throw," a human agent remains at a location while a physical object changes location from where the Agent is to another location. The Agent changes from being in a state of physical contact with the Object to not being in physical contact with it. Sense 2 is composed of nodes labelled A, B, W and Y; it might represent "Figaro throws the ball to Ristin." Sense 3, composed of nodes labelled A, B, C, W, X and Z, could represent "Sharon threw the terminal at Raphael." Moran infers a CMS for each root verb it encounters. Although similarities among different senses of the same root verb are recognized, similarities are not recognized across CMS boundaries; true synonyms might have identical graphs, but Moran would have no knowledge of the similarity.
[Figure 1. A simplified CMS for "throw": an arguments graph (nodes A, B, C, carrying case slot restrictions such as AGENT: human, OBJECT: physobj, and location/recipient cases) and an effects graph (nodes W, X, Y, Z), with nodes shared across senses 1-3. The recoverable effects include W: AGENT AT C1 --> AGENT AT C1, OBJECT AT C1 --> OBJECT AT C2 (senses 1, 2, 3); X: AGENT PHYSCONT OBJECT --> null; Y: AGENT PHYSCONT OBJECT --> INDOBJ PHYSCONT OBJECT.]
Similarities among verbs that are close in meaning, but not synonyms, are not represented; the fact that "move" and "throw" are related is not obvious to Moran.

5. PRELIMINARY RESULTS
A primitive meaning unit, or building block, should be useful for describing a large number of different meanings. Moran attempts to identify those structures that have been useful descriptors. At a certain point in the learning process, currently arbitrarily chosen by the human trainer, Moran looks for building blocks that have been used to describe a number of different root verbs. This search for building blocks crosses CMS boundaries and occurs only when memory is rich enough for some global decisions to be made. Moran was presented with twenty senses of four root verbs: move, throw, carry and buy. Moran chose the following effects as building blocks:
1) Agent (human) AT Case1 (location) --> Agent (human) AT Case1 (location)
   * a human agent remains at a location *
2) Agent (human) AT Case1 (location) --> Agent (human) AT Case2 (location)
   * a human agent changes location *
3) Object (physicalobj) AT Case1 (location) --> Object (physicalobj) AT Case2 (location)
   * a physical object changes location *
4) Agent (human) PHYSICALCONTACT Object (physicalobj) --> Agent (human) PHYSICALCONTACT Object (physicalobj)
   * a human agent remains in physical contact with a physical object *
Since Moran has only been presented with a small number of verbs of movement, it is not surprising that the building blocks it chooses describe Agents and Objects moving about the environment and their interaction with each other. A possible criticism is that the chosen building blocks are artifacts of the particular descriptions that were given to Moran. We feel this is an advantage rather than a drawback, since Moran must assume that the world is described to it on a level that will be appropriate for subsequent processing.
In Schank's conceptual dependency scheme, verbs of movement are often described with PTRANS and PROPEL. It is interesting that some of the building blocks Moran inferred seem to be subparts of the structures of PTRANS and PROPEL. For example, the conceptual dependency for "X throw Z at Y" is:
[CD diagram: X <=> PROPEL with object Z and a directive case to Y from X]
where X and Y are humans and Z is a physical object. We see the object, Z, changing from the location of X to that of Y. Thus, the conceptual dependency subpart:
[CD subpart: the change of Z's location, to Y's location from X's location]
appears to be approximated by building block #3, where the Object changes location. Moran would recognize that the location change is from the location of the Agent to the location of the indirect object by the interaction of building block #3 with other building blocks and effects that participate in the action description. Similarly, the conceptual dependency for "X move Z to W" is:
[CD diagram: Z changing to location W]
where X and Z have the same restrictions as above and W is a location. Again we see an object changing location; a common occurrence in movement and a building block Moran identified.

6. CONCLUDING REMARKS
We are currently modifying Moran so that the identified building blocks are used to process subsequent input. That is, as new situations are encountered, Moran will try to describe them as much as possible in terms of the building blocks. It will be interesting to see how these descriptions differ from the ones Moran would have constructed if the building blocks had not been available. We shall also investigate how the existence of the building blocks affects processing time.
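The building-block search described in Section 5 above is essentially a count of shared effect substructures across CMSs. A minimal sketch of that idea follows (ours; the effect strings, the threshold, and the flat representation of a CMS as a set of effects are all simplifications):

from collections import defaultdict

def find_building_blocks(cms_effects, min_verbs=3):
    """cms_effects maps a root verb to the set of effect descriptions
    (before --> after) appearing in its CMS; an effect counts as a
    building block if it describes at least `min_verbs` root verbs."""
    verbs_using = defaultdict(set)
    for verb, effects in cms_effects.items():
        for effect in effects:
            verbs_using[effect].add(verb)
    return {effect: verbs for effect, verbs in verbs_using.items()
            if len(verbs) >= min_verbs}

example = {
    "move":  {"Object(physobj) AT Case1 --> Object(physobj) AT Case2",
              "Agent(human) AT Case1 --> Agent(human) AT Case2"},
    "throw": {"Object(physobj) AT Case1 --> Object(physobj) AT Case2",
              "Agent(human) AT Case1 --> Agent(human) AT Case1"},
    "carry": {"Object(physobj) AT Case1 --> Object(physobj) AT Case2",
              "Agent(human) AT Case1 --> Agent(human) AT Case2"},
}
print(find_building_blocks(example))
# only the 'a physical object changes location' effect is shared by all three verbs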
As a cognitive model, inferred primitives may account for the effects of "bad teaching," that is, an unfor- tunate sequence of examples of a new concept. If ex- amples are so disparate that few building blocks exist, or so unrepresentative that the derived building blocks are useless for future inputs, then the after-the-fact primitives will impede efficient representation. The knowledge organization will not tie together what we have experienced in the past or predict that we will experience in the future. Although the learning pro- gram could infer more useful building blocks at a later timeg that process is expensive, time-consuming and may be unable to replace information lost because of poor building blocks chosen earlier. In general, however, we must assume that our world is described at a level appropriate to how we must process it. If that is the case, then inferring a set of primitives is an advanta- geous strateEy. REFERENCES [Salveter 1979] Inferring conceptual graphs. Co~nltive Science, 1979, 3_, 141-166. [Schank 1972] Conceptual Dependency: a theory of natural language understanding. Cobnitive Psychology, 1972, ~, 552-631. 15
PHRAN - A Knowledge-Based Natural Language Understander
Robert Wilensky and Yigal Arens
University of California at Berkeley

Abstract
We have developed an approach to natural language processing in which the natural language processor is viewed as a knowledge-based system whose knowledge is about the meanings of the utterances of its language. The approach is oriented around the phrase rather than the word as the basic unit. We believe that this paradigm for language processing not only extends the capabilities of other natural language systems, but handles those tasks that previous systems could perform in a more systematic and extensible manner. We have constructed a natural language analysis program called PHRAN (PHRasal ANalyzer) based on this approach. This model has a number of advantages over existing systems, including the ability to understand a wider variety of language utterances, increased processing speed in some cases, a clear separation of control structure from data structure, a knowledge base that could be shared by a language production mechanism, greater ease of extensibility, and the ability to store some useful forms of knowledge that cannot readily be added to other systems.

1.0 INTRODUCTION
The problem of constructing a natural language processing system may be viewed as a problem of constructing a knowledge-based system. From this orientation, the questions to ask are the following: What sort of knowledge does a system need about a language in order to understand the meaning of an utterance or to produce an utterance in that language? How can this knowledge about one's language best be represented, organized and utilized? Can these tasks be achieved so that the resulting system is easy to add to and modify? Moreover, can the system be made to emulate a human language user?
Existing natural language processing systems vary considerably in the kinds of knowledge about language they possess, as well as in how this knowledge is represented, organized and utilized. However, most of these systems are based on ideas about language that do not come to grips with the fact that a natural language processor needs a great deal of knowledge about the meaning of its language's utterances. Part of the problem is that most current natural language systems assume that the meaning of a natural language utterance can be computed as a function of the constituents of the utterance. The basic constituents of utterances are assumed to be words, and all the knowledge the system has about the semantics of its language is stored at the word level (Birnbaum et al., 1979) (Riesbeck et al., 1975) (Wilks, 1975) (Woods, 1970). However, many natural language utterances have interpretations that cannot be found by examining their components. Idioms, canned phrases, lexical collocations, and structural formulas are instances of large classes of language utterances whose interpretation requires knowledge about the entire phrase independent of its individual words (Becker, 1975) (Mitchell, 1971).
We propose as an alternative a model of language use that comes from viewing language processing systems as knowledge-based systems that require the representation and organization of large amounts of knowledge about what the utterances of a language mean. This model has the following properties:
1. It has knowledge about the meaning of the words of the language, but in addition, much of the system's knowledge is about the meaning of larger forms of utterances.
2.
This knowledge is stored in the form of pattern-concept pairs. A pattern is a phrasal cons~ruc~ oI varyxng degrees of specificity. A concept is a notation that represents the meaning of the phrase. Together, this pair associates different forms of utterances with their meanings. 3. The knowledge about language contained in the system is kept separate from the processing strategies that apply this knowledge to the understanding and production tasks. 4. The understanding component matches incoming utterances against known patterns, and then uses the concepts associated with the matched patterns to represent the utterance's meaning. 5. The production component expresses itself b[ lookxng for concepts in the caza oase ~net match the concept it wishes to express. The phrasal patterns associated with these concepts are used to generate the natural language utterance. 6. The data-base of pattern-concept pairs is shared by both the unaerstanding mechanism and the mechanism of language production. 7. Other associations besides meanings may be kept along with a phrase. For example, a description of the contexts in which the phrase is an appropriate way to express its meaning may be stored. A erson or situation strongly associated wi~h the phrase may also be tied to it. PHRAN CPHRasal ANalyzer) is a natural language understanding system based on this view of language use. PNNAN reads English text and produces structures that represent its meaning. As it reads an utterance, PHRAN searches its knowledge base of pattern-conceptpairs for patterns that best interpret the text. The concept portion of these pairs is then used to produce the meaning representation for the utterance. PHRAN has a number of advantages over previous systems: I. The system is able to handle phrasal language units that are awkwardly handled by previous systems but which are found with great frequency in ordinary speech and common natural language texts. 2. It is simpler to add new information to the system because control and representation are kept separate. To extend the system, new pattern-concept pairs are simply added to the data-base. 3. The knowledge base used by PHRAN is declarative, and is in principle sharable by a system for language productioD (Such a mechanism is n~w under construction). Thus adding xnxorma~lon ~o the base should extend the capabz]ities of both mechanisms. 4. Because associations other than meanings can be stored along with phrasal unzts, the identification of a phrase can provide contextual clues not otherwise available to subsequent processing mechanisms. 5. The model seems to more adequately reflect the psychological reality of human language use. 2.0 pHRASAL LANGUAGE CONSTRUCTS By the term "phrasal language constructs" we refer to those language units of which the language user has s~ecific knowledge. We cannot present our entire classification oF these constructs here. However, our phrasal constructs range greatly in flexibility. For example, fixed expressions like "by and large , the Big Apple (meaning N.Y.C.), and lexical collocations such as "eye dro~per" and "weak safety" allow little or no modificatxonA idioms like "kick the bucket" and "bury the hatchet allow the verb in them to s~pear in various forms- discontinuous dependencies like look ... up" permi~ varying positional relationships of their constituents. All these constructs are phrasal in that the language user must know the meaning of the construct as a whole In order to use it correctly. 
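A rough rendering of the control loop just described (and elaborated below) is sketched here. It is our simplification, not PHRAN's actual code: pattern suggestion is replaced by trying every known pattern against the end of the *CONCEPT* list, and terms are plain dictionaries.

def analyze(words, lexicon, pattern_concept_pairs):
    """lexicon: word -> term properties (a dict).
    pattern_concept_pairs: (pattern, build) pairs, where pattern is a list
    of predicates over terms and build(matched_terms) returns the new term
    that replaces the terms the pattern matched."""
    concepts = []                                   # the *CONCEPT* list
    for word in words:
        concepts.append(dict(lexicon[word], word=word))
        reduced = True
        while reduced:                              # keep matching while possible
            reduced = False
            for pattern, build in pattern_concept_pairs:
                n = len(pattern)
                if len(concepts) >= n and all(
                        test(term) for test, term in zip(pattern, concepts[-n:])):
                    matched = concepts[-n:]
                    del concepts[-n:]               # matched terms are removed...
                    concepts.append(build(matched)) # ...and replaced by the concept
                    reduced = True
                    break
    return concepts

lexicon = {
    "John": {"class": "person", "cd": "JOHN1"},
    "ate": {"class": "verb", "root": "eat"},
    "lunch": {"class": "food", "cd": "LUNCH1"},
}
eat_pair = (
    [lambda t: t.get("class") == "person",
     lambda t: t.get("root") == "eat",
     lambda t: t.get("class") == "food"],
    lambda m: {"class": "sentence",
               "cd": ("INGEST", ("ACTOR", m[0]["cd"]), ("OBJECT", m[2]["cd"]))},
)
print(analyze("John ate lunch".split(), lexicon, [eat_pair]))
# -> a single term whose CD form is (INGEST (ACTOR JOHN1) (OBJECT LUNCH1))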
In the most general case, a phrase may express the usage of a word sense. For example, to express one usage of the verb kick, the phrase "<person> <kick-form> <object>" is used. This denotes a person followed by some verb form inyolving kick (e.g., kick, kicked, would ~ave kicked") followe~"~ some utterance ueno~ing an oojec~. Our notion of a phrasal language construct is similar to a structural formula (Fillmore, 1979)- However, our criterion for dlr~trl'F/~ing whether a set of forms should 117 be accomodated by the same phrasal pattern is essentially a conceptual one. Since each phrasal pattern in PHRAN is associated with a concept, if the msenlngs of phrases are different, they should be matched by different patterns. If the surface structure of the phrases is similar and they seem to mean the same thing, %hen they should be accomodated by one pattern. 3.0 PHRAN P~AN (PHRasal ANalyzer) is an English language understanding system which integrates both generative and non-productive language abilities to provide a relatively flexzble and extenstble natural language understanding facility. While PHRAN does have knowledge about individual words, it is not limited to such knowledge, nor ms its processing capability constrained by a word-based bias. Here are some examples of sentences PHRAN can understand: e 0i%men are encouraged by the amount of oil discovered in the Baltimore Canyon, an undersea trough 100 m$1es off the shore of New Jersey. (Newsweek, Feb 1980) * The young man was told to drive quickly over to ~erkeley. * If John gives Bill the big apple then Bill won't be hungry. * Wills will drive Bill to The Big Apple if she is given twenty five dollars. * If Mary brings John we'll go to a Chinese restaurant. * Wills gives me a headache. (The previous sentences are analyzed by an uncompiled version of PHRAN on the DEC-20/4Q system at UC Eerkeley in from 2 to 9 seconds of CPU time). At the center of PHRAN is a knowledge base of phrasal patterns. These include literal strings such as "so's your old man"; patterns such as "<nationality> restaurant", and very ~eneral phrases such as "<person> <give> <person> <object> . Associated with each phrasal pattern is a conceptual template. A conceptual template is a piece of meanln~ representation with possible references to pieces of the associated phrasal pattern. For example, associated with the phrasal pattern "<nationality> restaurant" is the conceptual template denoting a restaurant that serves <nationality> type food; associated with the phrasal pattern "<person~> <give> <personJ> <object>" is the conceptual template that denotes a transfer of possession by <person1> of <object> to <personJ> from <person1>. ~.O HOW PH~AN WORKS ~.1 Overall Algorithm FHRAN is made up of three parts - a database of pattern-concept pairs, a set of comprehension routines, and a routine which suggests appropriate pattern, concept pairs. PHRAN takes as input an English sentence, and as xt reads it from left to right, PHRAN comnares the sentence against patterns from the database. Whenever a matching pattern is found, PHRAN interprets that part of the sentence that matched the pattern as describing the concept associated with the pattern in the pattern-concept pair. 4.1.1 Overview • Of Processing - When PHRAN analyzes a sentence, it reads the words one at a time, from left to right. It does just enough morphological analysis to recognize contractions and "'s s. 
The pattern suggesting routine determines if any new patterns should be tried, and PHRAN checks all the new patterns to see if they agree with that part of the sentence already analyzed, discarding those that don't. A word's meaning is determined simply by its matching a pattern consisting of that literal word. Then a term is formed with the properties specified in the concept associated with the word, and th:s term is added to a list PHRAN maintains. PHRAN checks if the term it just added ~ the list completes or extends patterns that had alread3 been partially matched by the previous terms. If a pattern is completely matched, the terms matching the* pattern are removed and a new term, specified by th, concept part of the nattern-conceDt pair, is formed and replaces the terms the pattern matched. When PHRAN finishes processing one word it reads the next, iterating thls procedure until it reaches the end 118 of e sentence. At this point, it should end up with a single term on its list. This term con%sins the conceptualization representing the meaning of the whole sentence. 4.1.2 Overview Of PHRAN Patterns - A pattern-concept pair consists of a specification of the phrasal unit, an associated concept, and some additional information about how the two are related. When PHRAN instantiates a concept, it creates an item called a term that includes the concept as well as some additional information. A pattern is a sequence of conditions that must hold true for a sequence of terms. A pattern may specify optional terms toq, the place where these may appear, ana what effect (if any) their appearance will have on the properties of the term formea if the pattern is matched. For example, consider the following informal description of one of the patterns suggested by the mention of the verb 'to eat' in certain contexts. { pa;tern to recognize - |<first term: represents a person> • <second term: is an actlve form of EAT> <OPTIONAL third term: represents food>] term to form - (INGEST(ACTOR <first term>) (OBJECT <third term, if present, else FOOD>)) } Notice that the third term is marked as optional. If it is not present in the text, PHRAN will fill'the OBJECT slot with a default representing generic food. 4.1.. ~ Simple Example - The following is a highly simplified example of how PHRAN processes the sentence "John dropped out of school": First the word "John" is read. "John" matches the patter~ consisting of the literal "John", and the concept associated with this pattern causes a term to be formed that represents a noun phrase and a particular male erson named John. No other patterns were suggested. ~his term is added on to *CONCEPTS, the list of terms PHRAN keeps and which will eventually contain the meaning of the sentence. Thus *CONCEPT* looks like < [JORNI - person, NP] > "Dropped" is read next. It matches the literal "dropped", and an appropriate term is formed. The pattern suggesting routine instructs PHRAN to consider %he 'basic pattern associated with the verb 'to drop', which is: I [<person> <DROP> <object>] [ ... I 1 Its initial condition is found to be satisfied by the first term in *CONCE ~PT e -- this fact is stored under that term so that succeeding ones will be checked to see if this partial match continues. The term that was formed after reading "dropped" is now added to the list. *CONCEPT* is now < [JOMNI - person, NP] , [DROP - verb] > PHRAN now checks to see if the pattern stored under the first term matches the term just added to CONCEPT too, and it does. 
This new fact is now stored under the last term.

Next the word "out" is read. The pattern suggestion mechanism is alerted by the occurrence of the verb 'drop' followed by the word 'out', and at this point it instructs PHRAN to consider the pattern

    { [<person> <DROP> "out" "of" <school>] [ ... ] }

The list in *CONCEPT* is checked against this pattern to see if it matches its first two terms, and since that is the case, this fact is stored under the second term. A term associated with 'out' is now added to *CONCEPT*:

    < [JOHN1 - person, NP] , [DROP - verb] , [OUT] >

The two patterns that have matched up to DROP are checked to see if the new term extends them. This is true only for the second pattern, and this fact is stored under the next term. The pattern [<person> <DROP> <object>] is discarded.

Now the word "of" is read. A term is formed and added to *CONCEPT*. The pattern that matched up to OUT is extended by OF, so the pattern is moved to the next term. The word "high" is read and a term is formed and added to *CONCEPT*. Now the pattern under OF is compared against HIGH. It does not satisfy the next condition.

PHRAN reads "school", and the pattern suggestion routine presents PHRAN with two patterns:

1. { ["high" "school"] [representation denoting a school for 10th through 12th graders] }
2. { [<adjective> <noun>] [representation denoting the noun modified by the adjective] }

Both patterns are satisfied by the previous term and this fact is stored under it. The new term is added to *CONCEPT*, now:

    < [JOHN1 - person, NP] , [DROP - verb] , [OUT] , [OF] , [HIGH - adj] , [SCHOOL - school, noun] >

The two patterns are compared against the last term, and both are matched. The last two terms are removed from *CONCEPT*, and the patterns under OF are checked to determine which of the two possible meanings we have should be chosen. Patterns are suggested such that the more specific ones appear first, so that the more specific interpretation will be chosen if all patterns match equally well. Only if the second meaning (i.e., a school that is high) were explicitly specified by a previous pattern would it have been chosen. A term is formed and added to *CONCEPT*, which now contains

    < [JOHN1 - person, NP] , [DROP - verb] , [OUT] , [OF] , [HIGH-SCHOOL1 - school, NP] >

The pattern under OF is checked against the last term in *CONCEPT*. PHRAN finds a complete match, so all the matched terms are removed and replaced by the concept associated with this pattern. *CONCEPT* now contains this concept as the final result:

    < [ ($SCHOOLING (STUDENT JOHN1) (SCHOOL HIGH-SCHOOL1) (TERMINATION PREMATURE)) ] >

4.2 Pattern-Concept Pairs In More Detail

4.2.1 The Pattern

The pattern portion of a pattern-concept pair consists of a sequence of predicates. These may take one of several forms:

1. A word, which will match only a term representing this exact word.
2. A class name (in parentheses), which will match any term representing a member of this class (e.g., "(FOOD)" or "(PHYSICAL-OBJECT)").
3. A pair, the first element of which is a property name and the second a value, which will match any term having the required value of the property (e.g., "(Part-Of-Speech VERB)").

In addition, we may negate a condition or specify that a conjunction or disjunction of several conditions must hold.
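As a hedged sketch, the three kinds of predicates, together with negation, conjunction, and disjunction, might be tested against a term as follows. The term representation (a dictionary holding a word, a set of semantic classes, and other properties) is an assumption made for illustration, not the system's actual structure.

# Hypothetical test of one pattern condition against a term.

def satisfies(condition, term):
    """Return True if `term` meets one pattern condition."""
    kind, value = condition
    if kind == "word":                        # 1. literal word
        return term.get("word") == value
    if kind == "class":                       # 2. semantic class membership
        return value in term.get("classes", set())
    if kind == "property":                    # 3. (property, value) pair
        prop, wanted = value
        return term.get(prop) == wanted
    if kind == "not":                         # negated condition
        return not satisfies(value, term)
    if kind == "and":                         # conjunction of conditions
        return all(satisfies(c, term) for c in value)
    if kind == "or":                          # disjunction of conditions
        return any(satisfies(c, term) for c in value)
    raise ValueError("unknown condition kind: " + kind)

# Example: a term built for "Bill" satisfies a class test and a property test.
bill = {"word": "Bill", "classes": {"PERSON"}, "part_of_speech": "NOUN"}
assert satisfies(("class", "PERSON"), bill)
assert satisfies(("property", ("part_of_speech", "NOUN")), bill)
assert satisfies(("not", ("class", "FOOD")), bill)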
The following is one of the patterns which may be suggested by the occurrence of the verb 'give' in an utterance:

    [(PERSON) (ROOT GIVE) (PERSON) (PHYSOB)]

4.2.1.1 Optional Parts

To indicate the presence of optional terms, a list of pattern-concept pairs is inserted into the pattern at the appropriate place. These pairs have as their first element a sub-pattern that will match the optional terms. The second part describes how the new term to be formed, if the main pattern is found, should be modified to reflect the existence of the optional sub-pattern. The concept corresponding to the optional part of a pattern is treated in a form slightly different from the way we treat regular concept parts of pattern-concept pairs. As usual, it consists of pairs of expressions. The first of each pair will be placed as is at the end of the properties of the term to be formed, and the second will be evaluated first and then placed on that list. For example, another pattern suggested when 'give' is seen is the following:

    [(PERSON) (ROOT GIVE) (PHYSOB) ([(TO (PERSON)) (TO (OPT-VAL 2 CD-FORM))])]

The terms of this pattern describe a person, the verb give, and then some physical object. The last term describes the optional terms, consisting of the word to followed by a person description. Associated with the pattern is a concept part that specifies what to do with the optional part if it is there. Here it specifies that the second term in the optional pattern should fill in the TO slot in the conceptualization associated with the whole pattern.

This particular pattern need not be a separate pattern in PHRAN from the one that looks for the verb followed by the recipient followed by the object transferred. We often show patterns without all the alternatives that are possible for expositional purposes. Sometimes it is simpler to write the actual patterns separately, although we attach no theoretical significance to this disposition.

4.2.2 The Concept

When a pattern is matched, PHRAN removes the terms that match it from *CONCEPT* and replaces them with a new term, as defined by the second part of the pattern-concept pair. For example, here is a pattern-concept pair that may be suggested when the verb 'eat' is encountered:

    ([(PERSON) (ROOT EAT) ([((FOOD)) (FOOD (OPT-VAL 1 CD-FORM))])]
     [P-O-S 'SENTENCE
      CD-FORM '(INGEST (ACTOR ?ACTOR) (OBJECT ?FOOD))
      ACTOR (VALUE 1 CD-FORM)
      FOOD 'FOOD])

The concept portion of this pair describes a term covering an entire sentence, and whose meaning is the action of INGESTing some food (Schank, 1975). The next two descriptors specify how to fill in variable parts of this action. The expression (VALUE n prop) specifies the 'prop' property of the n'th term in the matched sequence of the pattern (not including optional terms). OPT-VAL does the same thing with regard to a matched optional sub-pattern. Thus the concept description above specifies that the actor of the action is to be the term matching the first condition. The object eaten will be either the default concept food or, if the optional sub-pattern was found, the term corresponding to this sub-pattern. (A sketch of this slot-filling convention is given at the end of this subsection.)

Sometimes a slot in the conceptualization can be filled by a term in a higher level pattern of which this one is an element. For example, when analyzing "John wanted to eat a cupcake" a slight modification of the previous pattern is used to find the meaning of "to eat a cupcake". Since no subject appears in this form, the higher level pattern specifies where it may find it.
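Here is a hedged sketch of the slot-filling convention just described for the 'eat' pair. VALUE and OPT-VAL are modeled simply as indexing into the matched terms; the term format, the CD-style tuples, and the function name are assumptions made for this illustration, not PHRAN's actual machinery.

# Hypothetical instantiation of the concept half of the "eat" pattern-concept pair.

def instantiate_eat_concept(matched_terms, optional_terms=None):
    """matched_terms: the terms that matched (PERSON) (ROOT EAT);
    optional_terms: the match for the optional (FOOD) sub-pattern, if present."""
    actor = matched_terms[0]["cd_form"]            # analogue of (VALUE 1 CD-FORM)
    if optional_terms is not None:
        food = optional_terms[0]["cd_form"]        # analogue of (OPT-VAL 1 CD-FORM)
    else:
        food = "FOOD"                              # default: the generic concept of food
    return {"part_of_speech": "SENTENCE",
            "cd_form": ("INGEST", ("ACTOR", actor), ("OBJECT", food))}

john  = {"word": "John", "cd_form": "JOHN1", "classes": {"PERSON"}}
ate   = {"word": "ate", "root": "EAT", "cd_form": "EAT", "classes": {"ACTION"}}
apple = {"word": "the apple", "cd_form": "APPLE1", "classes": {"FOOD"}}

print(instantiate_eat_concept([john, ate]))           # OBJECT defaults to generic FOOD
print(instantiate_eat_concept([john, ate], [apple]))  # OBJECT filled from the optional match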
Accordingly, the pattern associated with "want" specifies where the subject of the embedded clause is to be found; it looks like the following:

    { [<person> <WANT> <infinitive>] [ ... ] }

This specifies that the subject of the clause following want is the same as the subject of want.

4.3 Pattern Manipulation In More Detail

4.3.1 Reading A Word

When a word is read, PHRAN compares the patterns offered by the pattern suggesting routine with the list *CONCEPT* in the manner described in the example in section 4.1.3. It discards patterns that conflict with *CONCEPT* and retains the rest. Then PHRAN tries to determine which meaning of the word to choose, using the "active" patterns (those that have matched up to the point where PHRAN has read). It checks whether there is a particular meaning that will match the next slot in some pattern or, if no such definition exists, whether there is a meaning that might be the beginning of a sequence of terms whose meaning, as determined via a pattern-concept pair, will satisfy the next slot in one of the active patterns. If this is the case, that meaning of the word is chosen. Otherwise PHRAN defaults to the first of the meanings of the word.

A new term is formed, and if it satisfies the next condition in one of these patterns, the appropriate pattern is moved to the pattern-list of the new term. If the next condition in the pattern indicates that the term specified is optional, then PHRAN checks for these optional terms, and if it is convinced that they are not present, it checks whether the new term satisfies the condition following the optional ones in the pattern.

4.3.2 A Pattern Is Matched

When a pattern has been matched completely, PHRAN continues checking all the other patterns on the pattern-list. When it has finished, PHRAN will take the longest pattern that was matched and will consider the concept of its pattern-concept pair to be the meaning of the sequence. If there are several patterns of the same length that were matched, PHRAN will group all their meanings together. New patterns are suggested and a disambiguation process follows, exactly as in the case of a new word being read.

For example, the words "the big apple", when recognized, will have two possible meanings: one being a large fruit, the other being New York City. PHRAN will check the patterns active at that time to determine if one of these two meanings satisfies the next condition in one of the patterns. If so, then that meaning will be chosen. Otherwise 'a large fruit' will be the default, as it is the first in the list of possible meanings.

4.4 Adverbs And Adverbial Phrases

In certain cases there is need for slightly modified notions of pattern and concept, the most prominent examples being adverbs and adverbial phrases. Such phrases are also recognized through the use of patterns. However, upon recognizing an adverb, PHRAN searches within the active patterns for an action that it can modify. When such an action is found, the concept part of the pair associated with the adverb is used to modify the concept of the original action. Adverbs such as "quickly" and "slowly" are currently defined and can be used to modify conceptualizations containing various actions. Thus PHRAN can handle constructs like:

John ate slowly.
Quickly, John left the house.
John left the house quickly.
John slowly ate the apple.
John wanted slowly to eat the apple.

Some special cases of negation are handled by specific patterns.
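Returning to the meaning-selection step described in section 4.3.2, here is a hedged sketch of how the expectations of the active patterns might decide among candidate meanings, with the first listed meaning as the fallback. For simplicity the sketch reduces a pattern's next expectation to a set of semantic classes; the term format and the class names are assumptions made only for this illustration.

# Hypothetical expectation-driven choice among candidate meanings,
# as in the "the big apple" example: a large fruit versus New York City.

def choose_meaning(candidate_terms, expected_classes):
    """candidate_terms: possible meanings, most common first.
    expected_classes: semantic classes the active patterns will accept next.
    Prefer a candidate that satisfies some expectation; otherwise default
    to the first candidate."""
    for term in candidate_terms:
        if term["classes"] & expected_classes:
            return term
    return candidate_terms[0]

big_apple_fruit = {"cd_form": "APPLE1", "classes": {"FOOD", "PHYSOB"}}
big_apple_nyc   = {"cd_form": "NEW-YORK-CITY", "classes": {"CITY", "LOCATION"}}

# An active pattern such as "<person> <drive> <person> to <location>" expects a
# LOCATION next, so the New York City reading is selected:
print(choose_meaning([big_apple_fruit, big_apple_nyc], {"LOCATION"})["cd_form"])

# With no relevant expectation, the default (first-listed) meaning wins:
print(choose_meaning([big_apple_fruit, big_apple_nyc], set())["cd_form"])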
The negation of the verb want, for example, is one of the special cases handled by a specific pattern: it is usually interpreted as meaning "want not", so that "Mary didn't want to go to school" means the same thing as "Mary wanted not to go to school". PHRAN therefore contains the specific pattern [<person> <do> "not" <want> <inf-phrase>], which is associated with this interpretation.

4.5 Indexing And Pattern Suggestion

Retrieving the phrasal pattern matching a particular utterance from PHRAN's knowledge base is an important problem that we have not yet solved to our complete satisfaction. We find some consolation in the fact that the problem of indexing a large data base is a necessary and familiar problem for all knowledge-based systems. We have tried two pattern suggestion mechanisms with PHRAN:

1. Keying patterns off individual words or previously matched patterns.
2. Indexing patterns under ordered sequences of cues gotten from the sentence and the phrasal patterns recognized in it.

The first indexing mechanism works, but it requires that any pattern used to recognize a phrasal expression be suggested by some word in it. This is unacceptable because it will cause the pattern to be suggested whenever the word it is triggered by is mentioned. The difficulties inherent in such an indexing scheme can be appreciated by considering which word in the phrase "by and large" should be used to trigger it. Any choice we make will cause the pattern to be suggested very often in contexts where it is not appropriate. In this form, PHRAN's processing roughly resembles ELI's (Riesbeck et al., 1975).

We therefore developed the second mechanism. The pattern-concept pairs of the database are indexed in a tree. As words are read, the pattern suggesting mechanism travels down this tree, choosing branches according to the meanings of the words. It suggests to PHRAN the patterns found at the nodes it has arrived at. The list of nodes is remembered, and when the next word is read the routine continues to branch from them, in addition to starting from the root. In practice, the number of nodes in the list is rather small. For example, whenever a noun phrase is followed by an active form of some verb, the suggesting routine instructs PHRAN to consider the simple declarative forms of the verb. When a noun phrase is followed by the verb 'to be' followed by the perfective form of some verb, the routine instructs PHRAN to consider the passive uses of the last verb. The phrasal pattern that will recognize the expression "by and large" is found at the node reached only after seeing those three words consecutively. In this manner this pattern will be suggested only when necessary. The main problem with this scheme is that it does not lend itself well to allowing contextual cues to influence the choice of patterns PHRAN should try. This is one area where future research will be concentrated.

5.0 COMPARISON TO OTHER SYSTEMS

There are a number of other natural language processing systems that either use some notion of patterns or produce meaning structures as output. We contrast PHRAN with some of these.

An example of a natural language understanding system that produces declarative meaning representations is Riesbeck's "conceptual analyzer" (Riesbeck, 1974). Riesbeck's system (and the various systems that have descended from it) works by attaching routines to individual words. These routines are generally responsible for building pieces of a meaning representation.
When a word is read by the system, the routines associated with that word are used to build up a meaning structure that eventually denotes the meaning of the entire utterance. While our aims are much in the spirit of Riesbeck's analyzer, we believe there are both practical and theoretical difficulties inherent in his approach. For example, in Riesbeck's conceptual analyzer, specific understanding routines are needed for each word known to the system. Thus extending the system's vocabulary requires the creation and debugging of new code. In addition, these routines function only in the understanding process. The knowledge they embody is inaccessible to other mechanisms, in particular, to production procedures. Moreover, because Riesbeck's approach is word-oriented, it is difficult to incorporate phrasal structures into his model. Some word of the phrase must have a routine associated with it that checks for that phrase. At best, this implementation is awkward.

One of the earliest language understanding systems to incorporate phrasal patterns is Colby's PARRY. PARRY is a simulation of a paranoid mental patient that contains a natural language front end (Parkinson et al., 1977). It receives a sentence as input and analyzes it in several separate "stages". In effect, PARRY replaces the input with sentences of successively simpler form. In the simplified sentence PARRY searches for patterns, of which there are two basic types: patterns used to interpret the whole sentence, and those used only to interpret parts of it (relative clauses, for example). For PARRY, the purpose of the natural language analyzer is only to translate the input into a simplified form that a model of a paranoid person may use to determine an appropriate response. No attempt is made to model the analyzer itself after a human language user, as we are doing, nor are claims made to this effect. A system attempting to model human language analysis could not permit several unrelated passes, the use of a transition network grammar to interpret only certain sub-strings in the input, or a rule permitting it to simply ignore parts of the input.

This theoretical shortcoming of PARRY - having separate grammar rules for the complete sentence and for sub-parts of it - is shared by Hendrix's LIFER (Hendrix, 1977). LIFER is designed to enable a database to be queried using a subset of the English language. As is the case for PARRY, the natural language analysis done by LIFER is not meant to model humans. Rather, its function is to translate the input into instructions and produce a reply as efficiently as possible, and nothing resembling a representation of the meaning of the input is ever formed. Of course the purpose of LIFER is not to be the front end of a system that understands coherent texts and which must therefore perform subsequent inference processes. While LIFER provides a workable solution to the natural language problem in a limited context, many general problems of language analysis are not addressed in that context.

SOPHIE (Burton, 1976) was designed to assist students in learning about simple electronic circuits. It can conduct a dialogue with the user in a restricted subset of the English language, and it uses knowledge about patterns of speech to interpret the input. SOPHIE accepts only certain questions and instructions concerning a few tasks. As is the case with LIFER, the language utterances acceptable to the system are restricted to such an extent that many natural language processing problems need not be dealt with, and other problems have solutions appropriate only to this context. In addition, SOPHIE does not produce any representation of the meaning of the input, and it makes more than one pass on the input, ignoring unknown words, practices that have already been criticized.

The augmented finite state transition network (ATN) has been used by a number of researchers to aid in the analysis of natural language sentences (for example, see Woods 1970). However, most systems that use ATNs incorporate one feature which we find objectionable on both theoretical and practical grounds. This is the separation of analysis into syntactic and semantic phases. The efficacy and psychological validity of the separation of syntactic and semantic processing has been argued at length elsewhere (see Schank 1975 for example). In addition, most ATN based systems (for example Woods' LUNAR program) do not produce representations, but rather run queries of a data base.

In contrast to the systems just described, Wilks' English-French machine translator does not share several of their shortcomings (Wilks, 1973). It produces a representation of the meaning of an utterance, and it attempts to deal with unrestricted natural language. The main difference between Wilks' system and the system we describe is that Wilks' patterns are matched against concepts mentioned in a sentence. To recognize these concepts he attaches representations to words in a dictionary. The problem is that this presupposes that there is a simple correspondence between the form of a concept and the form of a language utterance. However, it is the fact that this correspondence is not simple that leads to the difficulties we are addressing in our work. In fact, since the correspondence of words to meanings is complex, it would appear that a program like Wilks' translator will eventually need the kind of knowledge embodied in PHRAN to complete its analysis.

One recent attempt at natural language analysis that radically departs from pattern-based approaches is Rieger and Small's system (Small, 1978). This system uses word experts rather than patterns as its basic mechanism. Their system acknowledges the enormity of the knowledge base required for language understanding, and proposes a way of addressing the relevant issues. However, the idea of putting as much information as possible under individual words is about as far from our conception of language analysis as one can get and, we would argue, would exemplify all the problems we have described in word-based systems.

References

Becker, Joseph D. (1975). The phrasal lexicon. In Theoretical Issues in Natural Language Processing, R. Schank and B. L. Nash-Webber (eds.), Cambridge, Mass.

Birnbaum, L. and Selfridge, M. (1979). Problems in conceptual analysis of natural language. Yale University Department of Computer Science Research Report 168.

Burton, Richard R. (1976). Semantic Grammar: An Engineering Technique for Constructing Natural Language Understanding Systems. BBN Report No. 3453, Dec 1976.

Fillmore, C. J. (1979). Innocence: A Second Idealization for Linguistics. In Proceedings of the Fifth Berkeley Language Symposium, Berkeley, California.

Hendrix, Gary G. (1977). The LIFER Manual: A Guide to Building Practical Natural Language Interfaces. SRI International, AI Center Technical Note 138, Feb 1977.

Mitchell, T. F. (1971). Linguistic "Goings On": Collocations and Other Matters Arising on the Syntactic Record. Archivum Linguisticum 2 (new series), 35-69.

Parkinson, R. C., Colby, K. M., and Faught, W. S. (1977). Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing. Artificial Intelligence 9, 111-134.

Riesbeck, C. K. (1975). Conceptual analysis. In R. C. Schank, Conceptual Information Processing. American Elsevier Publishing Company, Inc., New York.

Riesbeck, C. K. and Schank, R. C. (1975). Comprehension by computer: expectation-based analysis of sentences in context. Yale University Research Report 78.

Schank, R. C. (1975). Conceptual Information Processing. American Elsevier Publishing Company, Inc., New York.

Small, S. (1978). Conceptual language analysis for story comprehension. Technical Report No. 565, Dept. of Computer Science, University of Maryland, College Park, Maryland.

Wilks, Yorick (1973). An AI Approach to Machine Translation. In Computer Models of Thought and Language, R. C. Schank and K. M. Colby (eds.), W. H. Freeman and Co., San Francisco, 1973.

Woods, W. A. (1970). Transition Network Grammars for Natural Language Analysis. CACM 13, 591-606.
ATN GRAMMAR MODELING IN APPLIED LINGUISTICS

T. P. KEHLER, Department of Mathematics and Physics, Texas Woman's University
R. C. WOODS, Department of Computer Science, Virginia Technological University

ABSTRACT: Augmented Transition Network grammars have significant areas of unexplored application as a simulation tool for grammar designers. The intent of this paper is to discuss some current efforts in developing a grammar testing tool for the specialist in linguistics. The scope of the systems under discussion is to display structures based on the modeled grammar. Full language definition with facilitation of semantic interpretation is not within the scope of the systems described in this paper. Application of grammar testing to an applied linguistics research environment is emphasized. Extensions to the teaching of linguistics principles and to refinement of the primitive ATN functions are also considered.

1. Using Network Models in Experimental Grammar Design

Application of the ATN to general grammar modeling for simulation and comparative purposes was first suggested by Woods (1). Motivating factors for using the network model as an applied grammar design tool are:

1. The model provides a means of organizing structural descriptions at any level, from surface syntax to deep propositional interpretations.
2. A network model may be used to represent different theoretical approaches to grammar definition. The graphical representation of a grammar permitted by the network model is a relatively clear and precise way to express notions about structure.
3. Computational simulation of the grammar enables systematic tracing of subcomponents and testing against text data.
4. Grimes (2), in a series of linguistics workshops, demonstrated the utility of the network model in environments where computational testing of grammars was not possible. Grimes, along with other contributors to the referenced work, illustrated the flexibility of the ATN in the analysis of grammatical structures.

ATN implementations have mostly focused on effective natural language understanding systems, assuming a computationally sophisticated research environment. Implementations are often in an environment which requires some in-depth understanding and support of LISP systems. Recently much of the information on the ATN formalism, applications and techniques for implementation was summarized by Bates (3).

Though many systems have been developed, little attention has been given to creating an interactive grammar modeling system for an individual with highly developed linguistics skills but poorly developed computational skills. The individual involved in field linguistics is concerned with developing concise workable descriptions of some corpus of data in a given language. Pertinent problems in developing rules for interpreting surface structures are proposed and discussed in relation to the data. In field linguistics applications, this involves developing a taxonomy of structural types followed by hypothesizing underlying rule systems which provide the highest level of data integration at a syntactic as well as semantic level of analysis.

The ATN is proposed as a tool for assisting the linguist in developing systematic descriptions of the data. It is assumed that the typical user will interface with the system at a point where an ATN and lexicon have been developed. The ATN is developed from the theoretical model chosen by the linguist. Once the ATN is implemented as a computational procedure, the user enters test data, displays structures and the lexicon, and edits the grammar to produce a refined ATN grammar description. The displayed structures provide a labeled structural interpretation of the input string based on the linguistic model used. Tracing of the parse may be used to follow the process of building the structural interpretation. Computational implementation requires giving attention to the details of the interrelationships of grammatical rules and the interaction between the grammar rule system and the lexical representation. Testing the grammar against data forces a level of systemization that is significantly more rigorous than discussion-oriented evaluation of grammar systems.

2. Design Considerations

The general design goal for the grammar testing systems described here is to provide a tool for developing experimentally driven, systematic representation models of language data. Engineering of a full language understanding system is not the primary focus of the efforts described in this paper. Ideally, one would like to provide a tool which would attract applied linguists to use such a system as a simulation environment for model development. The design goals for the systems described are:

1. Ease of use for both novice and expert modes of operation,
2. Perspicuity of grammar representation,
3. Support for a variety of linguistic theories,
4. Transportability to a variety of systems.

The prototype grammar design system consists of a grammar generator, an editor, and a monitor. The function of the grammar editor is to provide a means of defining and manipulating grammar descriptions without requiring the user to work in a specific programming language environment. The editor is also used to edit lexicons. The editor knows about the ATN environment and can provide assistance to the user as needed.

The monitor's function is to handle input and output of grammar and lexicon files, manage displays and traces of parsings, provide consultation on system use as needed, and enable the user to cycle from editor to parsing with minimum effort. The monitor can also be used to provide facilities for studying grammar efficiency. Transportability of the grammar modeling system is established by a program generator which enables implementation in different programming languages.

3. Two Implementations of Grammar Testing Systems

To develop some understanding of the design and implementation requirements for a system as specified in the previous section, two experimental grammar testing systems have been developed. A partial ATN implementation was done by Kehler (4) in a system (SNOPAR) which provided some interactive grammar testing and development facilities. SNOPAR incorporated several of the basic features of a grammar generator and monitor, with a limited editor, a grammar generator and a number of other features. Both SNOPAR and ADEPT are implemented in SNOBOL and both have been transported across operating systems (i.e., TOPS-20 to IBM systems). For implementation of text editing and program grammar generation, the SNOBOL4 language is reasonable. However, the lack of comprehensive list storage management is a limitation on the extension of the implementation to a full natural language understanding system. Originally, SNOBOL was used because a suitable LISP was not available to the implementor.
3.1 SNOPAR

SNOPAR provides the following functions: grammar creation and editing, lexicon creation and editing, execution (with some error trapping), tracing/debugging, and file handling. The grammar creation portion has, as an option, the use of an interactive facility to create an ATN. One of the goals in the design of SNOPAR was to introduce a notation which was easier to read than the LISP representation most frequently used.

Two basic formats have been used for writing grammars in SNOPAR. One separates the context-free syntax type operations from the tests and actions of the grammar. This action block format is of the following general form:

    arc-type-block
        state   arc-type :S(TO(test-action-block))
                arc-type :S(TO(test-action-block)) :F(END)

where arc-type is a CAT, PARSE or FINDWORD etc., and the test-action-block appears as follows:

    test-action-block
        state   arc-test  action :S(TO(arc-type-block))
                arc-test  action :S(TO(arc-type-block))

where an arc-test is a COMPAR or other test and an action is a SETR or BUILDS type action. Note that an additional intermediate state is introduced for the tests and actions of the ATN. The more standard format used is given as:

    state -> arc-type -> condition-test-and-action-block -> next-state

An example noun phrase network is given as:

    NP      CAT('DET')  SETR('NP','DET',Q)  :S(TO('ADJ'))
            CAT('NPR')  SETR('NP','NPR',Q)  :S(TO('POPNP'))F(FRETURN)
    ADJ     CAT('ADJ')  SETR('NP','ADJ',Q)  :S(TO('ADJ'))
            CAT('N')    SETR('NP','N',Q)    :S(TO('N'))F(END)
    NPP     PARSE(PPO)  SETR('NP','NPP',Q)  :S(TO('NPP'))
    POPNP   NP = BUILDS(NP) :(RETURN)

The Parse function calls subnetworks which consist of Parse, Cat or other arc-types. Structures are initially built through use of the SETR function, which uses the top level constituent name (e.g. NP) to form a list of the constituents referenced by the register name in SETR. All registers are treated as stacks. The BUILDS function may use the implicit register name sequence as a default to build the named structure. The top level constituent name (i.e. NP) contains a list of the registers set during the parse which becomes the default list for structure building. There are global stacks for history keeping and back up functions. Typically, for other than the initial creation of a grammar by a new user, the ATN function library of the system is used in conjunction with a system editor for grammar development. Several ATN grammars have been written with this system.

3.2 ADEPT

In an effort to make an easy-to-use simulation tool for linguists, the basic concepts of SNOPAR were extended by Woods (5) to a full ATN implementation in a system called ADEPT. ADEPT is a system for generating ATN programs through the use of a network editor, a lexicon editor, error correction and detection routines, and a monitor for execution of the grammar. Figure 1 shows the system organization of ADEPT.

The editor in ADEPT provides the following functions:
- network creation
- arc deletion or editing
- arc insertion
- arc reordering
- state insertion and deletion

[Figure 1. ADEPT system organization: ATN files, ATN program, grammar editor, and ATN function library.]

The four main editor command types are summarized below:

    E <net>              Edits a network (creates it if it doesn't exist)
    E <state> <state>    Edits arc information
    D <net>              Deletes a network
    D <state>            Deletes a state
    D <state> <state>    Deletes an arc
    I <state>            Inserts a state
    I <state> <state>    Inserts an arc
    O <state>            Orders arcs from a state
    L <filename>         Lists networks

State, network, and arc editing are distinguished by context and the arguments of the E, D, or I commands.
For a network not previously defined, E <net> causes definition of the network. The user must specify all states in the network before starting. The editor processes the state list, requesting arc relations and arc information such as the tests or arc actions. The states are used to help diagnose errors caused by misspelling of a state or omission of a state. Once the network is defined, arcs may be edited by specifying the origin and destination of the arc. The arc information is presented in the following order: arc destination, arc type, arc test and arc actions. Each of these items is displayed, permitting the user to change values on the arc list by typing in the needed information. Multiple arcs between states are differentiated by specifying the order number of the arc or by displaying all arcs to the user and requesting selection of the desired arc.

New arcs are inserted in the network by the I command. Whenever an arc insert is performed all arcs from the state are numbered and displayed. After the user specifies the number of the arc that the new arc is to follow, the arc information is entered. Arcs may be reordered by specifying the starting state for the arcs of interest using the O command. The user is then requested to specify the new ordering of the arcs.

Insertion and deletion of a state requires that the editor determine the states which may be reached from the new state as well as finding which arcs terminate on the new state. Once this information has been established, the arc information may be entered. When a state is deleted, all arcs which immediately leave the state or which enter the state from other states are removed. Error conditions existing in the network as a result of the deletion are then reported. The user then either verifies the requested deletion and corrects any errors or cancels the request.

Grammar files are stored in a list format. The PUT command causes all networks currently defined to be written out to a file. GET will read in and define a grammar. If the network is already defined, the network is not read in.

By placing a series of checking functions in an ATN editor, it is possible to filter out many potential errors before a grammar is tested. The user is able to focus on the grammar model and not on the specific programming requirements. A monitor program provides a top level interface to the user once a grammar is defined for parsing sentences. In addition, the monitor program manages the stacks as well as the SEND, LIFT and HOLD lists for the network grammar. Switches may be set to control the tracing of the parse.

An additional feature of the Woods ADEPT system is the use of easy to read displays for the lexicon and grammar. An example arc is shown:

    (NP)--CAT('DET')--(ADJ)   NO TESTS.  ACTIONS: SETR('DET')

ADEPT has been used to develop a small grammar of English. Future experiments are planned for using ADEPT in a linguistics applications oriented environment.

4. Experiments in Grammar Modeling

Utilization of the ATN as a grammar definition system in linguistics and language education is still at an early stage of development. Weischedel et al. (6) have developed an ATN-based system as an intelligent CAI tool for teaching foreign language. Within the SNOPAR system, experiments in modeling English transformational grammar exercises and modeling field linguistics exercises have been carried out. In field linguistics research some grammar development has been done. (A minimal sketch of interpreting a small network of this kind appears below.)
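To make the state/arc/register organization of such a network concrete, here is a minimal, hypothetical interpreter for a noun-phrase network in the spirit of the SNOPAR example in section 3.1. The lexicon, the arc encoding, and the register handling are simplifications invented for this sketch; they are not the actual SNOPAR or ADEPT mechanisms (which are written in SNOBOL4).

# Hypothetical interpreter for a tiny ATN-style noun-phrase network.

LEXICON = {"the": "DET", "big": "ADJ", "good": "ADJ", "monkey": "N", "man": "N"}

# Each state maps to a list of arcs: (arc_type, category, register, next_state).
NP_NET = {
    "NP":    [("CAT", "DET", "DET", "ADJ")],
    "ADJ":   [("CAT", "ADJ", "ADJ", "ADJ"),    # allow repeated adjectives
              ("CAT", "N",   "N",   "POPNP")],
    "POPNP": [("POP", None, None, None)],
}

def parse_np(words, state="NP", pos=0, registers=None):
    """Depth-first interpretation of the network; returns (structure, pos) or None."""
    registers = registers or {}
    for arc_type, category, reg, nxt in NP_NET[state]:
        if arc_type == "POP":
            return ("NP", registers), pos                     # BUILDS-like constituent
        if arc_type == "CAT" and pos < len(words) and LEXICON.get(words[pos]) == category:
            new_regs = {k: list(v) for k, v in registers.items()}  # registers behave as stacks
            new_regs.setdefault(reg, []).append(words[pos])        # SETR-like register update
            result = parse_np(words, nxt, pos + 1, new_regs)
            if result is not None:
                return result                                  # succeed on first workable arc
    return None                                                # no arc applies (FRETURN)

print(parse_np(["the", "big", "monkey"]))
# (('NP', {'DET': ['the'], 'ADJ': ['big'], 'N': ['monkey']}), 3)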
Of interest here is the systematic formulation of rule systems associated with the syntax and semantics of natural language subsystems. Proposed model grammars can be evaluated for efficiency of representation and extendibility to a larger corpus of data. Essential to this approach is the existence of a self-contained, easy-to-use, transportable ATN modeling system. In the following sections, an example application of grammar testing to a field linguistics exercise and an application to modeling a language indigenous to the Philippines are given.

4.1 An Exercise in Computationally Assisted Taxonomy

Typical exercises in a first course in field linguistics give the student a series of phrases or sentences in a language not known to the student. Taxonomic analysis of the data is to be done, producing a set of formulas for constituent types and the hierarchical relationship of constituents. In this particular case a tagmemic analysis is done. Consider the following three sentences selected from an Apinaye exercise (Problem 100) (7):

    kukren kokoi          the monkey eats
    kukren kokoi rach     the big monkey eats
    ape rach mih mech     the good man works well

First a simple lexicon is constructed from this and other data. Secondly, immediate constituent analysis is carried out to yield the following tagmemic formulae:

    ICL := Pred:VP + Subj:NP
    NP  := Head:N + Nmod:AD
    VP  := Head:V + Vmod:AD

The ATN is then defined as a simple syntactic organization of constituent types. The SNOPAR representation of this grammar would be:

    ICL     PARSE(VPO)  SETR('ICL','Pred',Q)  :S(TO('SU'))F(FRETURN)
    SU      PARSE(NPO)  SETR('ICL','Subj',Q)  :S(TO('POPICL'))F(FRETURN)
    POPICL  ICL = BUILDS(ICL) :(RETURN)
    VP      CAT('V')    SETR('VP','Head',Q)   :S(TO('VMOD'))F(FRETURN)
    VMOD    CAT('AD')   SETR('VP','Vmod',Q)
    POPVP   VP = BUILDS(VP) :(RETURN)
    NP      CAT('N')    SETR('NP','Head',Q)   :S(TO('NMOD'))F(FRETURN)
    NMOD    CAT('AD')   SETR('NP','Nmod',Q)
    POPNP   NP = BUILDS(NP) :(RETURN)
    END

thus permitting the parse of the first sentence (kukren kokoi) as:

    (ICL (Pred ...) (Subj ...))

An English gloss may be used, as in the following example:

    GLOSS: WORK ... MAN WELL/GOOD    (The good man works a lot.)
    STATE: ICL
    INPUT: (ICL (Pred (Head APE) (Vmod RACH))
                (Subj (Head MIH) ...))

Each sentence in the exercise may be entered, making corrections to the grammar as needed. Once the basic notions of syntax and hierarchy are established, the model may then be extended to incorporate context-sensitive and semantic features. Frequently, in proposing a taxonomy for a series of sentences, one is tempted to propose numerous structural types in order to handle all of the data. The orientation of grammar testing encourages the user to look for more concise representations. Tracing the sentence parse can yield information about the efficiency of the representation. Tracing is also illustrative to the student, permitting many steps of the parse to be observed.

4.2 Cotabato Manobo

An ATN representation of a grammar for Cotabato Manobo was done by Errington (8) using the manual produced by Grimes (2). Recently, the grammar was implemented and tested using SNOPAR. The implementation took place over a three month period, with initial implementation at word level and eventual extension to the clause level with conjunctions and embedding. Comments were used throughout the grammar to explain the rationale for particular arc types, tests or actions. A wide variety of clause types are handled by the grammar. A specific requirement in the Manobo grammar is the ability to handle a significant amount of testing on the arcs.
It is not uncommon, for example, to have three or four arcs of the same type differentiated by checks on registers set at previous points in the parse. With nine network types, this leads to a considerable amount of time being spent in context checks. A straightforward approach to the grammar design leads to a considerable amount of backing up in the parse. While a high speed parse was not an objective of the design, it did point out the difficulty of designing grammars of significant size without getting into programming practice and applying more efficient parsing routines. Since an objective of the project is to provide a system which emphasizes the linguistics and not programming practice, it was necessary to maintain descriptive clarity at the sacrifice of performance. An example parse for a clause is given:

    MAEN SA ETAW SA BEGAS    -- The person is eating rice
    GLOSS: EAT THE PERSON/PEOPLE THE RICE
    STATE: CL
    INPUT:
    (CL (PRED ... (VAFF EG) ... (VFOC ACTORF) ...)     the action is 'eat'
        (FOC ... (DET SA) ...)                         the focus is 'the people'
        (ACTOR ... (DET SA) ...)                       the actor is 'the people'
        (NONACT ... (DET SA) (NUC ...) ...))           the object is 'rice'

5. Summary and Conclusions

Development of a relatively easy to use, transportable grammar design system can make possible the use of grammar modeling in the applied linguistics environment, in education, and in linguistics research. A first step in this effort has been carried out by implementing two experimental systems, SNOPAR and ADEPT, which emphasize notational clarity and an editor/monitor interface to the user. The network editor is designed to provide error handling, correction and interaction with the user in establishing a network model of the grammar. Some applications of SNOPAR have been made to testing tagmemically based grammars. Future use of ADEPT in linguistics education and research is planned.

Developing a user-oriented ATN modeling system for linguists provides certain insights into the ATN model itself. Such issues as the perspicuity of the ATN representation of a grammar and the ATN model's applicability to a variety of language types can be evaluated. In addition, a more widespread application of ATNs can lead to some standardization in grammar modeling. The related issue of developing interfaces for user extension of grammars in natural language processing systems can be investigated through increased use of the ATN model by persons who are not specialists in artificial intelligence. The systems' general design does not limit itself to application to the ATN model.

6. References

1. Woods, W., Transition Network Grammars for Natural Language Analysis, Communications of the ACM, Vol. 13, No. 10, 1970.
2. Grimes, J., Transition Network Grammars, A Guide, in Network Grammars, Grimes, J., ed., 1975.
3. Bates, Madeleine, The Theory and Practice of Augmented Transition Network Grammars, Lecture Notes in Computer Science, Goos, G. and Hartmanis, J., eds., 1978.
4. Kehler, T. P., SNOPAR: A Grammar Testing System, AJCL 55, 1976.
5. Woods, C. A., ADEPT - Testing System for Augmented Transition Network Grammars, Masters Thesis, Virginia Tech, 1979.
6. Weischedel, R. M., Voge, W. M., James, M., An Artificial Intelligence Approach to Language Instruction, Artificial Intelligence, Vol. 10, No. 3, 1978.
7. Merrifield, William R., Constance M. Naish, Calvin R. Rensch, Gillian Story, Laboratory Manual for Morphology and Syntax, 1967.
8. Errington, Ross, Transition Network Grammar of Cotabato Manobo.
In Studies in Philippine Linguistics, edited by Casilda Edrial-Luzares and Austin Hale, Volume 3, Number 2. Manila: Summer Institute of Linguistics, 1979.
Interactive Discourse: Looking to the Future
Panel Chair's Introduction

Bonnie Lynn Webber
University of Pennsylvania

In any technological field, both short-term and long-term research can be aided by considering where that technology might be ten, twenty, fifty years down the pike. In the field of natural language interactive systems, a 21 year vision is particularly apt to consider, since it brings us to the year 2001. One well-known vision [1] of 2001 includes the famous computer named Hal - one offspring, so to speak, of the major theoretical and engineering breakthrough in computers that Clarke records as having occurred in the early 1980's. This computer Hal is able to understand and converse in perfect idiomatic English (written and spoken) with the crew of the spacecraft Discovery. And not just task-oriented dialogues, mind you! Hal is a far cry from today's prototype natural language query systems, intelligent CAI systems, diagnostic assistance systems, and Kurzweil machines. For one thing, Hal is not just responsive: he takes the initiative. His first documented utterance on board the spacecraft Discovery comes at a time when the crewmen Bowman and Poole are engrossed in a fading vision screen image of Poole's family on Earth, on the occasion of Poole's birthday.

"Sorry to interrupt the festivities," said Hal, "but we have a problem."

Not only can Hal converse in perfect idiomatic English, but he is a master of problem context (Panel 1) and social context (Panel 2) as well! Now Hal is clearly where we currently are not at, and 2001 is clearly only one man's vision (albeit a very special man). Yet Clarke's depiction of Hal raises several issues which, along with other ones, provide a cue for the current panel discussion. The issues include:

1. Where is it that we want to have, must have, can expect to have, or conversely, should not have to have, Natural Language Interactive Systems?
2. Barring Clarke's reliance on the triumph of automatic neural network generation, what are the major hurdles that still need to be overcome before Natural Language Interactive Systems become practical?
3. What effects can we expect, deriving from the availability of, what to me seem, almost magical developments in hardware?
4. Are there practical (and acceptable) alternatives to interacting with machines in natural language in the various situations that provide a positive answer to question 1?
5. Should we be shooting for spoken Natural Language interactions - either input or output or both - or should we not, like Clarke, go the whole way and expect our machines to read lips as well?

REFERENCES

1. Clarke, Arthur C., 2001: A Space Odyssey, New American Library, 1968.
PROSPECTS FOR PRACTICAL NATURAL LANGUAGE SYSTEMS

Larry R. Harris
Artificial Intelligence Corporation
Newton Centre, Mass. 02159

As the author of a "practical" NL data base query system, one of the suggested topics for this panel is of particular interest to me. The issue of what hurdles remain before NL systems become practical strikes particularly close to home. As someone with a more pragmatic view of NL processing, my feeling is, not surprisingly, that we already have the capability to construct practical NL systems. Significant enhancement of existing man-machine communication is possible within the current NL technology if we set our sights appropriately and are willing to take the additional effort to craft systems actually worthy of being used. The missing link isn't a utopian parsing algorithm yet to be discovered. The hurdles to practical NL systems are of a much more conventional variety that require, as Edison said, more perspiration than inspiration.

It should be clear that none of my remarks conflict with the obvious fact that NL research has miles to go and that there are innumerable unresolved issues that will continue to require research beyond the foreseeable future. Our understanding of NL has merely scratched the surface, and it is fair to say that we don't even understand what all the problems are, much less their solution. But by using the powerful techniques that have already resulted from NL research in extremely restricted micro-worlds, it is possible to attain a high enough level of performance to be of practical value to a significant user community. It is these highly specialized systems that can be made practical using the existing technology. I will not speculate on when a general NL capability will become practical, nor will I speculate on whether the creation of practical specialized systems will contribute to the creation of a more general capability. The fact that there is a clear need for improved man-machine communication and that current specialized systems can be built to meet that need is reason enough to construct them.

The issue of whether practical specialized NL systems can now be built is, in my opinion, not a debatable issue. Those of us on this panel and other researchers in the field simply don't have the right to determine whether a system is practical. Only the users of such a system can make that determination. Only a user can decide whether the NL capability constitutes sufficient added value to be deemed practical. Only a user can decide if the system's frequency of inappropriate response is sufficiently low to be deemed practical. Only a user can decide whether the overall NL interaction, taken in toto, offers enough benefits over alternative formal interactions to be deemed practical.

If we accept my point that practicality is in the eyes of the user, then we are led to the inescapable conclusion that practical NL systems can now be built, because several commercial users of such a system [Pruitt, O'Donnell] have gone on record stating that the NL capability within the confines of data base query is of significant practical value in their environment. These statements, plus the fact that a substantial body of users employ NL data base query in daily productive use, clearly meet the spirit of a "practical" NL system.

The main point of my remarks is not to debate the semantics of practicality, but to point out that whatever level of utility has been achieved is due only in small part to the sophistication of the NL component. The utility comes primarily from a custom fitting of the NL component to the exact requirements of the domain, and from the painstaking crafting of the lexicon and grammar to achieve the necessary density of linguistic coverage. In a sense, practicality is derived from a pragmatic approach that emphasizes proper performance on the vast bulk of rather uninteresting dialog, rather than focusing on the much smaller portion of intellectually challenging input. An NL system that is extremely robust within well-defined limitations is far more practical than a system of greater sophistication that has large gaps in its coverage. Attaining this required level of robustness and density of linguistic coverage is not necessarily as intellectually challenging as basic research, nor is it necessarily even worthy of publication. But let's not kid ourselves -- it is absolutely necessary to achieve a practical capability!

It has never been clear to me that members of the ACL were interested in practical NL systems, nor is it clear that they should be. But I think it is fair to say that there aren't many practical NL systems because there aren't very many people trying to build them! I would estimate, on the basis of my experience, that it takes an absolute minimum of 2 years, and probably more like 3 years, to bring a successful research prototype NL system to the level of practicality. This "development" process is well known in virtually all scientific and engineering disciplines. It is only our naivete of software engineering that causes us to underestimate the magnitude of this process. I'm afraid the prospects for practical NL systems look bleak as long as we have many NL researchers and few NL developers.
The utility comes primarily from a custom fitting of the NL component to the exact requirements of the domain; and from the painstaking crafting of the lexicon and grammar to achieve tha necessary density of linguistic coverage. In a sense, practicality is derived from a pragmatic approach that emphasizes proper performance on the vast bulk of rather uninteresting dialog, rather than focusing on the much smaller portion of intellectually challenging input. A NL system that is extrememly robust within well-defined limitations is far more practical than a system of greater sophistication that has large qaps in the coveraqe. ~ttaining this required level of robustness and density of linguistic coverage is not necessarily as intellectually challenging as basic research, nor is it necessarily even worthy of publication. But let's not kid ourselves -- it is absolutely necessary to achieve a practical capability! It has never been clear to me that members of the ACL were interested in practical NL systems, nor is it clear that they should be. But I think that it is fair to say that there aren't many practical NL systems because there aren't very many people trying to build them! I would estimate, on the basis of my experience, that it takes an absolute minimum of 2 years, and probably more like 3 years, to bring a successful research prototype NL system to the level of practicality. This "development" process is well known in virtually all scientific and engineering disciplines. It is only our naivete of software engineering that causes us to underestimate the magnitude of this process. I'm afraid the prospects for practical NL systems look bleak as long as we have many NL researchers and few NL developers. Pruitt, J., "~ user's experience with ROBOT," Proceedings of the Fourth Annual ADABAS rJser's Meeting, April, 1977. O'Donnell, J., "Experience with ROBOT at DuPont," Natural Computer Conference Panel, May, 1980. 129
FUTURE PROSPECTS FOR COMPUTATIONAL LINGUISTICS

Gary G. Hendrix
SRI International

Preparation of this paper was supported by the Defense Advanced Research Projects Agency under contract N00039-79-C-0118 with the Naval Electronic Systems Command. The views expressed are those of the author.
This is, of course, rather unfair to the science; but I believe that it bodes well for our future. After all, most of the current sponsors of research on computational linguistics understand the scientific nature of the enterprise and are likely to continue their support even in the face of minor successes on the engineering front. The impact of an engineering arm can only add to our field's basis of support by bringing in new suport from the commercial sector. One note of caution is appropriate, however. There is a real possibility that as commercial enterprises enter the natural-language field, they will seek to build in-house groups by attracting researchers from universities and nonprofit institutions. Although this would result in the creation of more jobs for computational linguists, it would also result in proprietary barriers being established between research groups. The net effect in the short term might actually be to retard scientific progress. D. The State of Applied Work I. Accessin~ Databases Currently, the most commercially viable task for natural-language processing is that of providing access to databases. This is because databases are among the few types of symbolic knowledge representations that are computationally efficient, are in widespread use, and have a semantics that is well understood. In the last few years, several systems, including LADDER [9], PLANES [29], REL [26], and ROBOT [8], have achieved relatively high levels of proficiency in this area when applied to particular databases. ROBOT has been introduced as a commercial product that runs on large, mainframe computers. A pilot REL product is currently under development that will run on a relatively large personal machine, the HP 9845. This system, or something very much like it, seems likely to reach the marketplace within the next two or three years. Should ROBOT- and REL-like systems prove to be commercial successes, other systems with increasing levels of sophistication are sure to follow. 2. Immediate Problems A major obstacle currently limiting the commercial viability of natural-language access to databases is the problem of telling systems about the vocabulary, concepts and linguistic constructions associated with new databases. The most proficient of the application systems have been hand-tailored with extensive knowledge for accessing just ONE database. Some systems (e.g., ROBOT and REL) have achieved a 131 degree of transportability by using the database itself as a source of knowledge for guiding linguistic processes. However, the knowledge available in the database is generally rather limited. High-performance systems need access to information about the larger enterprise that provides the context in which the database is to be used. As pointed out by Tennant [27], users who are given natural-language access to a database expect not only to retrieve information directly stored there, but also to compute "reasonable" derivative information. For example, if a database has the location of two ships, users will expect the system to be able to provide the distance between them--an item of information not directly recorded in the database, but easily computed from the existing data. In general, any system thatis to be widely accepted by users must not only provide access to database information, but must also enhance that primary information by providing procedures that calculate secondary attributes from the data actually stored. 
Data enhancement procedures are currently provided by LADDER and a few other hand-built systems. But work is needed to devise means for allowing system users to specify their own database enhancement functions end to couple their functions with the natural-language component. Efforts are now underway (e.g. [26] [13]) to simplify the task of acquiring and coding the knowledge needed to transport high-performance systems from one database to another. It appears likely that soon much of this task can be automated or performed by a database administrator, rather than by a computational linquist. When this is achieved, natural-language access to data is likely to move rapidly into widespread use. E. New Hardware VLSI (Very Large Scale Integration of computer circuits on single chips) is revolutionizing the computer industry. Within the last year, new personal computer systems have been announced that, at relatively low cost, will provide throughputs rivaling that of the Digital Equipment KA-IO, the time-sharing research machine of choice as recently as seven years ago. Although specifications for the new machines differ, a typical configuration will support a very large (32 bit) virtual address space, which is important for knowledge-intensive natural-language processing, and will provide approximately 20 megabytes of local storage, enough for a reasonable-size database. Such machines will provide a great deal of personal computing power at costs that are initially not much greater than those for a single user's access to a time-shared system, and that are likely to fall rapidly. Hardware costs reductions will be particularly significant for the many small research groups that do not have enough demand to justify the purchase of a large, time-shared machine. The new generation of machines will have the virtual address space and the speed needed to overcome many of the technical bottlenecks that have hampered research in the past. For example, researchers may be able to spend less time worrying about how to optimize inner loops or how to split large programs into multiple forks. The effort saved can be devoted to the problems of language research itself. The new machines will also make it economical to bring co 3iderable computing to people in all sectors o f the economy, including government, the military, small business, and to smaller units within large businesses. Detached from the computer wizards that staff the batch processing center or the time-shared facility, users of the new personal machines will need to be more self reliant. Yet, as the use of personal computers spread, these users are likely to be increasingly less sophisticated about computation. Thus, there will be an increasing demand to make personal computers easier to use. As the price of computation drops (and the price of human labor continues to soar), the use of sophisticated means for interacting intelligently with a broad class of computer users will become more and more attractive and demands for natural-language interfaces are likely to mushroom. F. Future Directions for Basic Research i. The Research Base Work on computational linguistics appears to be focusing on a rather different set of issues than those that received attention a few years ago. In particular, mechanisms for dealing with syntax and the literal propositional content of sentences have become fairly wall understood, so that now there is increasing interest in the study of language as a component in a broader system of goal-motivated behavior. 
Within this framework, dialogue participation is not studied as a detached linguistic phenomenon, but as an activity of the total intellect, requiring close coordination between language-specific and general cognitive processing. Several characteristics of the communicative use of language pose significant problems. Utterances are typically spare, omitting information easily inferred by the hearer from shared knowledge about the domain of discourse. Speakers depend on their hearers to use such knowledge together with the context of the preceding discourse to make partially specified ideas precise. In addition, the literal content of an utterance must be interpreted within the context of the beliefs, goals, and plans of the dialogue participants, so that a hearer can move beyond literal content to the intentions that lie behind the utterance. Furthermore, it is not sufficient to consider an utterance ae being addressed to a single purpose; typically it serves multiple purposes: it highlights certain objects and relationships, conveys an attitude toward them, and provides links to previous utterances in addition to communicating some propositional content. An examination of the current state of the art in natural-language processing systems reveals several deficiencies in the combination and coordination of language-specific and general-purpose reasoning capabilities. Although there are some systems that coordinate different kinds of language- specific capabilities [3] [12] [20] [16] [30] [:7], and some that reason about limited action scenarios [21] [15] [19] [25] to arrive at an interpretation of what has been said, and others that attempt to account for some of the ways in which context affects meaning [7] [I0] [18] [14], one or ~ore of the following crucial limitations is evident in every natural- language processing system constructed to date: Interpretation is literal (only propositional content is determined). The user's knowledge and beliefs are assumed to be idontical with the system's. The user's plans and goals (especially as distinct from those of the system) ere ignored. Initial progress has been made in overcoming some of these limitations. Wilensky [28] has investigated the use of goals and plans in a computer system that interprets stories (see also [22] [4]). Allen and Perrault [l] and Cohen [63 have examined the interaction between beliefs and plans in task-oriented dialogues and have implemented e system that uses 132 information about what its "hearer" knows in order to plan and to recognize a limited set of speech acts (Searle [23] [24]). These efforts have demonstrated the viability of incorporating planning capabilities in a natural-language processing system, but more robust reasoning and planning capabilities are needed to approach the smooth integration of language-specific and general reasoning capabilities required for fluent communication in natural language. 2. Some Predictions Basic research provides a leading indicator with which to predict new directions in applied science and engineering; but I know of no leading indicator for basic research itself. About the best wc can do is to consider the current state of the art, seek to identify central problems, and predict that those problems will be the ones receiving the most attention. The view of language use as an activity of the total intellect makes it clear that advances in computational linguistics will be closely tied to advances in research on general-purpose common-sense reasoning. 
Hobbs [11], for example, has argued that 10 seemingly different and fundamental problems of computational linguistics may all be reduced to problems of common-sense deduction, and Cohen's work clearly ties language to planning. The problems of planning and reasoning are, of course, central problems for the whole of AI. But computational linguistics brings to these problems its own special requirements, such as the need to consider the beliefs, goals, and possible actions of multiple agents, and the need to precipitate the achievement of multiple goals through the performance of actions with multiple-faceted primary effects. There are similar needs in other applications, but nowhere do they arise more naturally than in human language. In addition to a growing emphasis on general- purpose reasoning capabilities, I believe that the next few years will see an increased interest in natural- language generation, language acquisition, information- science applications, multimedia communication, and speech. Generation: In comparison with interpretation, generation has received relatively little attention as a subject of study. One explanation is that computer systems have more control over output than input, and therefore have been able to rely on canned phrases for output. Whatever the reason for past neglect, it is clear that generation deserves increased attention. As computer systems acquire more complex knowledge bases, they will require better means of communicating their knowledge. More importantly, for a system to carry on a reasonable dialogue with a user, it must not only interpret inputs but also respond appropriately in context, generating responses that are custom tailored to the (assumed) needs and mental state of the user. Hopefully, much of the same research that is needed on planning and reasoning to move beyond literal content in interpretation will provide a basis for sophisticated generation. Acquisition: Another generally neglected area, at least computationally, is that of language acquisition. Berwick [2] has made an interesting start in this area with his work on the acquisition of grammar rules. Equally important is work on acquisition of new vocabulary, either through reasoning by analogy [5] or simply by being told new words [13]. Because language acquisition (particularly vocabulary acquisition) is essential for moving natural-language systems to new domains, I believe considerable resources are likely to be devoted to this problem and that therefore rapid progress will ensue. Information Science: One of the greatest resources of our society is the wealth of knowledge recorded in natural-language texts; but there are major obstacles to placing relevant texts in the hands of those who need them. Even when texts are made available in machine-readable form, documents relevant to the solution of particular problems are notoriously difficult to locate. Although computational linguistics has no ready solution to the problems of information science, I believe that it is the only real source of hope, and that the future is likely to bring increased cooperation between workers in the two fields. Multimedia Communication: The use of natural language is, of course, only one of several means of communication available to humans. 
In viewing language use from a broader framework of goal-directed activity, the use of other media and their possible interactions with language, with one another, and with general- purpose problem-solving facilities becomes increasingly important as a subject of study. Many of the most central problems of computational linguistics come up in the use of any medium of communication. For example, one can easily imagine something like speech acts being performed through the use of pictures and gestures rather than through utterances in language. In fact, these types of communicative acts are what people use to communicate when they share no verbal language in common. As computer systems with high-quality graphics displays, voice synthesizers, and other types of output devices come into widespread use, an interesting practical problem will be that of deciding what medium or mixture of media is most appropriate for presenting information to users under a given set of circumstances. I believe we can look forward to rapid progress on the use of multimedia communication, especially in mixtures of text and graphics (e.g., as in the use of a natural-language text to help explain a graphics display). Spoken Input: In the long term, the greatest promise for a broad range of practical applications lles in accessing computers through (continuous) spoken language, rather than through typed input. Given its tremendous economic importance, I believe a major new attack on this problem is likely to be mounted before the end of the decade, but I would be uncomfortable predicting its outcome. Although continuous speech input may be some years away, excellent possibilities currently exist for the creation of systems that combine discrete word recognition with practical natural-language processing. Such systems are well worth pursuing as an important interim step toward providing machines with fully natural communications abilities. G. Problems of Technology Transfer The expected progress in basic research over the next few years will, of course, eventually have considerable impact on the development of practical systems. Even in the near term, basic research is certain to produce many spinoffs that, in simplified form, will provide practical benefits for applied systems. But the problems of transferring scientific progress from the laboratory to the marketplace must not be underestimated. In particular, techniques that work well on carefully selected laboratory problems are often difficult to use on a large-scale basis. (Perhaps this is because of the standard scientific practice of selecting as a subject for experimentation the simplest problem exhibiting the phenomena of interest.) 133 As an example of this difficulty, consider knowledge representation. Currently, conventional database management systems (DBHSs) are the only systems in widespread use for storing symbolic information. The AI community, of course, has a number of methods for maintaining more sophisticated knowledge bases of, say, formulas in first-order logic. But their complexity and requirements for great amounts of computer resources (both memory and time) have prevented any such systems from becoming a commercially viable alternative to standard DBMSs. I believe that systems that maintain moaels of the ongoing dialogue and the changing physical context (as in, for example, Gross [7] and Robinson [~9]) or that reason about the mental states of users will eventually become important in practical applications. 
But the computational requirements for such systems are so much greater than those of current applied systems that they will have little commercial viability for some time. Fortunately, the linguistic coverage of several current systems appears to be adequate for many practical purposes, so commercialization need not wait for more advanced techniques to be transferred. On the other hand, applied systems currently are only barely up to their tasks, and therefore there is a need for an ongoing examination of basic research results to find ways of repackaging advanced techniques in cost- effective forms. In general, the basic science and the application of computational linguistics should be pursued in parallel, with each aiding the other. Engineering can aid the science by anchoring it to actual needs and by pointing out new problems. Basic science can provide engineering with techniques that provide new opportunities for practical application. 134 1. 2. 3. 4. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. REFERENCES Allen, J. & C. Perrault. 1978. Participating in Dialogues: Understanding via plan deduction. Proceedings, Second National Conference, Canadian Society for Computational Studies of Intelligence, Toronto, Canada. Berwick, B. C., 1980. Computational Analogues of Constraints on Grammars: A Model of Syntactic Acquisition. The 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania, June 1980. Bobrow, D. G., et al. 1977. GUS, A Frame Driven Dialog System. Artificial Intelligence, 8, I~5- 173. Carbonell, J. G. 1978. Computer Models of Social and Political Reasoning. Ph.D. Thesis, Yale University, New Haven, Connecticut. Carbonell, J. G. 1980. Metaphor--A Key to Extensible Semantic Analysis. The 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Pennsylvania, June 1980. Cohen, P. 1978. On knowing what to say: planning speech acts. Technical Report No. 118, Department of Computer Science, University of Toronto. January 1978. Grosz, B. J., 1978. Focusing in Dialog. Proceedings of TINLAP-2, Urbana, Illinois, 24-26 July, 1978. L. R. Harris, 1977. User Oriented Data Base Query with the ROBOT Natural Language Query System. Proc. Third International Conference on Very Large Data Bases, Tokyo (October 1977). G. G. Hendrix, E. D. Sacerdoti, D. Sagalowicz, and J. Slocum, 1978. Developing a Natural Language Interface to Complex Data. ACM Transactions on Database Systems, Vol. 3, No. 2 (June 1978). Hobbs, J. 1979. Coherence and coreference. Cognitive Science. Vol. 3, No. I, 67-90. Hobbs, J. 1980. Selective inferencing. Third National Conference of Canadian Society for Computational Studies of Intelligence. Victoria, British Columbia. May 1980. Landsbergen, S. P. J., 1976. Syntax and Formal Semantics of English in PHLIQAI. In Coling 76, Preprints of the 6th International Conference on Computational Linguistics, Ottawa, Ontario, Canada, 28 June - 2 July 1976. No. 21. Lewis, w. H., and Hendrix, G. G., 1979. Machine Intelligence: Research and Applications -- First Semiannual Report. SRI International, Menlo Park, California, October 8, 1979. Mann, W., J. Moore, & J. Levin 1977. A comprehension model for human dialogue. Proceedings, International Joint Conference on Artificial Intelligence, 77-87, Cambridge, Mass. August 1977. Novak, G. 1977. Representations of knowledge in a program for solving physics problems. Proceedings, International Joint Conference on Artificial Intelligence, 286-291, Cambridge, Mess. August 1 977. 16. 17. 18. 19. 
20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 3O. Patrick, S. R. 1978. Automatic Syntactic and Semantic Analysis. In Proceedings of the Interdsciplainary Conference on Automated Text Processing (Bielefeld, German Federal Republic, 8- 12 November 1976). Edited by J. Petofi and S. Allen. Reidel, Dordrecht, Holland. Reddy, D. R., et al. 1977. Speech Understanding Systems: A Summary of Results of the Five-Year Research Effort. Department of Computer Science. Carnegie-Mellon University, Pittsburgh, Pennsylvania, August, 1977. Rieger, C. 1975. Conceptual Overlays: A Mechanism for the Interpretation of Sentence Meaning in Context. Technical Report TR-554. Computer Science Department, University of Maryland, College Park, Maryland. February 1975. Robinson, Ann E. The Interpretation of Verb Phrases in Dialogues. Technical Note 206, Artificial Intelligence Center, SRI International, Menlo Park, Ca., January 1980. Sager, N. and R. Grishman. 1975. The Restriction Language for Computer Grammars. Communications of the ACM, 1975, 18, 390-400. Schank, R. C., and Yale A.I. 1975. SAM--A Story Understander. Yale University, Department of Computer Science Research Report. Schank, R. and R. Abelson. 1977. Scripts, plans, goals, and understanding. Hillsdale N.J.: Laurence Erlbaum Associates. Searle, J. 1969. Speech acts: An essay in the philosophy of language. Cambridge, England: Cambridge University Press. Searle, J 1975. Indirect speech acts. In P. Cole and J. Morgan (Eds.), Syntax and semantics, Vol. 3, 59-82. New York: Academic Press. Sidner, C. L. 1979. A Computational Model of Co- Reference Comprehension in English. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts. F. B. Thompson and B. H. Thompson, 1975. Practical Natural Language Processing: The REL System as Prototype. In M. Rubinoff and M. C. Yovits, eds., Advances in Computers 13 (Academic Press, New York, 1975). H. Tennant, "Experience with the Evaluation of Natural Language Question Answerers," &Proc. Sixth International Joint Conference on Artificial Intelligene&, Tokyo, Japan (August 1979). Wilensky, R. 1978. "Understanding Goal-Based Stories." Yale University, New Haven, Connecticut. Ph.D. Thesis. D. Waltz, "Natural Language Access to a Large Data Base: an ~Igineering Approach," Proc. 4th Internatioal Joint Conference on Artificial Intelligence, Tbilisi, USSR, pp. 868-872 (September 1975). Woods, W. A., et al. 1976. Speech Understanding Systems: Final Report. BBN Report No. 3438, Bolt Beranek and Newman, Cambridge, Massachusetts. 135
NATURAL LANGUAGE INTERACTION WITH MACHINES: A PASSING FAD? OR THE WAY OF THE FUTURE?
A. Michael Noll
American Telephone and Telegraph Company
Basking Ridge, New Jersey 07920

People communicate primarily by two modes: acoustic -- the spoken word; and visual -- the written word. It is therefore natural that people would expect their communications with machines to likewise use these two modes. To a considerable extent, speech is probably the most natural of the natural-language modes. Hence, a fascination exists with machines that respond to spoken commands with synthetic speech responses to create a natural-language interactive discourse. However, although vast amounts of research and development effort have been expended in the search for systems that understand human speech and respond with synthetic speech, the goal of the perfect system remains as elusive as ever. Systems for producing natural-sounding speech for large vocabularies with unrestricted grammatical structures, and for recognizing spoken speech for large vocabularies with unlimited grammatical structures and any number of talkers, are still beyond the state of linguistics and computer science and technology.

Given the problems in the speech domain, it is not surprising that most interactions between people and machines are in the visual mode, frequently using alphanumeric keyboards as input and textual display as output. Such visual terminals are already in fairly widespread use in industry and are used for a variety of applications including computer programming, text editing, and data-base access. The telephone allows speech telecommunications over distance between people. Future visual terminals for the home and businesses will allow textual telecommunications between people. These visual terminals could also be used to telecommunicate with machines in a way that is presently difficult using the telephone and speech. Viewdata, or videotex, systems are promised soon for the home and will allow data-base access and transactions with machines and textual messages between people. Some viewdata systems use elaborate tree searches to reach the desired frame of information. Some people believe that tree searches will be "unnatural" for many users and that some other more-natural language will be needed to search and access these data-base systems.

One conclusion is that the future will see more choices in mode for telecommunications between people and with machines. The choice among alternate modes will probably depend upon the specific application. For example, textual messages might be both easier to enter by keyboard and easier to read on a CRT screen than speaking to a recording machine and listening to a recorded message. Social chatting, however, might be best over the telephone, while arranging a date with a stranger might be less revealing if done in the textual mode. Considerable opportunities exist for basic research to explore the suitability of these alternate modes for different communications applications.

The fascination of technologists with speech-synthesis chips is about to result in a variety of stand-alone appliances that speak. Ovens that state when the roast is done, washing machines that call for the addition of fabric softeners, automobiles that inform the driver that the door is open, and many other applications will soon abound in the marketplace. In most of these applications, synthetic speech will substitute for a lamp or other form of visual display.
The environment will be polluted with the noise of buzzy synthetic speech. Many of these applications will undoubtedly be little more than passing fads. But in some circumstances synthetic speech will become the way of the future. One example would be synthetic-speech announcements of floors in an elevator, thereby eliminating crooked necks!

Most of the preceding examples are very restricted in terms of the language used for the interaction with machines. The problem with unrestricted natural language for communication with machines is that no automatic way has yet been discovered to extract meaning in either the speech or the textual mode. The textual mode does eliminate the need for acoustic analysis and hence has been more extensively used in most systems for restricted, specialized applications. However, even if either mode were equally near perfect, questions would still arise about user preference for one mode over the other. Thus, in the end the future will be decided by the votes of consumers in the marketplace as they choose from the many options presented by technology. The shrewd entrepreneur will use consumer preference and needs to help illuminate in advance the desires and needs of the marketplace. Basic research in linguistics, human behaviour, natural language, and other ancillary fields will have an important role in developing solutions and in understanding people's needs and behaviour.
NATURAL VS. PRECISE CONCISE LANGUAGES FOR HUMAN OPERATION OF COMPUTERS: RESEARCH ISSUES AND EXPERIMENTAL APPROACHES Ben S~eiderman, Department of Computer Science University of Maryland, College Park, MD. This paper raises concerns that natural language front ends for computer systems can limit a researcher's scope of thinking, yield inappropriately complex systems, and exaggerate public fear of computers. Alternative modes of computer use are suggested and the role of psychologically oriented controlled experimentation is emphasized. Research methods and recent experimental results are briefly reviewed. i. INTRODUCTI ON The capacity of sophisticated modern computers to manipulate and display symbols offers remarkable oppor- tunities for natural language co~nunication among people. Text editing systems are used to generate business or personal letters, scientific research papers, newspaper articles, or other textual data. Newer word processing, electronic mail, and computer teleconferencing systems are used to format, distribute, and share textual data. Traditional record keeping systems for payroll, credit verification, inventory, medical services, insurance. or student grades contain natural language/textual data, In these cases the computer is used as a communication medium between humans, which may involve intermediate stages where the computer is used as a tool for data manipulation. Humans enter the data in natural lan- guage form or with codes which represent pieces of text (part number instead of a description, course number instead of a title, etc.). The computer is used to store the data in an internal form incomprehensible to most humans, to make updates or transformations, and to output it in a form which humans can read easily. These systems should act in a comprehensible "tool-like" manner in which system responses satisfy user expec- tations. Several researchers have commented on the impor- tance of letting the user be in control [i], avoiding acausality [2], promoting the personal worth of the individual [3], and providing predictable behavior [4]. Practitioners have understood this principle as well: Jerome Ginsburg of the Equitable Life Assur8nce Society prepared an in-house set of guidelines which contained this powerful claim: '~othing can contribute more to satisfactory system per- formance than the conviction on the part of the terminal operators that they are in control of the system and not the system in control of them. Equally, nothing can be more damaging to satisfactory system opemtion, regardless of how well all other aspects of the imple- mentatlon have been handled, than the operator's con- viction that the terminal and thus ~he @~t.e~ ~re in control, have 'a mind of their own,' or are tugging against rather than observing the operator's wishes." I believe that control over system function and pre- dictable behavior promote the personal worth of the user, provide satisfaction, encourage competence, and stimulate confidence. Many successful systems adhere to these principles and offer terminal operators a useful tool or an effective c~maunication media. An idea which has attracted researchers is to have the computer take coded information (medical lab test values or check marks on medical history forms) and generate a natural language report which is easy to read, and which contains interpretations or suggestions for treatment. 
When the report is merely a simple textual replacement of the coded data, the system may be accepted by users, although the compact form of the coded data may still be preferable for frequent users. When the suggestions for treatment replace a human decision, the hazy boundary between computer as tool and computer as physician is crossed. Other researchers are more direct in their attempt to create systems which simulate human behavior. These researchers may construct natural language front ends to their systems allowing terminal operators to use their own language for operating the computer. These researchers argue that most terminal operators prefer natural language because they are already familiar with it, and that it gives the terminal operator the great- est power and flexibility. After all , they argue, computers should be easy to use with no learning and computers should be designed to participate in dialogs using natural language. These sophisticated systems may use the natural language front ends for question- answering from databases, medical diagnosis, computer- assisted instruction, psychotherapy, complex decision making, or automatic programming. 2. DANGERS OF NATURAL LANGUAGE SYSTEMS When computer systems leave users with the impression that the computer is thinking, making a decision, repre- senting knowledge, maintaining beliefs, or understanding information I begin to worry about the future of com- puter science. I believe that it is counterproductive to work on systems which present the illusion that they are reproducing human capacities. Such an approach can limit the researcher's scope of thinking, may yield an inappropriately complex system, and potentially exaggerates the already present fear of computers in the general population. 2.1 NATURAL LANGUAGE LIMITS THE RESEARCHER'S SCOPE In constructing computer systems which mimic rather than serve people, the developer may miss opportunities for applying the unique and powerful features of a computer: extreme speed, capacity to repeat tedious operations accurately, virtually unlimited storage for data, and distinctive Input/output devices. Although the slow rate of human speech makes menu selection impractical, high speed computer displays make menu selection an appealing alternative. Joysticks, lightpens or the "mouse" are extremely rapid and accurate ways of selec- tin E and moving graphic symbols or text on a display screen. Taking advantage o~ these and other ~umputer- specific techniques will enable designers to create powerful tools without natural language co~mmnds. Building computer systems which behave like people do, is like building a plane to fly by flapping its wings. Once we get past the primitive imitation stage and understand the scientific basis of this new technology (more on how to do this later), the human imitation strategies will be merely museum pieces for the 21st century, Joining the clockwork human imitations of the 18th century. Sooner or later we will have to accept the idea that computers are merely tools with no more intelligence than a v~oden pencil, If researchers can free themselves of the human imitation game and begin to think about using computers for problem solving in novel ways, I believe that there will be an outpouring of dramatic innovation. 139 2.2 NATURAL LANGUAGE YIELDS INAPPROPRIATELY COMPLEX SYSTEMS Constructing computer systems which present the illusion of human capacities may yield inappropriately complex systems. 
Natural language interaction, with its tedious clarification dialog, seems archaic and ponderous when compared with rapid, concise, and precise database manipulation facilities such as Query-by-Example or commercial word processing systems. It is hard to understand why natural language systems seem appealing when contrasted with modern interactive mechanisms like high speed menu selection, light pen movement of icons, or special purpose interfaces which allow the user to directly manipulate their reality.

Natural language systems must be complex enough to cope with user actions stemming from a poor definition of system capabilities. Some users may have unrealistic expectations of what computers can or should do. Rather than asking precise questions of a database system, a user may be tempted to ask how to improve profits, whether a defendant is guilty, or whether a military action should be taken. These questions involve complex ideas, value judgments, and human responsibility for which computers cannot and should not be relied upon in decision making. Secondly, users may waste time and effort in querying the database about data which is not contained in the system. Codd [5] experienced this problem in his RENDEZVOUS system and labeled it "semantic overshoot." In command systems the user may spend excessive time in trying to determine if the system supports the operations they have in mind. Thirdly, the ambiguity of natural language does not facilitate the formation of questions or commands. A precise and concise notation may actually help the user in thinking of relevant questions or effective commands. A small number of well defined operators may be more useful than ill-formed natural language statements, especially to novices. The ambiguity of natural language may also interfere with careful thinking about the data stored in the machine. An understanding of onto/into mappings, one-to-one/one-to-many/many-to-many relationships, set theory, boolean algebra, or predicate calculus and the proper notation may be of great assistance in formulating queries. Mathematicians (and musicians, chemists, knitters, etc.) have long relied on precise concise notations because they help in problem solving and human-to-human communication. Indeed, the syntax of a precise concise query or command language may provide the cues for the semantics of intended operations. This dependence on syntax is strongest for naive users, who can anchor novel semantic concepts to the syntax presented.

2.3 NATURAL LANGUAGE GENERATES MISTRUST, ANGER, FEAR AND ANXIETY

Using computer systems which attempt to behave like humans may be cute the first time they are tried, but the smile is short-lived. The friendly greeting at the start of some computer-assisted instruction systems, computer games, or automated bank tellers quickly becomes an annoyance and, I believe, eventually leads to mistrust and anger. The user of an automated bank teller machine which starts with "Hello, how can I help you?" recognizes the deception and soon begins to wonder how else the bank is trying to deceive them. Customers want simple tools whose range of functions they understand. A more serious problem arises with systems which carry on a complete dialog in natural language and generate the image of a robot. Movie and television versions of such computers produce anxiety, alienation, and fear of computers taking over.
In the long run the public attitude to~rds computers will govern the future of acceptable ~asearch, develop- ment, and applications. Destruction of computer systems in the United States during the turbulent 1960's, and in France Just recently (News~ek April 28, 1980 -- An underground group, the Committee for the Liquidation or Deterrence of Computers claimed responsibility for bomb- ing Transportation Ministry computers and declared: '~e are computer workers and therefore well placed to know the present and future dangers of computer systems. They are used to classify, control and to repress.") reveal the anger and fear that many people associate with computers. The movie producers take their ideas from research projects and the public reacts to com~wn experiences with computers. Distortions or exagger- ations may be made, but there is a legitimate basis to the public's anxiety. One more note of concern before making some positive and constructive suggestions. It has often disturbed me that researchers in natural language usually build sys- tems for someone else to use. If the idea is so good, why don't researchers build natural language systema for their own use. Why not entrust their taxes, home management, calendar/schedule, medical care, etc. to an expert system~ Why not encode their knowledge about their own dlslpline in a knowledge representation fan E- uage? If such systems are truly effective then the developers should be rushing to apply them to their own needs and further their professional career, financial status, or personal needs. 3. HUMAN FACTORS EXPERIMENTATI~ FOR DEVELOPING INTER- ACTIVE SYSTEMS My work with psychologically oriented experiments over the past seven years has made a strong believer in the utility of empirical testing [6]. I believe that we can get past the my-language-is-better-than-your-language or my-system-is-~ore-natural-and-easler-to-use stage of computer science to a more rigorous and disciplined approach. Subjective, introspective Judgments based on experience will always be necessary sources for new ideas, but controlled experiments can be extremely valu- able in demonstrating the effectiveness of novel inter- active mechaniem~ programming language control struc- tures, or new text editing features. Experimental tes- ting requires careful state~ent of a hypothesis, choice of independent and dependent variables, selection and assignment of subjects, administration to minimize bias, statistical analysis~ and asaesment of the results. This approach can reveal mistaken assumptions, demon- strate generality, show the relatlvestrength of effects, and provide evilence for a theory of human behavior which may suggest new research. A natural strategy for evaluating the effectiveness of natural language facilities would be to define a task, such as retrieval of ship convoy information or solu- tion of a computational problem, then provide subjects with either a natural language facility or an alterna- tive mode such as a query language, simple programming language, set of co~ands, menu selection, etc. Train- ing provided with the natural language system or the alternative would be a critical issue, itself the sub- ject of study. Subjects would perform the task and be evaluated on the basis of accuracy or speed. In my own experience, I prefer to provide a fixed time interval and measure performance. Since inter-subject vari- ability in task performance tends to be very large, within subjects (also called repeated measures) designs are effective. 
Subjects perform the task with each mode, and the statistical tests compare scores in one mode against the other. To account for learning effects (the expectation that the second time the task is performed the subject does better), half the subjects begin with natural language, while half the subjects begin with the alternative mode. This experimental design strategy is known as counterbalanced ordering.
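A minimal sketch of this design is given below; the Python code, the subject counts, and the scores are illustrative assumptions rather than materials from any of the experiments cited here. It assigns half the subjects to each ordering and then compares the two modes with a paired test over the within-subject score differences.

    # Illustrative sketch of a counterbalanced, within-subjects comparison of a
    # natural-language mode against a structured query-language mode.  Subject
    # counts and scores are invented for the example.
    import random
    from statistics import mean, stdev

    subjects = [f"S{i:02d}" for i in range(1, 21)]
    random.shuffle(subjects)
    half = len(subjects) // 2
    orderings = {s: ("natural", "query") for s in subjects[:half]}
    orderings.update({s: ("query", "natural") for s in subjects[half:]})  # counterbalanced

    def run_task(subject, mode):
        """Stand-in for administering the task; returns a performance score."""
        return random.gauss(70 if mode == "query" else 65, 10)  # fabricated effect

    scores = {s: {mode: run_task(s, mode) for mode in orderings[s]} for s in subjects}

    # Paired comparison: within-subject differences, query minus natural.
    diffs = [scores[s]["query"] - scores[s]["natural"] for s in subjects]
    t = mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)  # paired t statistic, df = n - 1
    print(f"mean difference = {mean(diffs):.1f}, paired t({len(diffs) - 1}) = {t:.2f}")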
If the goal is to provide an appealing interface for airline reservations, hank transactions, database retrieval, or mathematical problem solving, then the 8) first step should be a detailed review of the possible data structures, control structures, problem decomposi- tions, cognitive models that the user might apply, repre- sentation strategies, and Importance of background know- ledge. At the same time there should be a careful 9) analysis of how the computer system can provide assis- tance by representing and displaying data in a useful format, providing guidance in choosing alternative strategies, offering effective messages at each stage 10) (feedback on failures and successes), recording the history and current status of the problem solving process, and giving the user comprehensible and powerful co,ands. ll) Experimental research will be helpful in guiding devel- opers of interactive systems and in evaluating the impor- tance of the user's familiarity with: i) the problem domain 2) the data in the computer 3) the available commands 4) typing skills 5) use of tools such as text editors 6) terminal hardware such as light pens, special purpose keyboards or unusual display mechanisms 7) background knowledge such as boolean algebra, predicate calculus, set theory, etc. 8) the specific system - what kind of experience effect or learning curve is there Experiments are useful because of their precision, narrow focus, and replicability. Each experiment may be a minor contribution, but, with all its weaknesses, it is more reliable than the anecdotal reports from biased sources. Each experimental result, like a small tile in a mosaic which has a clear shape and color, adds to our image of human performance in the use of computer systems. REFERENCES Cheriton, D.R., Man,Machine interface design for time-sharlng systems, proceedings of the ACM National Conference, (1976), 362-380. Gaines, Brian R. and Peter V. Facey, Some experience in interactive system development and application, Prpceedln~s of the IEEE, 63, 6, (June 1975), 894-911. Pew, R.W. and A.M. Rollins, Dialog Specification Procedure, Bolt Beranek and Newman, Report No. 3129, Revised Edition, Cambridge, Massachusetts, 02138, (1975). Hansen, W.J., User engineering principles for inter- active systems, Proceedings of the Fall Joint Q~mputer Conference, 39, AFIPS Press, Montvale, New Jersey, (1971), 523-532. Codd, E.F., HOW ABOUT RECENTLY? (English dialogue with relational databases using RENDEZVOUS Version i), In B. Shneiderman (Ed.), Databases: 7mproving Usabilltv and Responsiveness, Academic Press, New York, (1978), 3-28. Shneiderman, B., Software Psychology: ~uman Factors in Computer and Information Systems, Winthrop Pub- lishers, Cambridge, HA (1980). Small, D.W. and L.J. Weldon, The efficiency of retrieving information from computers using natural and structured query languages, Science Applications Incorporated. Report SAI-78-655-WA, Arlington,Va., (Sept. 1977). Woods, W.A., Progress in natural language understan- ding - an application to lunar geology, Proceedings of the National Computer Conference, 42, AFIPS Press, Montvale, New Jersey, (1973), 441-450. Shneiderman, B., Improving the human factors aspect of database interactions, ACM Transactions on Data- b~se Systems, 3, 4, (December 1978a), 417-~39. Damerau, Fred J., The Transformational Query Answering System (TQA) operational statistics - 1978, IBM T.J. Watson Research Center RC 7739, Yorktown Heights, N.Y. (June 1979). Hershman, R.L., R.T. Kelly and H.G. 
Miller, User performance with a natural language query system for command control, Navy Personnel Research and Development Center Technical Report 79-7, San Diego, CA, (1979).
NATURAL LANGUAGE AND COMPUTER INTERFACE DESIGN
MURRAY TUROFF
DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
NEW JERSEY INSTITUTE OF TECHNOLOGY

SOME ICONOCLASTIC ASSERTIONS

Considering the problems we have in communicating with other humans using natural language, it is not clear that we want to recreate these problems in dealing with the computer. While there is some evidence that natural language is useful in communications among humans, there is also considerable evidence that it is neither perfect nor ideal. Natural language is wordy (redundant) and imprecise. Most human groups who have a need to communicate quickly and accurately tend to develop a rather well specified subset of natural language that is highly coded and precise in nature. Pilots and police are good examples of this. Even working groups within a field or discipline tend over time to develop a jargon that minimizes the effort of communication and clarifies shared precise meanings.

It is not clear that there is any group of humans or applications for computers that would be better served in the long run by natural language interfaces. One could provide such an interface for the purpose of acclimating a group or individual to a computer or information system environment, but over the long run it would be highly inefficient for a human to continue to use such an interface and would in a real sense be a disservice to the user. Those retrieval systems that allow natural-language-like queries tend also to allow the user to discover with practice the embedded interface that allows very terse and concise requests to be made of the system. Take the general example of COBOL, which was designed as a language to input business oriented programs into a computer that could be understood by non-computer types. We find that if we don't demand that programmers follow certain standards to make this possible, they will make their programs cryptic to the point where they are not understandable to anyone but other programmers.

It is interesting to observe that successful interfaces between persons and machines tend to be based upon one or the other of the two extreme choices one can make in designing a language. One is small, well defined vocabularies from which one can build rather long and complex expressions; the other is large vocabularies with short expressions. In some sense, "natural language" is the result of a compromise between these two opposing extremes. If we had some better understanding of the cognitive dynamics that shape and evolve natural language, perhaps the one useful natural language interface that might be developed would allow individuals and groups to shape their own personalized interface to a computer or information system. I am quite sure that given such a powerful capability, what a group of users would end up with would be very far from a natural language.

The argument is sometimes made that a natural language interface might be useful for those who are linguistically disadvantaged. It might allow very young children or deaf persons to better utilize the computer. I see it as immoral to provide a natural language introduction to computers to people who might mistakenly come to think of a computer as they would another human being. I would much prefer such individuals to be introduced to the computer with an interface that will give them some appreciation for the nature of the machine.
For example, a very simple CAI language called PILOT has been used to teach grammar school children how to write simple lessons for their classmates. The ability of the young children to write simple question answer sequences and then see them executed as if the computer was able to use natural language is, I be- lieve, far more beneficial to the child than giving him canned lessons as his or her first impression of what a computer is like. COMPUTERIZED CONFERENCING Since 1973 at the New Jersey Institute of Technology, we have been developing and evaluating the use of a computer as a direct aid to facilitating human communi- cation. The basic idea is to use the processing and logical capabilities of the computer to aid in the communication and exchange of written text (Hiltz & Turoff, 1978). As part of this program we have been operating the Electronic Information Exchange System (EIES) as a source of field trial data and as a labora- tory for controlled experimentation. Currently, EIES has approximately 600 active users internationally. Our current rate of operation is about 5,000 user hours a month; 8,000 messages, conference c~-,-ents and note- book pages written a month and about 35,000 delivered each month. The average message is about l0 lines of text and the average comment or page is about 20 lines of text. EIES offers the user a complete set of differing inter- faces including menus, commands, self-defined commands and self progra,,m4ng of interfaces for individuals and groups. In addition to the standard message, confer- ence and notebook features, EIES has been designed with the incorporation of a computer language called "INTE~- ACT" that allows special communication structtkres and data structures to be integrated into the application of any specific group. Much of this capability has evolved since 1976 through a numerous set of alterna- tive feedback and evaluation mechanisms. Our users include scientists, engineers, managers, secretaries, teenagers, students, Cerebral Palsy children and 80 year old senior citizens. In all this experience we have yet to hear a direct request or even implicit desire for any sor~ of natural language like interface. To the contrary, we have indirect empirical data that supports the premise that a natural language llke interface would be a disadvantage. For the most • part, the behavior of users on EIES is very sensitive to the degree of experience they have had with the system. However, there is one key parameter which is insensitive to the degree, of experience or the rate of use of the system. This is the number of items a user receives when he or she sits down at the terminal to use the system. This number stays at around 7 plus or minus 2. This is obviously a prescriptive effect the system has on the user as they get into the habit of signing on often enough so that they will not have more than around 7 new text items waiting for them. Users who have been cut off for a long period by a broken terminal or a vacation that denies them access usually give ou~ textual screams of "information over- load" when they find tons of tex~ items waiting for them. In a real sense, it is natural language that is generating this information overload for the user. Another pertinent observation is that each user has three unique identifiers; a full name, a short nicK- name, and a three digit number. Some users always use nicknames and some always use numbers to address their messages but I have yet to encounter anyone who uses full heroes on a regular basis. 
AUTOMATED ABSTRACTING Our observations do point to one application where the ability to process natural language would be a signi- ficemt augmentation of the users cf computerized ccn- ferencing systems. We have a large number of confer- ences that have been going on for over a year and which conta/n thousands of comments. While a person entering such an on-going discussion can, in principle, go back and read the entire transcript or do selective retriev- al on subtopics, it would be far preferable to be able to generate autc~a~ic summaries of such large text files. Even for regular use, the ability to zet auto- mated su.~maries would significantly raise the threshold of information overload and allow users to increase their level of co.-.unication activity and the amount of information with which they can deal meaningfully. The goal of being able to process natural language has always been a bit of a siren's call and has a cerma.in note of purity about it. Those striving for it some- times lose sight of the fact thst an imperfect system may still be quite useful when the perfect system may be unobtainable for some time. One of the important problems well recognized in the computer field is teaching computers how to "forget" or eliminate gar- bage. A less well recognized problem is the one of teaching a computer how to "give up" gracefully and go to a human to get help. In other words, the natural language systems that may have significant payoff in the next decade are those that blend the best talents of man and m~chine into one working unit. In the computerized conferencing environment, this means that a person requesting a s u ~ of s long conference probably knows enough about the substance to guide the computer in the process and to tailor the summary to particular needs and interests. In computerized con- ferencing, the ultimate goal is "collective intelli- gence" and one hopes that the apprcpriate design of a communication structure will allow a group of humans to pool their intelligence into something greater than any of its par~s. If there is an automated or artificial intelligence system, then providing that system as a tool to a group of humans as an integral par~ of their group communication structure, the resulting intelli- gence of the group should be greater than the auto- mated system alone. I believe ,a similar observation holds for the processing of natural language. Too often those working in natural language seem to feel that in- tegrating humans into the analysis process would be an impurity or contaminant. In fact, it may be the higher goal than mere automation. WRITING STYLE A related area with respect to computerized confer- encing is the observation that the style of writing in this medium of co~mluicaticn differs from other uses of the written or spoken version of natural language. First of all, there is a strong tendency to be concise and to outline complex discussions. We can observe this directly in the field trials and also observe that users bring group pressure upon those who star~ to write verbose items or items off the subject of inter- est to the group. The mechanism most commonly em- ployed is the anonymous message. Also, in cur con- trolled experiments on h,..an problem solving (Hiltz, et ai, 1980) we have found that there is no differ- ence in the quality of a solution reached in a face-to- face environment or in a computerized conferencing en- vironment. 
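As an illustration of how automation and human guidance might be blended, the following hypothetical Python sketch produces a keyword-guided extractive summary: the reader supplies a few terms of interest and the program returns the conference comments that score highest against them. It is not a description of EIES, INTERACT, or any existing facility; the scoring rule is a placeholder assumption.

    # Hypothetical sketch of a human-guided extractive summary of a long computer
    # conference: the user supplies guiding keywords and the highest-scoring
    # comments are returned as the "summary".
    import re
    from collections import Counter

    def score(comment, keywords):
        # Count keyword occurrences, normalized by comment length.
        words = Counter(re.findall(r"[a-z']+", comment.lower()))
        return sum(words[k] for k in keywords) / (1 + len(comment.split()))

    def summarize(comments, keywords, n=3):
        keywords = {k.lower() for k in keywords}
        ranked = sorted(comments, key=lambda c: score(c, keywords), reverse=True)
        return ranked[:n]

    if __name__ == "__main__":
        transcript = [
            "C1: I propose we adopt the new voting procedure next month.",
            "C2: Ha! Ha! See you all at the conference dinner.",
            "C3: The voting procedure should be anonymous to avoid group pressure.",
        ]
        for c in summarize(transcript, ["voting", "procedure", "anonymous"], n=2):
            print(c)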
However, we do observe that the computerized conferencing groups use approximately 60% fewer words to do just as good a job as the face-to-face groups. Using Bales Interaction Process Analysis (content analysis), we have also confirmed significant differences in the content of the communications. New users go through a learning period in which it may take 10 to 20 hours to feel comfortable in writing in conferences. We feel this is due to the subconscious recognition that people write differently in this medium than in letters, memos or other forms of the written language. The majority of what a new user writes (95%) will be messages during the first five hours of usage, and it takes about 100 hours until 25% of their writings are in conferences. Also, it is about 100 hours before they feel comfortable in writing larger text items in notebooks. One other aspect of the style change is the incorporation of many non-verbal cues into written form (HA! HA!, for example). One cannot see the nod of the head or hear a gentle laugh.

Another aspect of natural language processing that can aid users in this form of communications is help in overcoming learning curves of this sort, by being able to process the text of a group and provide a comparative analysis to new members of a group so they can more quickly learn the style of the group and feel comfortable in communicating with the group. One can carry this further and ask for abilities to deal in certain levels of emotion, such as: I would like to make my statement sound more assertive.

CONCLUSION

I do believe that this form of human communication will become as widespread and as significant as the phone has been to our society. The future application of natural language processing really lies in this area; however, it is not in the interface to the computer that this future rests, but rather on the ability of this field to provide humans direct aids in processing the text found in their communications. Perhaps the real subject to address is not the one with which this panel was titled but the problems of person-machine interface to natural language processing systems. Or, better yet, person-machine integration within natural language processing. The computer processing of natural language needs to become the tool of the writer, editor, translator and reader. It also has to aid us in improving our ability to communicate. Most organizations are run on communications and the lore that is contained in those communications. With the increasing use of computers as communication devices, the qualitative information upon which we depend becomes as available for processing as the quantitative has been.

References:
THE NETWORK NATION: Human Communication Via Computer, Starr Roxanne Hiltz and Murray Turoff, Addison-Wesley Advanced Book Program, 1978.
FACE TO FACE VS. COMPUTERIZED CONFERENCING: A Controlled Experiment, Hiltz, Johnson, Aronovitch and Turoff, Report of the Computerized Conferencing and Communications Center, NJIT, January 1980.
WORD, PHRASE AND SENTENCE

Rob't F. Simmons
Univ. of Texas, Austin

Among the relative verities of natural language processing are the facts that morphemes and words are primary semantic units, and that their co-occurrence in phrases and sentences provides cues for selecting sense meanings. In this session, two psycholinguistic studies show some aspects of how human subjects process words while reading. A study of medical vocabulary shows that medical words are highly associated by co-occurrence in medical definitions. Another report shows the effectiveness of keyword identification and selection of prominent sentences to organize abstracts for retrieval. A fifth study argues that analysis of existing natural language dictionaries can be expected to contribute importantly to what is needed for text understanding programs. The final study is an experiment with a sentence-level translator applied to a large German-English translation task. These two studies are primarily concerned with analysis of language at the sentence level.

The most glamorous areas of natural language research are at levels above the sentence, concerned with dialogues and discourse, frequently disdainful of morphological or even grammatical analysis in their search for effective structures for understanding what the discourse is about. Scripts, frames, stereotypes, and schemas are all studied in these areas; and often morphological and grammatical analysis is bypassed in favor of keyword scanning to extract some small relevant portion of the text to be bound as values for slots in these larger data forms. This session reminds us that much can be accomplished with vocabulary analysis, with keyword scanning and statistical treatment of text, and with semantic analysis at the single sentence level.

Yet, with regard to most of the topics in this and other sessions, there is a strong sense of deja vu; the earliest natural language studies featured automatic extracting and information retrieval based on statistical, lexical and associational properties of keywords. Mechanical translation of sentences without regard for larger contexts marked the late-sixties high point of MT research amid contemporaneous studies of the English dictionary and thesaurus. Competition among sentence parsing algorithms is an ACL tradition celebrated annually, while psycholinguistics has traditionally applied chronometric studies and recordings of eye movements to measure this or that aspect of human linguistic processing throughout the period.

This is not to suggest that nothing new is happening; actually, the continued emphasis on these topics reveals that, though introduced early, they are still imperfectly understood. I believe science progresses in spirals; initial studies are accomplished and published, supporting more advanced studies that build upon the findings of the earlier work. Superstructures of theory are constructed and more work is undertaken in this framework. Finally the initial studies are lost in years of accumulated literature, and perhaps some of the wildest theories begin to collapse. Then the field may suddenly show renewed interest in its beginnings and repeat its early studies with the added sophistication gained by experience. At this time the line of history spirals past the points it reached on earlier cycles. Hopefully, as in this session, the experience gained between cycles insures an upward progression rather than a profitless loop.
REPRESENTATION OF TEXTS FOR INFORMATION RETRIEVAL

N.J. Belkin, B.G. Michell, and D.G. Kuehner
University of Western Ontario

The representation of whole texts is a major concern of the field known as information retrieval (IR), an important aspect of which might more precisely be called 'document retrieval' (DR). The DR situation, with which we will be concerned, is, in general, the following:

a. A user, recognizing an information need, presents to an IR mechanism (i.e., a collection of texts, with a set of associated activities for representing, storing, matching, etc.) a request, based upon that need, hoping that the mechanism will be able to satisfy that need.

b. The task of the IR mechanism is to present the user with the text(s) that it judges to be most likely to satisfy the user's need, based upon the request.

c. The user examines the text(s) and her/his need is satisfied completely or partially or not at all. The user's judgement as to the contribution of each text in satisfying the need establishes that text's usefulness or relevance to the need.

Several characteristics of the problem which DR attempts to solve make current IR systems rather different from, say, question-answering systems. One is that the needs which people bring to the system require, in general, responses consisting of documents about the topic or problem, rather than specific data, facts, or inferences. Another is that these needs are typically not precisely specifiable, being expressions of an anomaly in the user's state of knowledge. A third is that this is an essentially probabilistic, rather than deterministic, situation, and is likely to remain so. And finally, the corpus of documents in many such systems is in the order of millions (of, say, journal articles or abstracts), and the potential needs are, within rather broad subject constraints, unpredictable. The DR situation thus puts certain constraints upon text representation and relaxes others. The major relaxation is that it may not be necessary in such systems to produce representations which are capable of inference. A constraint, on the other hand, is that it is necessary to have representations which can indicate problems that a user cannot her/himself specify, and a matching system whose strategy is to predict which documents might resolve specific anomalies. This strategy can, however, be based on probability of resolution, rather than certainty. Finally, because of the large amount of data, it is desirable that the representation techniques be reasonably simple computationally.

Appropriate text representations, given these constraints, must necessarily be of whole texts, and probably ought to be themselves whole, unitary structures, rather than lists of atomic elements, each treated separately. They must be capable of representing problems, or needs, as well as expository texts, and they ought to allow for some sort of pattern matching. An obvious general schema within these requirements is a labelled associative network.

Our approach to this general problem is strictly problem-oriented. We begin with a representation scheme which we realize is oversimplified, but which stands within the constraints, and test whether it can be progressively modified in response to observed deficiencies, until either the desired level of performance in solving the problem is reached, or the approach is shown to be unworkable.
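As a rough illustration, not taken from the paper, of how such a labelled associative network might be represented as a data structure, the following sketch stores concepts as nodes and labels each edge with an association-strength class; the strong/medium thresholds are invented for the example.

from dataclasses import dataclass, field

@dataclass
class AssociativeNetwork:
    nodes: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)   # (concept, concept) -> strength label

    def add_association(self, c1, c2, strength, strong=200.0, medium=100.0):
        # Threshold values are illustrative only; the paper gives none.
        label = "strong" if strength >= strong else "medium" if strength >= medium else "weak"
        self.nodes.update((c1, c2))
        self.edges[tuple(sorted((c1, c2)))] = label

For example, net = AssociativeNetwork(); net.add_association("cluster", "document", 250.0) would record a strong association between the two concepts.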
We report here on some linguistically-derived modifications to a very simple, but nevertheless psychologically and linguistically based, word-co-occurrence analysis of text [1] (Figure 1).

POSITION              RANK (r)
Adjacent              1
Same Sentence         2
Adjacent Sentences    3

FOR EACH CO-OCCURRENCE OF EACH WORD PAIR (W1,W2)
    SCORE = 1 + r X 100
FOR ALL CO-OCCURRENCES OF EACH WORD PAIR IN TEXT
    ASSOCIATION STRENGTH = SUM (SCORES)

Figure 1. Word Association Algorithm

The original analysis was applied to two kinds of texts: abstracts of articles representing documents stored by the system, and a set of 'problem statements' representing users' information needs -- their anomalous states of knowledge -- when they approach the system. The analysis produced graph-like structures, or association maps, of the abstracts and problem statements, which were evaluated by the authors of the texts (Figure 2) (Figure 3).

CLUSTERING LARGE FILES OF DOCUMENTS USING THE SINGLE-LINK METHOD

A method for clustering large files of documents using a clustering algorithm which takes O(n**2) operations (single-link) is proposed. This method is tested on a file of 11,613 documents derived from an operational system. One property of the generated cluster hierarchy (hierarchy connection percentage) is examined and it indicates that the hierarchy is similar to those from other test collections. A comparison of clustering times with other methods shows that large files can be clustered by single-link in a time at least comparable to various heuristic algorithms which theoretically require fewer operations.

Figure 2. Sample Abstract Analyzed

In general, the representations were seen as being accurate reflections of the author's state of knowledge or problem; however, the majority of respondents also felt that some concepts were too strongly or weakly connected, and that important concepts were omitted (Table 1). We think that at least some of these problems arise because the algorithm takes no account of discourse structure. But because the evaluations indicated that the algorithm produces reasonable representations, we have decided to amend the analytic structure, rather than abandon it completely.

Figure 3. Association Map for Sample Abstract (showing strong, medium, and weak associations among the terms of the sample abstract).

Table 1. Abstract Representation Evaluation

Question                                      % YES   % NO   % INTERM.   % NO RESP.
1. ACCURATE REFLECTION?                        48.0   29.6     22.0         --        N=30
2. (a) CONCEPTS TOO STRONGLY CONNECTED?        63.0   37.0      --          --        N=30
   (b) CONCEPTS TOO WEAKLY CONNECTED?          96.3    3.7      --          --        N=30
3. CONCEPTS OMITTED?                           88.9   11.1      --          --        N=30
4. IF NO OR 'INTERM' TO NO. 1,
   WAS ABSTRACT ACCURATE?                      64.3    7.1     21.4         7.1       N=14
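The sketch below is one possible rendering of the Figure 1 algorithm in code. The printed scoring formula is ambiguous in the source; the sketch assumes a score of 100/r, so that closer co-occurrences (rank 1 = adjacent words, rank 2 = same sentence, rank 3 = adjacent sentences) contribute more to the association strength. Tokenization is left to the caller.

from collections import defaultdict

def association_strengths(sentences):
    # 'sentences' is a list of sentences, each already tokenized into lower-cased words.
    strength = defaultdict(float)

    def add(w1, w2, rank):
        if w1 != w2:
            strength[frozenset((w1, w2))] += 100.0 / rank   # assumed reading of the formula

    for i, words in enumerate(sentences):
        for w1, w2 in zip(words, words[1:]):            # rank 1: adjacent words
            add(w1, w2, 1)
        for j, w1 in enumerate(words):                   # rank 2: same sentence, non-adjacent
            for w2 in words[j + 2:]:
                add(w1, w2, 2)
        if i + 1 < len(sentences):                       # rank 3: adjacent sentences
            for w1 in words:
                for w2 in sentences[i + 1]:
                    add(w1, w2, 3)
    return strength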
Our current modifications to the analysis consist primarily of methods for translating facts about discourse structure into rough equivalents within the word-co-occurrence paradigm. We choose this strategy, rather than attempting a complete and theoretically adequate discourse analysis, in order to incorporate insights about discourse without violating the cost and volume constraints typical of DR systems. The modifications are designed to recognize such aspects of discourse structure as establishment of topic; setting of context; summarizing; concept foregrounding; and stylistic variation. Textual characteristics which correspond with these aspects include discourse-initial and discourse-final sentences; title words in the text; equivalence relations; and foregrounding devices (Figure 4).

1. Repeat first and last sentences of the text.
   These sentences may include the more important concepts, and thus should be more heavily weighted.
2. Repeat first sentence of paragraph after the last sentence.
   To integrate these sentences more fully into the overall structure.
3. Make the title the first and last sentence of the text, or overweight the score for each co-occurrence containing a title word.
   Concepts in the title are likely to be the most important in the text, yet are unlikely to be used often in the abstract.
4. Hyphenate phrases in the input text (phrases chosen algorithmically) and then either:
   a. use the phrase only as a unit equivalent to a single word in the co-occurrence analysis; or
   b. use any co-occurrence with either member of the phrase as a co-occurrence with the phrase, rather than the individual word.
   This is to control for conceptual units, as opposed to conceptual relations.
5. Modify the original definition of adjacency, which counted stop-list words, to one which ignores stop-list words.
   This is to correct for the distortion caused by the distribution of function words in the recognition of multi-word concepts.

Figure 4. Modifications to Text Analysis Program

We have written alternative systems for each of the proposed modifications. In this experiment the original corpus of thirty abstracts (but not the problem statements) is submitted to all versions of the analysis programs and the results compared to the evaluations of the original analysis and to one another. From the comparisons can be determined: the extent to which discourse theory can be translated into these terms; and the relative effectiveness of the various modifications in improving the original representations.

Reference

1. Belkin, N.J., Brooks, H.M., and Oddy, R.N. 1979. Representation and classification of knowledge and information for use in interactive information retrieval. In Human Aspects of Information Science. Oslo: Norwegian Library School.
Metaphor - A Key to Extensible Semantic Analysis

Jaime G. Carbonell
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

Interpreting metaphors is an integral and inescapable process in human understanding of natural language. This paper discusses a method of analyzing metaphors based on the existence of a small number of generalized metaphor mappings. Each generalized metaphor contains a recognition network, a basic mapping, additional transfer mappings, and an implicit intention component. It is argued that the method reduces metaphor interpretation from a reconstruction to a recognition task. Implications towards automating certain aspects of language learning are also discussed.¹

1. An Opening Argument

A dream of many computational linguists is to produce a natural language analyzer that tries its best to process language that "almost but not quite" corresponds to the system's grammar, dictionary and semantic knowledge base. In addition, some of us envision a language analyzer that improves its performance with experience. To these ends, I developed the project and integrate algorithm, a method of inducing possible meanings of unknown words from context and storing the new information for eventual addition to the dictionary [1]. While useful, this mechanism addresses only one aspect of the larger problem, accruing certain classes of word definitions in the dictionary. In this paper, I focus on the problem of augmenting the power of a semantic knowledge base used for language analysis by means of metaphorical mappings.

The pervasiveness of metaphor in every aspect of human communication has been convincingly demonstrated by Lakoff and Johnson [4], Ortony [6], Hobbs [3] and many others. However, the creation of a process model to encompass metaphor comprehension has not been of central concern.² From a computational standpoint, metaphor has been viewed as an obstacle, to be tolerated at best and ignored at worst. For instance, Wilks [9] gives a few rules on how to relax semantic constraints in order for a parser to process a sentence in spite of the metaphorical usage of a particular word. I submit that it is insufficient merely to tolerate a metaphor. Understanding the metaphors used in language often proves to be a crucial process in establishing complete and accurate interpretations of linguistic utterances.

¹This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

²Hobbs has made an initial stab at this problem, although his central concern appears to be in characterizing and recognizing metaphors in commonly-encountered utterances.

2. Recognition vs. Reconstruction - The Central Issue

There appear to be a small number of general metaphors (on the order of fifty) that pervade commonly spoken English. Many of these were identified and exemplified by Lakoff and Johnson [4]. For instance: more-is-up, less-is-down, and the conduit metaphor - ideas are objects, words are containers, communication consists of putting objects (ideas) into containers (words), sending the containers along a conduit (a communications medium, such as speech, telephone lines, newspapers, letters), whereupon the recipient at the other end of the conduit unpackages the objects from their containers (extracts the ideas from the words). Both of these metaphors apply in the examples discussed below.

The computational significance of the existence of a small set of general metaphors underlies the reasons for my current investigation: the problem of understanding a large class of metaphors may be reduced from a reconstruction to a recognition task. That is, the identification of a metaphorical usage as an instance of one of the general metaphorical mappings is a much more tractable process than reconstructing the conceptual framework from the bottom up each time a new metaphor-instance is encountered. Each of the general metaphors contains not only mappings of the form "X is used to mean Y in context Z", but inference rules to enrich the understanding process by taking advantage of the reasons why the writer may have chosen the particular metaphor (rather than a different metaphor or a literal rendition).

3. Steps Towards Codifying Knowledge of Metaphors

I propose to represent each general metaphor in the following manner:

A Recognition Network contains the information necessary to decide whether or not a linguistic utterance is an instantiation of the general metaphor. In the first-pass implementation I will use a simple discrimination network.

The Basic Mapping establishes those features of the literal input that are directly mapped onto a different meaning by the metaphor. Thus, any upward movement in the more-is-up metaphor is mapped into an increase in some directly quantifiable feature of the part of the input that undergoes the upward movement.

The Implicit-Intention Component encodes the reasons why this metaphor is typically chosen by a writer or speaker. Part of this information becomes an integral portion of the semantic representation of input utterances. For instance, Lakoff identifies many different metaphors for love: love-is-a-journey, love-is-war, love-is-madness, love-is-a-patient, love-is-a-physical-force (e.g., gravity, magnetism). Without belaboring the point, a writer chooses one of these metaphors as a function of the ideas he wants to convey to the reader. If the understander is to reconstruct those ideas, he ought to know why the particular metaphor was chosen. This information is precisely that which the metaphor conveys that is absent from a literal expression of the same concept. (E.g., "John is completely crazy about Mary" vs. "John loves Mary very much". The former implies that John may exhibit impulsive or uncharacteristic behavior, and that his present state of mind may be less permanent than in the latter case. Such information ought to be stored with the love-is-madness metaphor unless the understanding system is sufficiently sophisticated to make these inferences by other means.)

A Transfer Mapping, analogous to Winston's Transfer Frames [10], is a filter that determines which additional parts of the literal input may be mapped onto the conceptual representation, and establishes exactly the transformation that this additional information must undergo. Hence, in "Prices are soaring", we need to use the basic mapping of the more-is-up metaphor to understand that prices are increasing, and we must use the transfer map of the same metaphor to interpret "soar" (= rising high and fast) as large increases that are happening fast.
For this metaphor, altitude descriptors map into corresponding quantity descriptors and rate descriptors remain unchanged. This information is part of the transfer mapping. In general, the default assumption is that all descriptors remain unchanged unless specified otherwise - hence, the frame problem [5] is circumvented.

4. A Glimpse into the Process Model

The information encoded in the general metaphors must be brought to bear in the understanding process. Here, I outline the most direct way to extract maximal utility from the general-metaphor information. Perhaps a more subtle process that integrates metaphor information more closely with other conceptual knowledge is required. An attempt to implement this method in the near future will serve as a pragmatic measure of its soundness. The general process for applying metaphor-mapping knowledge is the following:

1. Attempt to analyze the input utterance in a literal, conventional fashion. If this fails, and the failure is caused by a semantic case-constraint violation, go to the next step. (Otherwise, the failure is probably not due to the presence of a metaphor.)

2. Apply the recognition networks of the generalized metaphors. If one succeeds, then retrieve all the information stored with that metaphorical mapping and go on to the next step. (Otherwise, we have an unknown metaphor or a different failure in the original semantic interpretation. Store this case for future evaluation by the system builder.)

3. Use the basic mapping to establish the semantic framework of the input utterance.

4. Use the transfer mapping to fill the slots of the meaning framework with the entities in the input, transforming them as specified in the transfer map. If any inconsistencies arise in the meaning framework, either the wrong metaphor was chosen, or there is a second metaphor in the input (or the input is meaningless).

5. Integrate into the semantic framework any additional information found in the implicit-intention component that does not contradict existing information.

6. Remember this instantiation of the general metaphor within the scope of the present dialog (or text). It is likely that the same metaphor will be used again with the same transfer mappings present but with additional information conveyed. (Often one participant in a dialog "picks up" the metaphors used by the other participant. Moreover, some metaphors can serve to structure an entire conversation.)

5. Two Examples Brought to Light

Let us see how to apply the metaphor interpretation method to some newspaper headlines that rely on complex metaphors. Consider the following example from the New York Times:

Speculators brace for a crash in the soaring gold market.

Can gold soar? Can a market soar? Certainly not by any literal interpretation. A language interpreter could initiate a complex heuristic search (or simply an exhaustive search) to determine the most likely ways that "soaring" could modify gold or gold markets. For instance, one can conceive of a spreading-activation search starting from the semantic network nodes for "gold market" and "soar" (assuming such nodes exist in the memory) to determine the minimal-path intersections, much like Quillian originally proposed [7]. However, this mindless intersection search is not only extremely inefficient, but will invariably yield wrong answers. (E.g., a gold market ISA market, and a market can sell fireworks that soar through the sky - to suggest a totally spurious connection.)
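As a reference point for the discussion that follows, here is a minimal sketch of the representation of Section 3 and the control loop of Section 4. All class and function names are invented for illustration; the paper reports no implementation, so this is only one way the components could be organized.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class GeneralMetaphor:
    name: str
    recognizes: Callable[[Dict], bool]          # recognition network (here just a predicate)
    basic_mapping: Callable[[Dict], Dict]       # literal features -> metaphorical framework
    transfer_mapping: Callable[[Dict], Dict]    # remaining descriptors, transformed
    implicit_intentions: List[str] = field(default_factory=list)

def interpret(utterance: Dict,
              metaphors: List[GeneralMetaphor],
              literal_parse: Callable[[Dict], Optional[Dict]],
              dialog_context: List[GeneralMetaphor]) -> Optional[Dict]:
    # 1. Attempt a literal, conventional analysis first.
    literal = literal_parse(utterance)
    if literal is not None:
        return literal
    # 2. Apply the recognition networks of the generalized metaphors.
    for m in metaphors:
        if m.recognizes(utterance):
            # 3. The basic mapping establishes the semantic framework.
            frame = m.basic_mapping(utterance)
            # 4. The transfer mapping fills the remaining slots, transforming them.
            frame.update(m.transfer_mapping(utterance))
            # 5. Add non-contradictory implicit-intention information.
            frame.setdefault("intentions", []).extend(m.implicit_intentions)
            # 6. Remember the instantiation for the rest of the dialog or text.
            dialog_context.append(m)
            return frame
    return None   # unknown metaphor, or a different failure in the interpretation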
A system absolutely requires knowledge of the mappings in the more-is-up metaphor to establish the appropriate and only the appropriate connection.

In comparison, consider an application of the general mechanism described in the previous section to the "soaring gold market" example. Upon realizing that a literal interpretation fails, the system can take the most salient semantic features of "soaring" and "gold markets" and apply them to the recognition networks of the general metaphors. Thus, "upward movement" from soaring matches "up" in the more-is-up metaphor, while "increase in value or volume" of "gold markets" matches the "more" side of the metaphor. The recognition of our example as an instance of the general more-is-up metaphor establishes its basic meaning. It is crucial to note that without knowledge that the concept up (or ascents) may map to more (or increases), there appears to be no general tractable mechanism for semantic interpretation of our example.

The transfer map embellishes the original semantic framework of a gold market whose value is increasing. Namely, "soaring" establishes that the increase is rapid and not firmly supported. (A soaring object may come tumbling down -> rapid increases in value may be followed by equally rapid decreases.) Some inferences that are true of things that soar can also transfer: if a soaring object tumbles it may undergo a significant negative state change -> the gold market (and those who ride it) may suffer significant negative state changes. However, physical states map onto financial states.

The less-is-down half of the metaphor is, of course, also useful in this example, as we saw in the preceding discussion. Moreover, this half of the metaphor is crucial to understand the phrase "bracing for a crash". This phrase must pass through the transfer map to make sense in the financial gold market world. In fact, it passes through very easily. Recalling that physical states map to financial states, "bracing" maps from "preparing for an expected sudden physical state change" to "preparing for a sudden financial state change". "Crash" refers directly to the cause of the negative physical state change, and it is mapped onto an analogous cause of the financial state change.

More-is-up, less-is-down is such a ubiquitous metaphor that there are probably no specific intentions conveyed by the writer in his choice of the metaphor (unlike the love-is-madness metaphor). The instantiation of this metaphor should be remembered in interpreting subsequent text. For instance, had our example continued:

Analysts expect gold prices to hit bottom soon, but investors may be in for a harrowing roller-coaster ride.

We would have needed the context of "up means increases in the gold market, and down means decreases in the same market, which can severely affect investors" before we could hope to understand the "roller-coaster ride" as "unpredictable increases and decreases suffered by speculators and investors".

Consider briefly a second example:

Press censorship is a barrier to free communication.

I have used this example before to illustrate the difficulty in interpreting the meaning of the word "barrier". A barrier is a physical object that disenables physical motion through its location (e.g., "The fallen tree is a barrier to traffic"). Previously I proposed a semantic relaxation method to understand an "information transfer" barrier. However, there is a more elegant solution based on the conduit metaphor. The press is a conduit for communication. (Ideas have been packaged into words in newspaper articles and must now be distributed along the mass media conduit.) A barrier can be interpreted as a physical blockage of this conduit, thereby disenabling the dissemination of information as packaged ideas. The benefit of applying the conduit metaphor is that only the original "physical object" meaning of barrier is required by the understanding system. In addition, the retention of the basic meaning of barrier (rather than some vague abstraction thereof) enables a language understander to interpret sentences like "The censorship barriers were lifted by the new regime." Had we relaxed the requirement that a barrier be a physical object, it would be difficult to interpret what it means to "lift" an abstract disenablement entity. On the other hand, the lifting of a physical object implies that its function as a disenabler of physical transfer no longer applies; therefore, the conduit is again open, and free communication can proceed.

In both our examples the interpretation of a metaphor to understand one sentence helped considerably in understanding a subsequent sentence that referred to the metaphorical mapping established earlier. Hence, the significance of metaphor interpretation for understanding coherent text or dialog can hardly be overestimated. Metaphors often span several sentences and may structure the entire text around a particular metaphorical mapping (or a more explicit analogy) that helps convey the writer's central theme or idea. A future area of investigation for this writer will focus on the use of metaphors and analogy to root new ideas on old concepts and thereby convey them in a more natural and comprehensible manner. If metaphors and analogies help humans understand new concepts by relating them to existing knowledge, perhaps metaphors and analogies should also be instrumental in computer models that strive to interpret new conceptual information.

6. Freezing and Packaging Metaphors

We have seen how the recognition of basic general metaphors greatly structures and facilitates the understanding process. However, there are many problems in understanding metaphors and analogies that we have not yet addressed. For instance, we have said little about explicit analogies found in text. I believe the computational process used in understanding analogies to be the same as that used in understanding metaphors. The difference is one of recognition and universality of acceptance in the underlying mappings. That is, an analogy makes the basic mapping explicit (sometimes the additional transfer maps are also detailed), whereas in a metaphor the mapping must be recognized (or reconstructed) by the understander. However, the general metaphor mappings are already known to the understander - he need only recognize them and instantiate them. Analogical mappings are usually new mappings, not necessarily known to the understander. Therefore, such mappings must be spelled out (in establishing the analogy) before they can be used. If a mapping is often used as an analogy it may become an accepted metaphor; the explanatory requirement is suppressed if the speaker believes his listener has become familiar with the mapping.

This suggests one method of learning new metaphors. A mapping abstracted from the interpretation of several analogies can become packaged into a metaphor definition. The corresponding subparts of the analogy will form the transfer map, if they are consistent across the various analogy instances. The recognition network can be formed by noting the specific semantic features whose presence was required each time the analogy was stated and those that were necessarily referred to after the statement of the analogy. The most difficult part to learn is the intentional component. The understander would need to know or have inferred the writer's intentions at the time he expressed the analogy.

Two other issues we have not yet addressed are these: not all metaphors are instantiations of a small set of generalized metaphor mappings, and many metaphors appear to become frozen in the language, either packaged into phrases with fixed meaning (e.g., "prices are going through the roof", an instance of the more-is-up metaphor), or more specialized entities than the generalized mappings, but not as specific as fixed phrases. I set the former issue aside, remarking that if a small set of general constructs can account for the bulk of a complex phenomenon, then they merit an in-depth investigation. Other metaphors may simply be less-often encountered mappings. The latter issue, however, requires further discussion.

I propose that typical instantiations of generalized metaphors be recognized and remembered as part of the metaphor interpretation process. These instantiations will serve to grow a hierarchy of often-encountered metaphorical mappings from the top down. That is, typical specializations of generalized metaphors are stored in a specialization hierarchy (similar to a semantic network, with ISA inheritance pointers to the generalized concept of which they are specializations). These typical instances can in turn spawn more specific instantiations (if encountered with sufficient frequency in the language analysis), and the process can continue until the fixed-phrase level is reached. Clearly, growing all possible specializations of a generalized mapping is prohibitive in space, and the vast majority of the specializations thus generated would never be encountered in processing language. The sparseness of typical instantiations is the key to saving space. Only those instantiations of more general metaphors that are repeatedly encountered are assimilated into the hierarchy. Moreover, the number or frequency of required instances before assimilation takes place is a parameter that can be set according to the requirements of the system builder (or user). In this fashion, commonly-encountered metaphors will be recognized and understood much faster than more obscure instantiations of the general metaphors. It is important to note that creating new instantiations of more general mappings is a much simpler process than generalizing existing concepts. Therefore, this type of specialization-based learning ought to be quite tractable with current technology.

7. Wrapping Up

The ideas described in this paper have not yet been implemented in a functioning computer system. I hope to start incorporating them into the POLITICS parser [2], which is modelled after Riesbeck's rule-based ELI [8]. The philosophy underlying this work is that Computational Linguistics and Artificial Intelligence can take full advantage of - not merely tolerate or circumvent - metaphors used extensively in natural language. In case the reader is still in doubt about the necessity to analyze metaphor as an integral part of any comprehensive natural language system, I point out that there are over 100 metaphors in the above text, not counting the examples. To illustrate further the ubiquity of metaphor and the difficulty we sometimes have in realizing its presence, I note that each section header and the title of this paper contain undeniable metaphors.

8. References

1. Carbonell, J. G., "Towards a Self-Extending Parser," Proceedings of the 17th Meeting of the Association for Computational Linguistics, 1979, pp. 3-7.
2. Carbonell, J. G., "POLITICS: An Experiment in Subjective Understanding and Integrated Reasoning," in Inside Computer Understanding: Five Programs Plus Miniatures, R. C. Schank and C. K. Riesbeck, eds., New Jersey: Erlbaum, 1980.
3. Hobbs, J. R., "Metaphor, Metaphor Schemata, and Selective Inference," Tech. report 204, SRI International, 1979.
4. Lakoff, G. and Johnson, M., Metaphors We Live By, Chicago University Press, 1980.
5. McCarthy, J. and Hayes, P. J., "Some Philosophical Problems from Artificial Intelligence," in Machine Intelligence 6, Meltzer and Michie, eds., Edinburgh University Press, 1969.
6. Ortony, A., "Metaphor," in Theoretical Issues in Reading Comprehension, R. Spiro et al., eds., Hillsdale, NJ: Erlbaum, 1980.
7. Quillian, M. R., "Semantic Memory," in Semantic Information Processing, Minsky, M., ed., MIT Press, 1968.
8. Riesbeck, C. and Schank, R. C., "Comprehension by Computer: Expectation-Based Analysis of Sentences in Context," Tech. report 78, Computer Science Department, Yale University, 1976.
9. Wilks, Y., "Knowledge Structures and Language Boundaries," Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, pp. 151-157.
10. Winston, P., "Learning by Creating and Justifying Transfer Frames," Tech. report AIM-520, AI Laboratory, M.I.T., Jan. 1978.
WORD AND OBJECT IN DISEASE DESCRIPTIONS*

M.S. Blois, D.D. Sherertz, M.S. Tuttle
Section on Medical Information Science
University of California, San Francisco

Experiments were conducted on a book, Current Medical Information and Terminology (AMA, Chicago, 1971, edited by Burgess Gordon, M.D.), which is a compendium of 3262 diseases, each of which is defined by a collection of attributes. The original purpose of the book was to introduce a standard nomenclature of disease names, and the attributes are organized in conventional medical form: a definition consists of a brief description of the relevant symptoms, signs, laboratory findings, and the like. Each disease is, in addition, assigned to one (or at most two) of eleven disease categories which enumerate physiological systems (skin, respiratory, cardiovascular, etc.). While the editorial style of the book is highly telegraphic, with many attributes being expressed as single words, it is nevertheless easily readable (see Figure 1). The vocabulary employed consists of about 19,000 distinct "words" (determined by a lexical definition), roughly divided equally between common English words and medical terms. We measured word frequency by "disease occurrence" (the number of disease definitions in which a given word occurs one or more times). By this measure, only seven words occurred in more than half the disease definitions, and about 40% of the vocabulary occurred in only a single disease definition. (Table 1 lists the words at the top of the frequency list together with the number of occurrences.)

Assisted by the facilities of the UNIX operating system, we created a series of inverted files (from a magnetic tape of the CMIT text), and developed a set of interactive programs to form a word-and-context query system. This system has enabled us to study the problem of inferring term reference in this large sample of text (some 333,000 word occurrences), within the context of diseases. An interesting early result was the ease with which many medical terms could be algorithmically separated from common English words. After adjusting for the fact that some disease categories are larger than others, we defined an entropy-like measure of the distribution of word occurrences over the eleven physiological categories as a measure of category specificity. We reasoned that some medical terms such as 'murmur', while not specific to any particular heart disease, are specific to heart disease generally. This term would not, for example, be used in describing endocrine disorders. Such a word would be expected to occur in category 04 (cardiovascular disease) frequently, and not in the other categories. Such a term would, by our measure, have a low 'entropy'. A common English word like 'of' would be used in the descriptions of all kinds of disease, and would accordingly have a high 'entropy'. Tables 2 and 3 show the top and bottom of the list of all words occurring in two or more diseases sorted by this entropy measure. In these lists, as our hypothesis seems to imply, low 'entropy' corresponds to high 'specificity', and high 'entropy' to low 'specificity'. This separation of medical terms from common English words, by algorithmic means, is facilitated by the context supplied by the notion of 'disease category', and the fact that this was represented in the CMIT text.

* This work was supported in part by grants from The Commonwealth Fund, and from the National Library of Medicine (1 K10 LM00014).
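The paper does not give the exact form of the entropy-like measure; the sketch below is one plausible reading, assuming that raw counts are first adjusted for category size and that natural logarithms are used (giving values in napiers, the unit quoted for the cutoff in the second experiment).

import math

def category_entropy(word_occurrences, category_sizes):
    # word_occurrences: category -> number of definitions in that category containing the word
    # category_sizes:   category -> total number of definitions in that category
    adjusted = {c: word_occurrences.get(c, 0) / category_sizes[c] for c in category_sizes}
    total = sum(adjusted.values())
    if total == 0:
        return 0.0
    probs = [v / total for v in adjusted.values() if v > 0]
    # Natural log, so the result is in napiers (nats); a term concentrated in one
    # category gets a low value, a word spread over all eleven categories a high one.
    return -sum(p * math.log(p) for p in probs)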
Our second experiment investigated the co-occurrence properties of some medical terms. Aware that many medical diagnostic programs have assumed attribute independence, we sought to shed light on the appropriateness of the assumption by evaluating it in terms of word co-occurrence in disease definitions. Since the previously described procedure had given us a means of selecting medical terms from common English words, it was possible to produce lists of 'pure' medical terms. We then wrote a program which formed all pairs of such terms (ignoring order). We defined an 'association measure' (A) which measured the difference between the observed co-occurrences of term-pairs (they could co-occur in any location in the definition and in either order) and the co-occurrences expected from chance alone. Tables 4 and 5 show the top and bottom of a list of all pairs formed from the low entropy terms in the previous experiment. The first 1120 terms were chosen, that is, those having an entropy of 2.0 napiers or less. The pair list was then sorted by this association measure, A.

Word pairs which are found to be highly associated appear to be so for two reasons. The first, which is trivial, is that some word pairs are semantically one word despite their being, lexically, two. Common examples would be 'White House' and 'Hong Kong'; medical examples are 'vital capacity', 'axis deviation', and 'slit lamp'. These could have been avoided algorithmically by not taking adjacent words in forming the term-pairs, without any significant overall effect. The second reason for high frequency word co-occurrence is that both words are causally related through underlying physiological mechanisms. It is these which had the greatest interest for us, and the measure A may be viewed as a measure of the non-independence of the symptoms or signs themselves.

The term pairs which are negatively associated have this property for the same reason. If the two terms are used typically in the descriptions of different diseases, they are less likely to co-occur than by chance. (In a baseball story on the sports page, we would not find 'pass', 'punt', or 'tackle'.) These negatively associated pairs may have value in diagnostic programs for the recognition of two or more diseases in a given patient, a problem not satisfactorily dealt with by even the most sophisticated of current programs.

Finally, an extension of the entropy concept permits one to generate (algorithmically) the vocabularies used by the medical specialties (which correspond to the disease categories represented in CMIT). This is done by assigning terms which occur predominantly in one category to a single vocabulary and then sorting by entropy. Tables 6 and 7 show the vocabularies used in dermatology and gastroenterology (as derived from CMIT). These vocabularies, it will be noted, can be used as 'hit lists' for the purpose of recognizing the content of medical texts.

In summary, we see the ability to differentiate medical terms from common words by context, and the ability to relate the medical words by meaning, as two of the first steps toward text processing algorithms that preserve and can manipulate the semantic content of words in medical texts.

UNIX is a trademark of Bell Laboratories.
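The precise definition of the association measure A is not given in the text; the following sketch is one way it might be computed, assuming observed and expected co-occurrence rates are taken over the set of disease definitions.

def association(pair_cooccurrence, word_occurrence, n_definitions):
    # pair_cooccurrence: number of definitions containing both terms
    # word_occurrence:   (definitions containing term 1, definitions containing term 2)
    n1, n2 = word_occurrence
    observed = pair_cooccurrence / n_definitions
    expected = (n1 / n_definitions) * (n2 / n_definitions)   # independence assumption
    return observed - expected   # positive: associated; negative: avoid each other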
COLORADO TICK FEVER 00 2217
AT  FEVER, MOUNTAIN; FEVER, MOUNTAIN TICK.
ET  VIRUS TRANSMITTED BY TICK DERMACENTOR ANDERSONI.
SM  CHILLS; HEADACHE; PHOTOPHOBIA; BACKACHE; PAIN IN EYE; MYALGIA; ANOREXIA; NAUSEA; VOMITING; PROSTRATION.
SG  SEASONAL, MARCH TO JULY, IN WESTERN UNITED STATES; INCUBATION PERIOD 4-6 DAYS; ONSET ABRUPT; POSSIBLY SLIGHT [...]; SUSTAINED FEVER, 102-104 F OR HIGHER SIGNIFICANT; PULSE RATE INCREASED.
COURSE: IN PREVENTION, REMOVAL OF TICK FROM SKIN; APPLICATIONS TO SKIN OF TURPENTINE, IODINE, ACETONE; REMOVAL OF TICK BY INSERTION OF NEEDLE BETWEEN MOUTH PARTS; ASPIRIN FOR PAIN; ANTIBIOTIC TREATMENT INEFFECTIVE.
CM  ENCEPHALITIS, MENINGITIS ESPECIALLY IN CHILDREN.
LB  WBC DECREASED; MONOCYTOSIS; COMPLEMENT-FIXATION TEST POSITIVE; INJECTION OF SERUM OR CSF KILLING SUCKLING MICE; NEUTRALIZATION OF VIRUS WITH IMMUNE SERUM RESULTING IN SURVIVAL.

Figure 1. Typical disease 'definition' taken from CMIT.

Table 1. The highest frequency words used in CMIT, together with the number of disease definitions in which the word occurs at least once.

Table 2. The lowest 'entropy' words in CMIT, in order of increasing 'entropy'. The entropy is given in the first column; the entries in the next 11 columns are the percent of occurrences in the 11 disease categories (body as a whole, skin, musculo-skeletal, respiratory, cardiovascular, hemic and lymphatic, GI, GU, endocrine, nervous, organs of special sense).

Table 3. The highest 'entropy' words in CMIT. Note that these are common English words.

Table 4. The top of the word-pair list in decreasing order of association value (A).

Table 5. The bottom of the word-pair list, showing the negatively correlating words.

Table 6. A word list generated algorithmically which constitutes a dermatological vocabulary. The disease category 'skin' is represented by the third column.

Table 7. A word list generated algorithmically which constitutes a vocabulary of gastroenterology. The eighth column represents the disease category 'digestive system'.
REQUIREMENTS OF TEXT PROCESSING LEXICONS

Kenneth C. Litkowski
16729 Shea Lane, Gaithersburg, Md. 20760

Five years ago, Dwight Bolinger [1] wrote that efforts to represent meaning had not yet made use of the insights of lexicography. The few substantial efforts, such as those spearheaded by Olney [2,3], Mel'čuk [4], Smith [5], and Simmons [6,7], made some progress, but never came to fruition. Today, lexicography and its products, the dictionaries, remain an untapped resource of uncertain value. Indeed, many who have analyzed the contents of a dictionary have concluded that it is of little value to linguistics or artificial intelligence. Because of the size and complexity of a dictionary, perhaps such a conclusion is inevitable, but I believe it is wrong. To avoid becoming irretrievably lost in the minutiae of a dictionary and to view the real potential of this resource, it is necessary to develop a comprehensive model within which a dictionary's detail can be tied together. When this is done, I believe one can identify the requirements for a semantic representation of an entry in the lexicon to be used in natural language processing systems. I describe herein what I have learned from this type of effort.

I began with the objective of identifying primitive words or concepts by following definitional paths within a dictionary. To search for these, I developed a model of a dictionary using the theory of labeled directed graphs. In this model, a point or node is taken to represent a definition and a line or arc is taken to represent a derivational relationship between definitions. With such a model, I could use theorems of graph theory to predict the existence and form of primitives within the dictionary. This justified continued effort to attempt to find such primitives.

The model showed that the big problem to be overcome in trying to find the primitives is the apparent rampant circularity of defining relationships. To eliminate these apparent vicious circles, it is necessary to make a precise identification of derivational relationships, specifically, to find the specific definition that provides the sense in which its definiendum is used in defining another word. When this is done, the spurious cycles are broken and precise derivational relationships are identified. Although this can be done manually, the sheer bulk of a dictionary requires that it be done with well-defined procedures, i.e. with a syntactic and semantic parser. It is in the attempt to lay out the elements of such a parser that the requirements of semantic representations have emerged.

The parser must first be capable of handling the syntactic complexity of the definitions within a dictionary. This can be done by modifying and adding to existing ATN parsers, based on syntactic patterns present within a dictionary. Incidentally, a dictionary is an excellent large corpus upon which to base such a parser. The parser must go beyond syntactics, i.e., it must be capable of identifying which sense of a word is being used. Rieger [8,9] has argued for the necessity of sense selection or discrimination nets. To develop such a net for each word in the lexicon, I suggest the possibility of using a parser to analyze the definitions of a word and thereby to create a net which will be capable of discriminating among all definitions of a word.
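A minimal sketch of what such a sense-discrimination net might look like for one word follows; the senses, slots, and tests are invented for illustration and are not drawn from any particular dictionary.

    # A sense-discrimination net is a small decision procedure: it inspects
    # contextual slots supplied by the parser and returns one definition.
    # Senses and tests below are hypothetical, not taken from a real dictionary.
    def discriminate_change(context):
        """Select an intransitive sense of 'change' from contextual slots."""
        if context.get("into"):                # "the water changed into ice"
            return "change: pass from one form or state into another"
        if context.get("vehicle"):             # "we changed at the station"
            return "change: transfer from one conveyance to another"
        return "change: become different"      # least-marked default sense

    print(discriminate_change({"subject": "water", "into": "ice"}))
    print(discriminate_change({"subject": "we", "vehicle": "train"}))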
The following requirements must be satisfied by such a parser and its resulting nets. Diagnostic or differentiating components are needed for each definition. Each definition must have a different semantic representation, even though there may be a core meaning for all the definitions of a word. Since the ability to traverse a net successfully depends on the context in which a word is used, each definition, i.e. each semantic representation, must include slots to be filled by that context. The slots will provide a unique context for each sense of a word. Context is what permits disambiguation. Since the search through a net is inherently complex, a definition must drive the parser in the search for context which will fill its slots. These notions are consistent with Rieger's; however, they were identified independently based on my analysis of dictionary definitions. Their viability depends on the ability to describe procedures for developing a parser of this type to generate the desired semantic representations.

As mentioned before, observation of syntactic patterns will lead to an enhancement of syntactic parsing; to a limited extent, the syntactic parser will permit some discrimination, e.g. of transitive and intransitive verbs or verbs which use particles. Further procedures for developing semantic representations are described using the intransitive senses of the verb "change" as examples. Procedures are described for (1) using definitions of prepositions for identifying semantic cases which will operate as slots in the semantic representation, (2) showing how selectional restrictions on what can fill such slots are derived from the definitional matter, and (3) identifying semantic components that are present within a definition. It is pointed out how it will eventually be necessary that these representations be given in terms of primitives. Procedures are described for building discrimination nets from the results of parsing the definitions and for adding to these nets how the parser should be driven. The emphasis of this paper is in describing procedures that have been developed thus far.

Finally, it is shown how these procedures are used to identify explicit derivational relationships present within a dictionary in order to move toward identification of primitives. Such relationships are very similar to the lexical functions used by Mel'čuk, except that in this case both the function and the argument are elements of the lexicon, rather than the argument alone.

It has become clear that semantic representations of definitions in the form described must ultimately constitute the elements out of which semantic representations of multi-sentence texts must be created, perhaps with two foci: (1) describing entities (centered around nouns) and (2) describing events (centered around verbs). If multisentence texts can then be studied empirically, the structure of ordinary discourse will then be based on observations rather than theory. Although this paradigm may seem to be incredibly complex, I believe that it is nothing more than what the lexicons of present AI systems are becoming. I believe that more rapid progress can be made with an explicit effort to exploit and not to duplicate the efforts of lexicographers.
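Returning to the labeled directed graph model introduced at the outset, the sketch below shows how an apparent definitional circle can be exposed once nodes stand for particular senses rather than whole words; the dictionary fragment is invented for illustration, and breaking the circle would then amount to re-examining which sense of each defining word is actually intended.

    # Each node is a particular sense of a word; each edge says "this sense
    # is defined using that sense".  The fragment is invented for illustration.
    defined_using = {
        "large.1": ["big.1"],      # "large: big in size"
        "big.1":   ["great.2"],    # "big: of great size"
        "great.2": ["large.1"],    # "great: large in amount or size"
        "size.1":  ["extent.1"],
    }

    def find_cycle(graph, start, path=()):
        """Depth-first search for a definitional cycle reachable from start."""
        if start in path:
            return path[path.index(start):] + (start,)
        for nxt in graph.get(start, []):
            cycle = find_cycle(graph, nxt, path + (start,))
            if cycle:
                return cycle
        return None

    print(find_cycle(defined_using, "large.1"))
    # ('large.1', 'big.1', 'great.2', 'large.1') -- a spurious circle to break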
REFERENCES

1. Bolinger, D., Aspects of Language, 2nd ed., Harcourt Brace Jovanovich, Inc., New York, 1975, p. 224.
2. Olney, J., C. Revard, and P. Ziff, Toward the Development of Computational Aids for Obtaining a Formal Semantic Description of English, SP-2766/001/00, System Development Corporation, Santa Monica, California, 1 October 1968.
3. Olney, J. and D. Ramsey, "From machine-readable dictionaries to a lexicon tester: Progress, plans, and an offer," Computer Studies in the Humanities and Verbal Behavior, Vol. 3, No. 4, November 1972, pp. 213-220.
4. Mel'čuk, I. A., "A new kind of dictionary and its role as a core component of automatic text processing systems," T.A. Informations, 1978, No. 2, pp. 3-8.
5. Smith, R. N., "Interactive lexicon updating," Computers and the Humanities, Vol. 6, No. 3, January 1972, pp. 137-145.
6. Simmons, R. F. and R. A. Amsler, Modeling Dictionary Data, Computer Science Department, University of Texas, Austin, April 1975.
7. Simmons, R. F. and W. P. Lehmann, A Proposal to Develop a Computational Methodology for Deriving Natural Language Semantic Structures via Analysis of Machine-Readable Dictionaries, University of Texas, Austin, 1976 (research proposal submitted to the National Science Foundation, Sept. 28, 1976).
8. Rieger, C., Viewing Parsing as Word Sense Discrimination, TR-511, Department of Computer Science, University of Maryland, College Park, Maryland, January 1977.
9. Rieger, C. and S. Small, Word Expert Parsing, TR-734, Department of Computer Science, University of Maryland, College Park, Maryland, March 1979.
Chronometric Studies of Lexical Ambiguity Resolution

Mark S. Seidenberg
University of Illinois at Urbana-Champaign
Bolt, Beranek and Newman, Inc.

Michael K. Tanenhaus
Wayne State University

Languages such as English contain a large number of words with multiple meanings. These words are commonly termed "lexical ambiguities", although it is probably more accurate to speak of them as potentially ambiguous. Determining how the contextually appropriate reading of a word is identified presents an important and unavoidable problem for persons developing theories of natural language processing. A large body of psycholinguistic research on ambiguity resolution has failed to yield a consistent set of findings or a general, non-controversial theory. In this paper, we review the results of six experiments which form the basis of a model of ambiguity resolution in context, and at the same time account for some of the contradictions in the existing literature.

This work has three foci. The first is that we consider the lexical structure of words with multiple meanings, that is, relations among the meanings which presumably govern their representation in memory, and their access in context. Second, we attempt to characterize the structure and content of the linguistic context in which an ambiguous word occurs. It is clear that the listener/reader uses context to compute the correct reading of a word; however, contexts provide different types of information which may be utilized in different ways. Third, we consider real-time aspects of ambiguity resolution as it occurs in people, using a methodology that permits us to evaluate successive stages in processing.

Relations among the meanings of ambiguous words vary along several dimensions. The component readings may be semantically related (the senses of GRASP in "to grasp a baseball" and "to grasp an idea") or semantically unrelated (e.g., the meanings of TIRE related to "sleeping" and "wheel"). This dimension underlies the traditional distinction between polysemy and homonymy [Lyons, 1978].(1) The number of component readings also varies. The readings of a word can fall into different grammatical classes (e.g., the "sleep" reading of TIRE is a verb, the "wheel" reading a noun) or the same class (the meanings of STRAW related to "sipping" and "hay" are both nouns). The readings may be used approximately equally often in the language (e.g., WATCH) or they may be of unequal frequency (e.g., PEN, COUNT). Our research is concerned with homonymous words with two common readings of approximately equal frequency.

Contexts provide several different types of information which are utilized in resolving ambiguity.(2) In example [1], the context provides syntactic information that

1. John began to tire.

favors the verb reading of the ambiguous word TIRE, and blocks the alternate noun reading. Syntax can function in this way only for ambiguous words with readings that fall into different grammatical classes. In [2], syntax

2. A doctor removed Henry's damaged organ.

is neutral with respect to the alternate readings of ORGAN (because both are nouns), but a word in the context ("doctor") is highly semantically related to one reading, and thus favors it; the alternate reading is not blocked, but merely implausible in the absence of any further information. The appropriate reading of DECK in [3] is

3. John walked on the deck.

indicated by a different means, which might be termed pragmatic.
The perceiver knows that a person is much more likely to walk on the surface of a ship than on the surface of a pack of playing cards. Other types of contextual information can be brought to bear on ambiguity resolution as well. For example, [4] is disambiguated by exploiting mass noun/count noun information; [5] might be disambiguated by applying knowledge of a stereotypic situation (a script or frame; Schank & Abelson, 1977; Minsky, 1975).

4. Henry wanted a straw.
5. John avoided the check.

Extended contexts frequently contain multiple sources of disambiguating information. Leaving aside vague or misleading cases, it is clear that all of these types of information yield the same outcome, assignment of the contextually-appropriate reading of a word. We sought to determine whether they produced this effect by the same means. Broadly speaking, there are two alternative mechanisms by which the correct reading could be assigned. The perceiver could access all of the common readings of the word in parallel, and use contextual information to perform a subsequent selection. This alternative--traditionally termed "multiple access"--holds that while the perceiver usually is aware of only a single reading, there is transient subconscious activation of others as well. The other possibility--"selective access"--is that contexts restrict lexical access to the single appropriate reading. Both of these alternatives have been supported by experimental evidence.

The time course of processing events is evaluated by using a variable stimulus onset asynchrony (SOA) priming methodology [Warren, 1977]. The subject hears a sentence that is followed by the presentation of a single word on a screen. Latency to read the word aloud is used to diagnose the availability of alternate word senses. For example, sentence [1] above favors the verb reading of TIRE. If subjects access that meaning, they should be faster to read the semantically-related target word SLEEP than when it follows an unambiguous, unrelated control sentence (e.g., "John began to leave"). However, if subjects also access the contextually inappropriate reading of TIRE, faster naming latencies will be observed for a word related to it (e.g., WHEEL) as well. Similar considerations hold for [6], in which the context favors the noun reading of TIRE.

6. John bought the tire.

Changes in the availability of alternate readings over time can be tracked by presenting targets at a variable time interval following the ambiguous word or its control. In our experiments, targets appeared at a delay of either 0 or 200 msec.

The first experiment (Tanenhaus, Leiman and Seidenberg, 1979) examined the resolution of noun-verb (N-V) ambiguities such as TIRE in syntactic frames such as those in [1] and [6]. The results were clear: at 0 msec SOA, targets related to both the appropriate and inappropriate readings showed faster naming latencies than controls. With a 200 msec delay interposed between ambiguous word and target, however, only targets related to the contextually appropriate reading showed facilitation. The results indicated that syntactic information in the context did not restrict lexical access to a single reading, but instead permitted a rapid selection between alternatives.
This occurred despite the fact that the context made it impossible to derive a coherent interpretation of the utterance using the alternate reading.(3)

Seidenberg, Tanenhaus and Leiman [1980] found largely the same pattern of results with noun-noun (N-N) ambiguities such as ORGAN or STRAW and contexts such as [7], which were neutral with respect to alternate readings of the ambiguous word.

7. John removed the organ.

At 0 msec SOA, targets related to both readings showed facilitation, as might be expected since the context did not favor either one. At 200 msec SOA, however, facilitation occurred on approximately half the trials, which would result if listeners had retained only one reading of the ambiguous word on each trial.(4) The pattern of results was similar to that in the Tanenhaus et al. (1979) study of syntactic contexts: multiple access, followed by availability of only one reading 200 msec later. However, the underlying processes were quite different. In the syntactic frames study, listeners accessed multiple readings and used the context to select the appropriate one. In the Seidenberg et al. (1980) study, listeners accessed multiple readings but the context could not be used to perform a selection. They nonetheless assigned a default value within 200 msec. The results suggest that ambiguity resolution is subject not only to constraints imposed by the nature of the context, but also to limitations of time. Subjects avoid carrying multiple readings longer than 200 msec even when contexts do not unambiguously isolate one. The experiment was designed so that at the moment the ambiguous word occurred, they had no reason to believe that disambiguating information would not be forthcoming. Under this circumstance, they might have been expected to retain multiple meanings. Instead, subjects assigned their best guess, risking the possibility that subsequent re-processing would be necessary. It appears that reprocessing imposes less of a burden on the processing system than that associated with retaining multiple readings over time.

In another experiment, Seidenberg et al. (1980) examined the effects of biasing semantic information on N-N ambiguities in contexts such as [8].

8. The farmer removed the straw.

As in [2], the context contains a word semantically-related to one meaning of the ambiguous word; syntactic information is neutral. These contexts produced selective access: for each item, only the target related to the contextually-appropriate reading of the ambiguous word showed facilitation; the target related to the inappropriate reading showed naming latencies comparable to those in the unrelated control. These outcomes held at both SOAs. Although N-N ambiguities produced multiple access in the previous experiment with neutral contexts, the biasing contextual information in this experiment affected the initial access of meaning. We suggested such contexts prime one reading of the ambiguous word, in the sense of Collins and Loftus (1975), Meyer and Schvaneveldt (1975), Warren (1977) and others. The readings of an ambiguous word are assumed to be coded in memory in terms of relative activation levels which reflect frequency and recency of use. A word or phrase semantically-related to one reading produces a transient increase in its activation level, possibly through a spreading activation process (Collins & Loftus, 1975).
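Schematically, the priming account amounts to something like the following sketch; the activation values and the size of the boost are invented, and only the ordering behavior is meant to be illustrated.

    # Readings of an ambiguous word carry activation levels reflecting
    # frequency and recency; a related prior word ("farmer") boosts one
    # reading, so it is evaluated first.  All numbers are illustrative.
    readings = {"straw/hay": 0.50, "straw/sipping": 0.50}   # equal base rates

    def prime(readings, reading, boost=0.3):
        """Transient boost from a semantically related context word."""
        readings = dict(readings)
        readings[reading] += boost
        return readings

    def access_order(readings):
        """Readings are evaluated in decreasing order of activation."""
        return sorted(readings, key=readings.get, reverse=True)

    primed = prime(readings, "straw/hay")      # context: "The farmer ..."
    print(access_order(primed))    # ['straw/hay', 'straw/sipping']
    print(access_order(readings))  # tie: both accessed, then one selected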
The readings are accessed in order of relative activation; the primed reading is accessed first, and assigned on-line.(5)

As noted above, N-N ambiguities can be resolved by using other types of information, e.g. pragmatic, mass noun/count noun, etc. These differ from the priming contexts used in the previous experiment because they do not contain any words or phrases semantically or associatively related to a reading of the ambiguous word. In this way they are comparable to the syntactic contexts of the first experiment. The fourth experiment compared the use of non-priming contextual information in the resolution of N-N and N-V ambiguities. Again the variable SOA methodology was used, with targets appearing at 0 and 200 msec delays. The results in both the N-N and N-V conditions replicated those of our first experiment, showing multiple access at 0 msec, followed by availability of only a single reading 200 msec later.

The experiments to this point can be summarized as follows. There appear to be two classes of contexts that have very different effects on ambiguity resolution. Priming contexts contain words or phrases semantically or associatively related to one reading of an ambiguous word. They increase the activation level of the reading before it is encountered through a non-directed, automatic process. In this way, they can alter the order in which readings are evaluated. These effects are intra-lexical (Forster, 1979), solely due to interconnections among nodes in semantic memory. Non-priming contexts include various types of information--syntactic, pragmatic, and others--which require access of grammatical knowledge and knowledge of the world. The word recognition process yields one or more readings of the ambiguous word to be evaluated against the demands imposed by these contexts. The number of readings accessed and the order in which they are evaluated depends upon their relative activation levels, which may be altered by priming.

In experiment five, we tested an implication of the priming hypothesis. Recall that N-V ambiguities yield multiple access, as do N-N ambiguities, except when the latter occur in priming contexts. Clearly, this suggests that N-V ambiguities might also produce selective access if the context contained a priming word or phrase, as in [9].

9. The nearsighted timekeeper dropped his watch.

Thus, we compared the processing of N-N and N-V ambiguities in priming contexts. The N-N results replicated those of the Seidenberg et al. (1980) experiment, selective access. The noun-verb conditions, however, continued to show multiple access. Because the result was unexpected, we undertook a replication; it too showed this pattern.

The results of this series of experiments are summarized in Table I. We found no evidence that listeners could use their knowledge of a language and knowledge of the world to restrict access to a single reading, at least for the class of ambiguous words with two common readings. Although these types of information can facilitate the immediate processing of a word (as demonstrated by Marslen-Wilson and Tyler, 1980), they do not influence the activation of word senses. It was suggested that the latter could be affected only by priming; however, the status of this hypothesis is in doubt. Twice we observed selective access for N-N ambiguities in priming contexts; twice we failed to obtain selective access with N-V ambiguities in similar contexts.
This forces us to conclude that priming affects nouns differently than verbs, and strongly suggests that theories of lexical memory and recognition must begin to take into account the syntactic functions of words.

Table I

     Type of Context       Type of Ambiguous Word    Outcome
1,3. syntactic             N-V                       multiple ---> selection
2.   neutral               N-N                       multiple ---> selection
3,5. priming               N-N                       selective access
4.   non-priming bias      N-N                       multiple ---> selection
5,6. priming               N-V                       multiple ---> selection

References

Collins, A.M. and Loftus, E.F. A spreading-activation theory of semantic processing. Psychological Review, 1975, 82, 407-428.
Forster, K.I. Levels of processing and the structure of the language processor. In W.E. Cooper and E.C.T. Walker (eds.), Sentence processing: Studies presented to Merrill Garrett. LEA, 1979.
Lyons, J. Semantics. Cambridge University Press, 1978.
Marslen-Wilson, W.D. and Tyler, L.K. The temporal structure of spoken language understanding. Cognition, 1980, 8, 1-71.
Meyer, D. and Schvaneveldt, R. Meaning, memory, structure, and mental processes. In C.N. Cofer (ed.), The structure of human memory. Freeman, 1975.
Minsky, M. A framework for representing knowledge. In P. Winston (ed.), The psychology of computer vision. McGraw-Hill, 1975.
Schank, R. and Abelson, R. Scripts, plans, goals and understanding. LEA, 1977.
Seidenberg, M., Tanenhaus, M. and Leiman, J. The time course of lexical ambiguity resolution in context. Center for the Study of Reading Tech Report #164, 1980.
Tanenhaus, M., Leiman, J. and Seidenberg, M. Evidence for multiple stages in the processing of ambiguous words in syntactic contexts. J. Verbal Learning and Verbal Behavior, 1979, 18, 427-440.
Warren, R. Time and the spread of activation in memory. J. Experimental Psychology: Human Learning and Memory, 1977, 3, 458-466.

Footnotes

This research was supported by the National Institute of Education under Contract No. US-NIE-C-400-76-0116 to the Center for the Study of Reading, and by a Wayne State U. research development award.

1. Of course, a word can have semantically-distinct readings that are themselves polysemous.
2. These distinctions among types of context are not intended to prejudge any theoretical issues, only to facilitate exploratory research.
3. It should be noted that a large number of sentences were utilized, and that precautions were taken to ensure that the experimental procedure itself would not induce subjects to access meanings they would otherwise ignore.
4. For details, see the cited reference. Essentially, the experiment included control conditions which provided estimates of the amount of facilitation that would occur if either both readings or no readings were accessed on every trial. At 200 msec SOA, the amount of facilitation was almost exactly halfway between these two figures, suggesting that only one reading was available.
5. The data are unclear as to whether activation of the alternate reading is entirely suppressed, or merely delayed.
Real Reading Behavior Robert Thibadeau, Marcel Just, and Patricia Carpenter Carnegie-Mellon University Pittsburgh, PA 15213 Abstract The most obvious observable activities that accompany reading are the eye fixations on various parts of the text. Our laboratory has now developed the technology for automatically measuring and recording the sequence and duration of eye fixations that readers make in a fairly natural reading situation. This paper reports on research in progress to use our observations of this real reading behavior to construct computational models of the cognitive processes involved in natural reading. In the first part of this paper we consider some constraints placed on models of human language comprehension imposed by the eye fixation data. In the second part we propose a particular model whose processing time on each word of the text is proportional to human readers' fixation durations.t Some Observations The reason that eye fixation data provide a rich base for a theoretical model of language processing is that readers' pauses on various words of a text are distinctly non-uniform. Some words are looked at very briefly, while others are gazed at for one or two seconds. The longer pauses are associated with a need for more computation [2]. The span of apprehension is relatively small, so that at a normal reading distance a reader cannot extract the meaning of words that are in peripheral vision [6]. This means that a person can read only what he looks at, and for scientific texts read normally by college students, this involves looking at almost every word. Furthermore, the longer pauses can occur immediately on the word that triggers the additional computation [4]. Thus it is possible to infer the degree of computational load at each point in the text. The starting point for the computer model was the analysis of the eye fixations of 14 Carnegie-Mellon undergraduates reading 15 passages (each about 140 words long) taken from the science and technology sections of Newsweek and Time magazines (see the Appendix for a sample passage). The mean fixation duration on each word (or on larger, clause-like sectors) of the text were analyzed in a multiple regression analysis in which the independent variables were the structural prcperties of the texts that were believed to affect the fixation durations. The results showed that fixation durations were influenced by several levels of processing, such as the word level (longer, less frequent 1This research was supported in part by grants from the Alfred P. Sloan Foundation. the National Institute of Education (G-79-0119) and the National institute of Mental Health (MH-29617) words take longer to encode and lexically access), and the text level (more important parts of the text, like topics or definitions take longer to process than less important parts). This analysis generated a verbal description of a model of the reading process that is consistent with the observed fixation durations. The details of the data, analysis, and model are reported elsewhere [5]. Some of the most intriguing aspects of the eye-fixation data concern trends that we have failed to find. Trends within noun phrases and verb phrases seem notable by their absence. Most approaches to sentence comprehension suggest that when the head noun of a noun phrase is reached, a great deal of processing is necessary to aggregate the meanings of the various modifiers. But this is not the case. 
While determiners and some prepositions are looked at more briefly, adjectives, noun-classifiers, and head nouns receive approximately the same gaze durations. (These results assume that word length effects on gaze duration have been covaried out). Verb phrases, with the exception of modals, show a similar flat distribution. It is also notable that verbs are not gazed at longer than nouns, as might be expected. Such results pose an interesting problem for a system which not only recognizes words, but also provides for their interpretation. Anotl"ler interesting result is the failure to find any associations with length of sentences (a rough measure of their complexity) or ordinal word position within sentences (a rough measure of amount of processing). That is to say, whether or not word function, character-length or syllables, etc., are controlled, there are no systematic trends associated with ordinal word position or sentence length. There is an added gaze duration associated with punctuation marks. Periods add about 73 milliseconds, and other punctuation (including commas, quotes, etc.) add about 43 milliseconds each above what can be accounted for by character-length or other covariates. The Framework The strategy for making sense of these and other similar observations is to develop a computational framework in which they can be understood. That framework must be capable of performing such diverse functions as word recognition, semantic and syntactic analysis, and text analysis. Furthermore, it must permit the ready interaction among processes implied by these functions. The framework we have implemented to accomplish these ambitious goals is a production system fashioned closely after Anderson's ACT system [1]. Such a production system is composed of three parts, a collection of productions comprising knowledge about how to carry out processes, a declarative knowledge base against which those processes are carried out, and an interpreter which provides for the actual behavior of the productions. 159 A production written for such a system is a condition-action pair, conceptually an 'if-then' concept, where the condition is assessed against a dynamically changing declarative know~edge base. If a condition is assessed as true (or matcheLl), the action of the production is taken to alter the knowJedge base. Altering the knowledge base leads to further potential for a match, so the production system will naturally cycle from match to match until no further productions can be matched. The sense in which processing is ¢otemporaneous is that all productions in memory are assessed for a match of their conditions before an action is taken, and then all productions whose. conditions succeed take action before the match proceeds again. This cycling, behavior provides a reference in establishing the basic synchrony of the system. The mapping from the behavior of the model to observed word gaze durations is on the basis of the number of match (or so-called recognition.act) cycles which the model requires to process each word. The physical implementation of the model is equipped at present to handle a dependency analysis of sentences of the sort of complexity we find in our texts (see the Appendix). There is nothing new to this analysis, and so it is not presented here. The implementation also exihibits some elementary word recognition, in that, for a few words, it contains productions recognizing letter configurations and shape parameters. 
The experience is, however, that the conventions which we have introduced provide a thoroughly 'debugged' initial framework. It is to the details of that framework that we now turn. Much of our initial effort in formulating such a parallel processing system has been concerned with making each processing cycle as efficient as possible with respect to the processing demands involved in reading to comprehend. To do this we allow that any number of productions can fire on e single cycle, each production contributing to the search for an interpretation of what is seen. Thus, for instance, the system may be actively working on a variety of processing tasks, and some may reach conclusion before others. The importance of concurrent processing is precisely that the reader may develop htPotheses in actively pursuing one processing avenue (such as syntax), and these hypotheses may influence other decisions (such as semantics) even before the former hypotheses are decided. Furthermore, hypotheses may be developed as expectations about words not yet seen, and these too should affect how those words are in fact seen. In effect, much of our initial effort has been in formulating how processes can interact in a collaborative effort to provide an interpretation. Collaboration in single recognition-act cycles is possible with carefully thought out conventions about the representation of knowledge in the knowledge base. As in ACT, every knowledge base element in our model is assigned a real.number activation level, which in the present system is regard d as a confidence value of sorts. Unlike ACT, the activation levels in our model are permitted to be positive or negative in sign, with the interpretation that a negative sign indicates the element is believed to be untrue. Coupled with this property of knowledge base elements are threshold properties associated with elements in the condition side of the productions. A threshold may be positive or negative, indicating a query about whether something is true or false with some confidence. As the system is used, there is a conventional threshold value above which knowledge is susceptible to being evaluated for inconsistency or contradiction, and below which knowledge is treated as hypothetical, in the examples below, this conventional threshold value is assumed. The condition elements can also include absence tests, so the system is capable of responding on the basis of the absence of an element at a desired confidence. Productions can also pick out knowledge that is only hypothetical using this device. But more importantly confidence in a result represents a manner in which productions can collaborate. The confidence values on knowledge base elements are manipulated using a special action called <SPEW>. Basically, this action takes the confidence in one knowledge-base element and adds a linearly weighted function of that confidence to other knowledge.base elements, If any such knowledge-base element is not, in fact, in the knowledge base, it will be added. The elements themselves can be regarded as propositions in a propositional network. Thus, one can view the function of productions as maintaining and constructing coherent fields of propositions about the text. Network representations of knowledge provide a natural indexing scheme, but to be practical on a computer such an indexing scheme needs augmentation. The indexing scheme must do several things at once. 
It must discriminate among the same objects used in different contexts, and it must also help resolve the difficult problem of two or more productions trying to build, or comment upon, the same knowledge structure concurrently. To give something of the flavor of the indexing scheme we have chosen: where other natural language understanding systems may create a token JOHN24 for a type JOHN, the number 24 in the present system does not simply distinquish this 'John' from others, it also places him within a dimensional space. In the exarnpies to follow the token numbers are generated for the sequential gazes, 1 for the first and so on. An obvious use of such a scheme is that several productions may establish expectations regarding the next word. If some subset of the productions establish the same expectation, then without matching they will create the properly distinguished tokens for that expectation. Consider one production written for this system: ((!WORD :IS !DETERMINER) --> (.'PEW) from (WORD :IS OETERMINER) to (WORD :HAS (<TOK> DETERMINER-TAIL)) (DETERMINER-TAIL :HAS (<TOK> WORD-EXPECTATION)) (WORD-EXPECTATION :IS (<NEXTTOK) WORD))) This production might be paraphrased as "lf you see some particular word (say WORD12) is some particular determiner (say THE), then from the confidence you have that that word is that determiner, assign (arithmetic ADD) that much 160 confidence to the ideas that that word a) needs to modify something (has a determiner-tail, DETERMINER-TAIL12), b) the modification itself has a word expectation (say WORD-EXPECTATION12), c) which is to be fulfilled by the next word seen (WORD13). The indexing scheme is manifest in the use of the functions <TOK> and <NEXTTOIC,. It is important to be able to predict what a token will be, since in a parallel architecture several productions may be collaborating in building this expectation structure. Type-token and category membership searches are usually carried out within the interpreter itself. The exclamation point prefix on subelements, as in !WORD above, causes the matcher to perform an ISA search for candidate tokens which the decision The matcher is itself dynamically altered with respect to ISA knowledge as new tokens are created, and by explicit ISA knowledge manipulation on the part of specialized productions. This has certain computational advantages in keeping the match process efficient 2. The use of very many tokens, as implied by the above example, is important if one wants to explore the coordination of different processes in a parallel architecture. The next production would fire if the word following the determiner were an adjective: ((IWORD :HAS IDETERHINER-TAIL) (DETERMINER-TAIL :HAS IWORO-EXPECTATION) (WORD-EXPECTATION :IS IIWORD) (%WORD :IS IADJECTIVE) --> (<SPEW> from (WORD-EXPECTATION :IS IWORO) to (WORD-EXPECTATION :IS 1WORD) -I (WORD-EXPECTATION :IS (<NEXTTOK> WORD))) The number prefixes, as in "1WORD", are tokens local to the production that just serve to indicate different knowledge base tokens are sought not what their knowledge base tokens should be. This production says that if a word has a determiner tail expecting some word and that word has been observed to be an adjective, then bring the confidence at least to 0.0 that the word-expectation is the adjective, and have confidence that the word-expectation is the word following the adjective. The <SPEW> action of this production makes use of a weighting scheme which serves to alter the control of processing. 
In this framework any knowledge base element can serve as both a bit of knowledge (a link) and as a control value. The .1 number causes the confidence in the source of the spew to be multiplied by -1 before it is added to the target, (WORD-EXPECTATION :IS 1WORD). If this were the only production requesting this switch of confidence, the effect would be the effective deletion of this bit of knowledge from the knowledge base. If other productions were also switching this confidence, the system would wind up being confident that this word-expectation association is indeed not the case (explicitly false). Processes in Sequence The primary interest in formulating a model is in having as much 'processing' or decision-making as possible in a single recognition-act cycle. The general idea is that an average gaze duration of 250 milliseconds on a word represents few such cycles. The ability of the model to predict gaze duration, then, depends upon the sequential constraints holding among the collection of productions brought to the interpretation process. The 'determiner tail' productions illustrated above represent a processing sequence in most contexts; the second cannot fire until the first has deposited its contribution in the knowledge base. This is not a necessary feature of these two productions, since other productions can collaborate to cause the simultaneous matching of the two productions illustrated (we assume these are easy to imagine). However, one may note that since the 'determiner tail' productions are distributed over several word gazes, they at most contribute one processing cycle to the gaze on any word (besides the determiner). Thus, sequencing over words may not be expensive. Let us consider where it is computationally expensive. In contrast to rvghtward looking activities, the presence of strong sequencing constraints among productions is potentially costly in leftward looking activities. To illustrate how such costs might be reduced, consider a production with a fairly low threshold which assigns a need to find an agent for an action-process verb, and another production which says that if one has an animate noun preceding an action-process verb and that animate noun is the only possible candidate, then that animate noun is the agent. These two productions are likely to fire simultaneously if the latter one fires at all. They both create a need to find an agent and satisfy that need at once. They do not set word • expectations simply because the look-back at previous text tries to be efficient with regard to sequencing constraints. Had the need not been immediately fulfilled, it would serve as a promotion of other productions which might find other ways of fulfilling it, or of reinterpreting the use of the action-process verb (even questioning the ISA inference). It should be noted that the natural device for keeping these further productions in sequence from firing is having them make the absence test, as in ((!WORD :IS IACTION-PROCESS-VERB) (WORD :HAS ]AGENT) (<ABSENT> (AGENT :IS ]ANYTHING)) --> ...suggest this might be an imperative, passive, el] ipse, etc.) The interpretation of the production is that "if you know with confidence that you have an action-process-verb and it needs an agent, but you don't know what that agent is, then suggest various reasons why you might not know with appropriately low confidence in them." 2The matcher is a slightly altered form of the RETE Matcher written by Forgy for OPS4 [3]. 
161 Coordination of Mind and Eye The basic method of coordinating eye and mind in the present model is to make getting the next word contingent upon having completed the processing on the present one. In a production system architecture, this simply means that the match fails to turn up any productions whose conditions match to the knowledge base. Since elements in the knowledge base specify the need-to-know as wel: as what is known, the use of absence tests in the conditions of productions can 'shut off' further processing when it is deemed to be completed, or simply deemed to be unnecessary. It is by this device that the system demonstrates more processing on important information, 'shutting off' extended processing on that which is deemed, for any number of reasons, as less important. The model must, in addition to various ideas about coordination, be also capable of representing various ideas about dis-coordination. One potential instance of this in the present data is that while virtually every word is fixated upon at least once (recall that several fixations can count toward a single gaze), there are some words, AND, OR, BUT, A, THE, TO, and OF, with some likelihood of not being gazed upon at all (this accounts in some part for the fairly low average gaze duration on these words). This can be considered a dis-coordination of sorts, since to be this selective the reader must have some reasonable strong hypotheses about the words in question (the knowledge sources for these hypOtheses are potentially quite numerous, including the possibility of knowledge from peripheral vision). A production to implement this dis-coordination in the present system is: ((!WORD :IS IFREQUENT-FUNCTION-WORD) --> (<SPEW> ((<OLOTOK) GOAL) :IS INTERPRET-WORD) ((<OLDTOK> GOAL) :IS INTERPRET-WORD) -1 ((<OLDTOK> GOAL) :IS GAZE-NEXT-WORD))) This production detects the presence of one of the above function words, and immediately shifts the present goal of interpreting a word (if it happens to be that) to gazing upon the word following the function word. It is important to recognize that the eye need not be on the function word for the system to know with reasonable confidence that the next word is a function word. The indexing scheme permits the system to form hypotheses strong enough to create effective reality (e.g., peripheral information and expectations can add up to the conclusion that the word is a function word). A second important property is that the system does not get confused with such skips, or in the usual case with such brief stays on these words. The reason again is because each word becomes a sort of local demon inheriting demon-like properties from general production, and by interaction with other knowledge base elements through the system of productions. Summary This report has provided a brief description on work in progress to capture our observations of reading eye-movements in computational models of the reading process. We have illustrated some of the main properties of reading eye-movements and some of the main issues to arise. We have also illustrated within an implemented system how these issues might be addressed and explored in order to gain insight into more precise queries about real reading behavior. Appendix An example text: Flywheels are one of the oldest mechanical devices known to man. Every internal-combustion engine contains a small flywheel that converts the jerky motion of the piston into the smooth flow of energy that powers the drive shaft. 
The greater the mass of a flywheel and the faster it spins, the more energy can be stored in it. But its maximum spinning speed is limited by the strength of the material it is made from. If it spins too fast for its mass, any flywheel will fly apart. One type of flywheel consists of round sandwiches of fiberglas and rubber providing the maximum possible storage of energy when the wheel is confined in a small space as in an automobile. Another type, the "superflywheel", consists of a series of rimless spokes. This flywheel stores the maximum energy when space is unlimited. References 1. Anderson, J. R. Language, memory, and thought. Lawrence Erlbaum Associates, 1976. 2. Carpenter, P. A., & Just, M. A. Reading comprehension . as the eyes see it. In Cognitive Processes in Comprehension, M. A. Just & P. A. Carpenter, Eds., Lawrence Erlbaum Associates, 1977. 3. Forgy, C. L. OPS4 User's Manual Department of Computer Science, Carnegie-Mellon University, 1979. 4. Just, M. A., & Carpenter, P. A. Inference processes during reading: reflections from eye.fixations. In Eye Movements, ~d the Higher Psychological Functions, J. W. Senders, D. F. Fisher, and R. A. Monty, Eds., Lawrence Erlbaum Associates, 1978. 5. Just, M. A., & Carpenter, P. A. "A theo~ of reading: from eye fixations to comprehension." Psychological Review (In Press). 6. McConkie, G. W., & Rayner, K. "The span of the effective stimulus during a fixation in reading." Perception and Psychophysics 17 (1975). 162
An Experiment in Machine Translation INTRODUCTION Although funding for Machine Translation (MT) research virtua11y ended in the U.S. with the release of the ALPAC report [1] in 1966, there has been a continuing interest in this field. Rapid evolution of science and technology, coupled with increased world-wlde exposure of their products, demands more and more speed in trans- lation (e.g., in the case of operation and maintenance manuals). Unfortunately, this rapid evolution has made translation an even more d i f f i c u l t and time-consuming task. The large surplus of (presumably qualified) translators cited by the ALPAC report simply does not exist in many technical areas; the current state of affairs Finds instead a critical shortage. In addition, the proportion of scientific and technical literature • published in English is diminishing. As qualified human translators become more scarce and costs of human trans- lation rise while costs of purchase and operation of powerful computer systems fall, there must come a time when, if MT is feasible at all, it will be cost-effec- tive. It is appropriate, then, to investigate the state-of-the-art in MT with respect to two central ques- tions: is high-quality MT Feaslble (and in what sense); and if feasible, is it cost-effectlve? Thls paper reports the results of an experiment in hlghly automatic, high-quality machine translation. The LRC's MT system, METAL (for Mechanical Translation and Analysis of Languages), is an advanced, 'third genera- tion' system incorporating proven Natural Language Pro- cessing (NLP) techniques, both syntactic and semantic, and stands at the forefront of the MT research Frontier. In the experiment, METAL was employed in the translation of a 50-page taxt From German into Engilsh in order to determine whether the system as it exists can be effec- tively applied to current transiatlon needs, effective- ness to be determined by some objective measure of the quality and cost of machine (i.e., METAL) vs. human translation. EARLIER MT EFFORTS Since Bruderer [2] has recently published a complete survey of MT projects, and Hutchins [3] reviews the most important developments through 1977, we will men- tion only a few of the major efforts. The f i r s t popular demonstration of the possibilities in MT was provided by IBM and the Georgetown University group in 19S4 [4]. With a vocabulary of about 250 words and a grammar com- prising some six rules in what was called an "operation- al syntax", the system demonstrated some rudimentary capability in Russian to English translation. This in- stlgated a massive government funding effort over the next decade, and some 20 million dollars was invested in 17 different projects. By 1965 the Mark II Russian- English system [5] had been installed at the Foreign Technology Division of the U.S. Air Force at Wright- Patterson AFB, and the Georgetown system had been deli- vered to the Atomic Energy Commission at Oak Ridge Na- tlonal Laboratory and to EURATOM in Ispra, Italy. Re- viewing MT systems such as these at the request of the National Science Foundation, the Automatic Language Pro- cessing Advisory Committee (ALPAC) reported in 1966 that MT was slower, less accurate, and more expensive than human translation; further, that there was no predlcta- ble prospect of improvement in MT capability. Though strongly and perhaps justifiably criticized [6], this report soon resulted in the virtual elimination of MT funding in the U.S., and a sizeable reduction in fo~ign efforts as well. 
Jonathan Slocum I.inguistics Research Center The University of Texas Peter Toma, who was responsible for the installations at Oak Ridge and Ispra cited above, soon began private ef- forts at improving the Georgetown system. This culmina- ted in SYSTRAN [7], which replaced Mark II at WPAFB in 1970 and the Georgetown system at EURATOM in 1976. SYSTRAN was also used by NASA during the Apollo-Soyuz mission. In 1976 the Commission of European Communities adopted SYSTRAN for English to French translation; how- ever, an evaluation of its translations by the EEC post- editors in Brussels found the results to be far from sat- isfactory: "all the revisors had exhausted their patience before the end" [8]. Despite its generally low transla- tion quality, SYSTRAN is the most widely used MT system to date. its chief commercial competitor, LOGOS [9], is another example of a "direct" MT system. As in SYSTRAN, the analysis and synthesis components are separated but the linguistic procedures are designed for a specific source-language (SL) and target-language (TL) pair. In an evaluation by Slnaiko and Klare [10], LOGOS dld not fare well. 8ruderer [2] reports further development for translation into Russian, and experiments on French, Ger- man and Spanish, but provides few details. In an effort to correct the obvious inadequacies of these and other 'first generation' systems, which essen- tialiy translate word-for-word with no attempt at a uni- fied analysis at the sentence level, and which were de- veloped ab initio for a specific SL-TL pair, researchers began to investigate methods of analyzing sentences into structures from which in theory any TL could be genera- ted. There are two broad types of such 'second genera- tion' systems. One type produces analyses in a "neutral" structure, or 'interlingua~; the other produces SL syn- tactic structures which are transformed via a process called 'transfer' into a syntactic structure for the TL sentence. One example of the former approach is the system produced by the Centre d'~tudes pour la Traduc- tlon Automatique (CETA) at the University of Grenoble [11]. During the period from 1961 to 1971 this group developed a Russian to French MT system. An evaluation at the end of that period revealed that only 42~ of the sentences were being correctly translated. Some fail- ures were due to errors in the input, but the majority were due to programming errors, failure to produce a lexical analysis of a word or a syntactic analysis of a sentence, inefficiencies in the parser causing it to ap- ply too many rules, etc. The Traduction Automatique de l'Universit~ de MontrEal (TAUM) project [12] is an exam- ple of the transfer approach. There are flve grammars called "q-systems" to effect morphological and syntactic analysis of English, then transfer, then syntactic and morphological synthesis of French. Each such stage con- sists of a series of generalized tree-structure transfoP mations. The significance of TAUM is that, of the sec- ond-generation systems, it is the nearest to operational implementation: it is to be applied to the translation of aircraft maintenance manuals. in 1978 the European project EUROTRA was initiated, ap- parently adopting the newer Grenoble system ARIANE, in order to produce an advanced, second generation MT sys- tem for the eventual replacement of the f i r s t genera- tion system (SYSTRAN) currently in use [8]. 
The Greno- ble group, now tit]ed Groupe d'Etudes pour la Traduc- tion Automatlque (GETA), abando'ed their earlier ap- proach in light of its deficiencies and produced a sys- tem to translate in six passes: morphological analysis, multi-level (syntactic and semantic) analysis, lexical transfer, structural transfer, syntactic generation, and morphological generation. Multi-level analysis, struc- tural transfer, and syntactic generation are all effec- ted ~.a a general tree-to-tree transducer program, some- 163 what less powerfu; but merhaps more efficient than the Q- systems transduce r in TAUM; the other components have Spe- cial programs suited to their function. The emphasis in this project is apparently twofold: increased efficiency and reliability through adoption of components with the minimum necessary power, and decreased sensitivity to fai)ure in individual stages through the expedient of in- suring that every component has some output, even if such output is nothing more than the original input. If we have interpreted the VauQuois mimeo [8] properly, this must be ~elargest and most comprehensive MT project yet undertaken. DESCRIPTION OF METAL There are two different classifications of "generations" in MT systems. The first posits three generations (cur- rently) according to the following criteria: (I) trans- lation is word-for-word, with no significant syntactic analysis; (2) translation proceeds after obtaining a complete syntactic analysis of an input, with no signifi- cant semantic analysis; (3) translation proceeds after obtaining a complete semantic analysis of an input. The definition of 'third generation' says nothing about ex- tra-sentential information, and one might posit a 'fourth generation' which employs such information. The other classification proceeds according to the following criteria: (l) translation proceeds "directly" from the SL to the TL, and the SL is analyzed only to the minimum extent necessary to generate TL equivalents; (2) trans- lation proceeds "indirectly" by deriving a more-or-less standard analysis of the input, independent of the TL in- volved (but not necessarily of the SL), and then genera- ting TL output based on the standard analysis. Within this definition of 'second generation', as noted above, there are the 'transfer' vs. 'interlingua' approaches. We prefer to characterize METAL as a 'third generation' system according to the first classification given above because this makes it clear that METAL derives a sub- stantial semantic analysis, whereas the second definition of 'second generation' does not necessarily imply that semantic analysis of any kind is performed. METAL comprises two distinct components: the linguistic and the computational. The linguistic component con- sists of lexicons, phrase-structure grammar rules, case frames and transformations. SL and TL lexical entries include feature-value pairs encoding syntactic and sem- antic information such as grammatical category, inflec- tional class, semantic type, and case information (see Figure ]). Transfer lexical entries indicate how and under what conditions words or idioms in one language translate into words or idioms in another (see Figure 2). The phrase-structure rules may be augmented with procedures to determine their application via feature/ value tests, to add or copy features and values in the interpretation being constructed, to invoke case-frame routines, and to invoke specific or general transforma- tions. 
Case-frame routines determine semantic case relationships between verbs and nouns on the basis of syntactic and semantic features, and produce their output in the form of propositional trees. Transformations are pattern-pairs that specify old and new tree structures; when invoked, a transformation attempts to match its "old" side against the current structural descriptor, and if successful converts it into one matching its "new" side. In the process, features and values may be tested and set arbitrarily. This provides the grammar with virtually unlimited context sensitivity, but since no interpretation can affect the operation of the parser it still enjoys the advantages of context-free operation. Finally, there is a method for scoring, or rating, interpretations; this allows the system to determine the "best" interpretation for translation, and also provides another mechanism for rejecting the application of any rule, viz., a score below cutoff. Figure 3 illustrates a typical grammar rule.

    (IN    CAT (PREP)  ALO (in) (i)  GC (A) (D)  CN (S) (M)  PLC (WI) (WF)
           RO (TMP TOP LOC DST TAR EQU))
    (IN    CAT (PREP)  ALO (in)    RO (DST LOC)  PO (PRE)  ON (VO))
    (INTO  CAT (PREP)  ALO (into)  RO (DST LOC)  PO (PRE)  ON (VO))

Figure 1. German Preposition "in" and Two Corresponding English Prepositions

    CAT - grammatical category; PREP - preposition
    ALO - allomorph; 'in' - the string "in"; 'i' - (as in the string "im")
    GC - grammatical case; A - accusative; D - dative
    CN - contracted [with]; S - (as in "ins"); M - (as in "im")
    PLC - placement; WI - word-initial; WF - word-final
    RO - semantic role; TMP - temporal; TOP - topic; LOC - locative; DST - destination; TAR - target; EQU - equative
    PO - position; PRE - pre-posed
    ON - onset sound; VO - vocalic

    (INTO (IN) PREP (GC A))
    (IN (IN) PREP (GC D))

Figure 2. Transfer Entries for the German Preposition "in"

The German PREPosition "in" (in parentheses) may translate into the English PREPosition "into" if the Grammatical Case of the German PP is 'Accusative'; it may translate into the English PREPosition "in" if the Grammatical Case of the German PP is 'Dative'. Arbitrary numbers and types of conditions may be specified in transfer entries.

The computational component, written in LISP, consists of the parser, the case-frame routines, the transformation pattern-matcher, the transfer program, the generator, and other procedures needed to drive and support the translation process. The parser is a highly efficient implementation of the Cocke-Kasami-Younger algorithm.
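To give a concrete feel for how conditional transfer entries like those of Figure 2 might be applied, the following is a minimal sketch in Python (METAL itself is written in LISP); the list layout, function name, and dictionary encoding of feature-value pairs are illustrative assumptions rather than METAL's actual representation.

    # Illustrative sketch of conditional transfer-lexicon lookup,
    # modeled on Figure 2 (German "in" -> English "in"/"into").
    # Each entry: (target word, source word, category, feature conditions).
    TRANSFER_LEXICON = [
        ("into", "in", "PREP", {"GC": "A"}),   # accusative PP -> "into"
        ("in",   "in", "PREP", {"GC": "D"}),   # dative PP     -> "in"
    ]

    def transfer(source_word, category, features):
        """Return the first target word whose conditions all hold for the
        feature-value pairs of the analyzed source-language phrase."""
        for target, source, cat, conditions in TRANSFER_LEXICON:
            if source == source_word and cat == category and \
               all(features.get(f) == v for f, v in conditions.items()):
                return target
        return None  # no entry applies

    # Example: a German PP analyzed with accusative vs. dative case.
    print(transfer("in", "PREP", {"GC": "A", "RO": "DST"}))   # -> "into"
    print(transfer("in", "PREP", {"GC": "D", "RO": "LOC"}))   # -> "in"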
Metaphor Comprehension - A special mode of language processing? (Extended Abstract) Jon M. Slack Open University, U.K. The paper addresses the question of whether a complete language understanding system requires special procedures in order to comprehend metaphorical language. To answer this question it is necessary to delineate the processes involved in metaphor comprehension and to det- ermine the uniqueness of such processes in the context of existing language understanding systems. I. DEFINING THE PROBLEM For the purposes of this paper a metaphor is defined as a linguistic input containing elements which result in a mismatch at the semantic level which the language understanding system attempts to interpret. For example, the sentence Billboards are warts on the landscape . 1 results in a semantic mismatch represented by the sentence Billboards are not a member of the category warts .... 2 which is encountered when the underlying concep- tual.structure is built for the sentence. However, the mismatch need not be restricted to elements of the sa/ne sentence. Even though the elements of a particular sentence may not result in a semantic mismatch, the whole sent- ence itself can be metaphorical with respect to its linguistic context. In this case the seman- tic mismatch is encountered when the interpret- er has difficulty connecting the conceptual representation of the sentence to the existing structure representing the context. Metaphor comprehension is defined as the proc- ess of mapping input metaphors onto connected conceptual representations. A model of metaphor comprehension is defined as the set of processes required to interpret elements of linguistic input which semantically mismatch their linguistic context. The mis- matching elements are referred to as the vehicle of the metaphor, while the linguistic context is known as the topic of the metaphor. 2. BUILDING A MODEL OF METAPHOR COMPREHENSION For the purposes of describing the model only sentences which contain both the vehicle and topic elements of the metaphor are considered. However, the comprehension processes described should apply to all classes of metaphor, includ- ing copula form (e.g. The world is a chessboard~ and verb-based metaphors (e.g. His words were dried by the sun). The model to be outlined is based on the analysis of a large sample of paraphrases produced by twenty subjects for a collection of over fifty metaphors. It is necessary to distinguish between differ- ent types of comprehension task. Although the comprehension processes are generally applic- able, whether the metaphorical input is in isolation or part of a larger linguistic input greatly influences the choice of comprehension strategy. In llne with most language comprehension sys- tems, the goal of metaphor comprehension is to build an integrated conceptual structure but from mismatching components. The model is based on the notion that the existence of the semantic mismatch makes it necessary to build a conceptual structure which embodies all the salient knowledge structures associated with the vehicle element. This process is referred to as vehicle expansion. Briefly, the comprehension process proceeds as follows: The elements of the sentence are mapped onto their dictionary entries and in attempting to build a conceptual structure a semantic restriction violation (semantic mismatch) is encountered - the subject, or object, or both do not conform to the semantic restrictions associated with the verb. 
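As a rough illustration of this mismatch-detection step, the sketch below checks a verb's subject and object against the semantic restrictions recorded in a toy dictionary; the lexicon contents and all names are invented for exposition and are not taken from the model described in the paper.

    # Hypothetical mini-lexicon: each verb lists the semantic types it
    # expects for its subject and object; each noun lists its types.
    VERBS = {"dry": {"subject": {"heat-source"}, "object": {"physical-object"}}}
    NOUNS = {"sun": {"heat-source", "physical-object"},
             "word": {"abstraction"}}

    def semantic_mismatch(subject, verb, obj):
        """Return the list of restriction violations (the 'semantic mismatch')
        encountered while building the conceptual structure."""
        violations = []
        expected = VERBS[verb]
        if not (NOUNS[subject] & expected["subject"]):
            violations.append(("subject", subject, verb))
        if obj is not None and not (NOUNS[obj] & expected["object"]):
            violations.append(("object", obj, verb))
        return violations

    # "His words were dried by the sun": the deep object "word" violates
    # the verb's restriction, signalling a metaphor to be interpreted.
    print(semantic_mismatch("sun", "dry", "word"))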
The decision of which element represents the topic and which the vehicle of the metaphor is usually deter- mined by the extra-sentential context. However, for isolated metaphors the vehicle is usually the element which has the minimal match with the other elements. The knowledge structures which constitute the vehicle element concept are temporarily built into the conceptual structure representing the meaning of the sentence. This generated conceptual structure (C~S) is also connected to the topic knowledge structures (TKS). Comprehension is complete when the GCS and TKS are integrated into a single conceptual structure. Vehicle expansion generates a relatively large conceptual struct- ure which is pruned by means of the processes which integrate the GCS and TKS. For example, those knowledge structures which gave rise to the semantic mismatch are deleted from the conceptual structure because they contradict what is known about the topic. Metaphor comprehension involves interpreting the GCS in terms of the TKS. This is achieved by means of a matching process and the comprehension strategies described below." The matching process searches for matching know- ledge structures in the GCS and TKS. Various forms of the process are considered - fuzzy matching procedures, a spreading activation process. The outcome of the process is typic- ally, (a) an element of the GCS matches a low- saliency TKS element, (b) an element of the GCS contradicts a TKS element, or (c) a GCS element matches nor contradicts any TKS element. Out- come (b) prunes the GCS; outcomes (a) and (c) connect it to the TKS. 23 In addition, certain input metaphors require other comprehension strategies to be invoked, such as context construction or recursive meta- phor interpretation. Context construction - in some cases, although the matching procedure has constructed a number of important connections between the GCS and the TKS, it is necessary to search for addition- al knowledge structures which represent context information which was missing in the original input. The goal of the context construction procedure is to provide a context information in which the GCS-TKS connections are fully interpreted, that is, more fully integrated. The paper discusses the conditions necessary for the strategy to be employed. Recursive metaphor interpretation - many of the elements of the GCS are themselves metap~s in that they contain implicit semantic mis- matches. These metaphoric elements are inter- preted as metaphor inputs thereby making the comprehension process recursive. Due to the limited processing capacity and memory constraints of language understanding systems, the metaphor comprehension process requires a complex control structure. This control structure governs the use of the comp- rehension strategies and orders the vehicle expansion and matching processes. 3. COMPARISON WITH OTHER THE?RIES The paper compares the model of metaphor comp- rehension with established theories of metaphor within linguistics and psychology. 4. CONCLUSIONS The final part of the paper examines the relationship between metaphor comprehension and existing language comprehension systems. The processes which constitute the model of metaphor comprehension are common to many language understanding systems (Schank, Wilks, L)IR group, etc.). The comprehension of metaphor does not require a special set of processes to be developed, although the comprehension strategies may be specific to the comprehension task. 
Rather, a metaphor input forces the comprehension system to invoke a complex control structure to cope with the larger and richer knowledge domains which have to be handled. The main conclusion of the paper is that the notions of processing capacity, memory constraints and control structure are the most salient constructs in simulating metaphor comprehension.
Interactive Discourse: Influence of Problem Context
Panel Chair's Introduction
Barbara Grosz
SRI International

The purpose of the special parasession on "Interactive Man/Machine Discourse" is to discuss some critical issues in the design of (computer-based) interactive natural language processing systems. This panel will be addressing the question of how the purpose of the interaction, or "problem context", affects what is said and how it is interpreted. Each of the panel members brings a different orientation toward the study of language to this question. My hope is that looking at the question from these different perspectives will expose issues critical to the study of language in general, and to the construction of computer systems that can communicate with people in particular. Of course, the issue of the influence of "problem context" is separable from the issue of how one might get a computer system to take into account the effects of this context (and, yes, even whether that is possible). My hope is that those on the panel who are concerned with the construction of computer-based natural language processing systems will address some of the issues of "how" and that all of the panelists will consider the prior questions of what effects there are and what general principles underlie how the "problem context" influences a dialogue.

There are two separate aspects to the "problem context" that influence the participants' expectations and hence their utterances: (1) the function of the discourse, and (2) the domain of discourse.

Function: This aspect of the problem context concerns why the speaker and hearer are communicating and their relative roles in the communication. Casual conversations, classroom discussions, task-oriented dialogues, and stories have very different functions. Although it is most reasonable to consider computer systems as participating in a restricted kind of dialogue (namely, a dialogue which arises from aiding a person in the solution of some problem), it is still clear that such systems may assume different roles, e.g., that of an expert (user is an apprentice), tutor (student), or supplier of information (e.g., from a large data base). Each of the different functions results in different kinds of goals (e.g., teaching requires a different kind of informing than simple question answering) and each of the different roles will create different expectations on the part of the user and different needs in terms of the kinds of information the system has about the user.

Domain: This aspect concerns what a speaker is talking about, the subject matter of the discourse. The structure of the information being discussed has an effect on the language (cf. Chafe's "The Flow of Language and the Flow of Thought", Linde's work on apartment descriptions and planning, my work on focusing in task-oriented dialogues).

Both of these aspects of "problem context" have global effects on what gets discussed and in what "units", and local effects on how speakers express the information they convey. Clearly the two aspects interact. For example, what a speaker chooses to discuss next depends both on why he is telling the hearer and on the information itself and what it is related to.

Some questions to consider:

In what ways are the effects of problem context manifest in individual utterances and larger discourse units?

How do people's "conversational styles" differ?

The above discussion of "function" gave several examples. There is no taxonomy of function (as I've used the word). How might such a taxonomy be constructed and used? What kinds of expectations are set up by different kinds of functions? What assumptions about the knowledge, beliefs, and goals that are shared by the participants are made by the different functions? How do the constraints from function interact with those of domain?

What kinds of "tools" are useful for examining such issues? (e.g., what kinds of analysis of data can be done)?

What happens when expectations generated by problem context (either function or domain) are violated?
SHOULD COMPUTERS WRITE SPOKEN LANGUAGE? Wallace L. Chafe University of California, Berkeley Recently there has developed a great deal of interest in the differences between written and spoken language. I joined this trend a little more than a year ago, and have been exploring not only what the specific differences are, but also the reasons why they might exist. The approach I have taken has been to look for differences between the situations and processes involved in speaking on the one hand and writing on the other, and to speculate on how those differences might be responsible for the observable differences in the output, ~at happens when we write and what happens when we speak are different things, both psychologically and socially, and I have been trying to see how what we do in the two situations leads to the specific things that we find in writing and speaking. I occasionally interact with the UNIX computer system at Berkeley, for various purposes. In the context of my concern about differences between writing and speaking, I have begun to wonder whether the kind of corm~unication we are used to receiving from computers is more like writing or speaking. You may think that computers obviously write to us. They send us messages that we can read off of a cathode ray tube, or that get printed out for us on a piece of paper. In that respect what computers produce is written language. But it comes at us in a way that is very different from the way written language usually does. Usually we are faced with a printed page on which the writing is all there, and has been there for a long time. The temporal process by which the writing was put there has absolutely no relevance to us as we peruse the page at our leisure. The timing of our reading is in no way controlled by the timing by which the words were entered on the page. My computer terminal, on the other hand, is steadily chugging away, producing language before my eyes at the rate of 30 characters a second. Under some circumstances I could wait until it had produced a whole page before I began to read. But I don't usually do that. I eagerly follow the steady flow of letters as they appear, Just as I would eagerly listen to the spoken sounds of someone who was telling me something I wanted to know. This processing in real time seems in that re- spect more like spoken language, although what is being produced is written. Furthermore, the computer system and I often, indeed characteristically, engage in quick exchanges, much like conversations, which is not what I am accustomed to doing with written language. So I want to suggest that when it is looked at from the point of view of the dichotomy between written and spoken language, the computer language we normally deal with is neither fish nor foul. It is produced in written form, but on the other hand it is produced in real time, and we are able to respond and interact as we are not able to do with a printed page. Recent work seems to have shown that there are a number of features which are characteristic of spoken language, and a number of other features characteristic of written. It is not that spoken language never contains any of the features of writtenness, or that written language never contains any of the features of spokenness. It is only that certain features tend to be associated with one or the other medium, and that the features become more polarized as one approaches the extremes of colloquial- ness on the one hand, or of literariness on the other. 
In between one finds various mixtures of literary talk and conversational writing. In looking for reasons why these distinguishing features exist, I have found it useful to attribute some of them to the temporal differences between writing and speaking, and some of them to the interactional differences. Temporally, writing as an activity is much slower than speaking. Speaking seems to be produced one "idea unit" at a time, each idea unit having a mean length of about 2 seconds, or 6 words. Every so often a sequence of idea units ends in a falling pitch intonation of the sort we identify with the ending of a sentence. Pauses usually occur between idea units, and longer pauses be- tween sentences. The idea units within a spoken sen- tence tend to be strung together in a coordinate fashion, typically with the word "and" appearing as a link~ There is little of the fancy syntax we find in written language, by which some idea units are subordinated to and embedded within others. It has been hypothesized that speakers' attention capacities are not great enough to allow them to engage in much elaborate syntax. The flow of idea units is enough to keep them occupied. Writing, on the other hand, is peculiar in that the pro- eess of writing itself occupies an inordinate amount of time, even though, once we get past the first grade, it doesn't require a great deal of attention. Thus, writers have a lot of extra time and attention available to them, and apparently they often use it to construct elaborate sentences. As a result, whereas the sentences of spoken language have a distinctly fragmented quality, those of written language tend to be more integrated, with much more attention paid to subordinating idea units within others in complex ways. This integration vs. fragmentation dimension seems to be at the root of a number of the features which distinguish writing from speaking. The other dimension I have been interested in seems to result from the different relation writers and speakers have to their respective audiences. Whereas speakers can interact directly with their listeners, obtaining ongoing confirmation, contradiction, and feedback, wri- ters cannot normally do so, but are constrained to pay more attention to producing something that will stand on its own feet when it is read by someone later on in a different place. We can speak of the greater involve- ment of speakers, as contrasted with the greater detach- ment of writers. Many of the specific features distin- guishing speaking and writing can be lined up on this involvement vs. detachment dimension. How can a computer produce language that is maximally congenial to us humans, given the familiarity we already have with the characteristics of spoken and written language? ~hat kind of human language should a computer simulate, in order that we can process it most easily? And to what extent is a computer able to produce such a simulation? Let's play with the assumption that we human users would feel most at home with a computer terminal with which we could converse in something resembling human conversa- tion, as close as this can be approximated by a machine which (I) can't yet make satisfactory sounds, but has to write what it says; and (2) doesn't know how to experi- ence involvement with a human being. Let's consider what this machine would need to do to make us feel that we were interacting in something like the way we inter- act when we use spoken language. Timing is one of the important factors. 
Instead of steadily producing letters at the rate of 30 a second, this machine might try producing language as spoken language is produced in real time. That would mean doing it at half the speed, for one thing: 15 characters a second would be about normal for the way we assimilate spoken language, and perhaps the rate at which we naturally take in information. But we would not want it spitting out one letter at a time at a steady rate, as it does now. That has little to do with the way we take in language, either spoken or written, under normal circumstances. Perhaps it should give us one word at a time, but I think it more likely that we would feel most comfortable with syllables: syllables timed to simulate the timing of syllables in normal English speech. Roughly speaking, stressed syllables would be longer and unstressed syllables shorter. A careful study of the timing of natural speech could introduce more sophistication here. At the end of each idea unit -- on the average after every 6 words -- there would be at least a brief pause, signaling the boundary of the idea unit and allowing time for processing. At the end of a sentence -- on the average after every 3 idea units -- the pause would be longer, and paragraph boundaries would be signaled by longer pauses. Idea units would be relatively fragmented. Many of them would be connected by "and," and there would be little of the elaborate syntax one tends to find in written language.

As for involvement, the computer would need to learn that humans are imperfect recipients of information, and that redundancy and requests for confirmation are among the important devices to be used frequently in communicating with them. Frequent direct reference to the addressee is another feature of involvement that the computer could easily learn to use. My terminal recently told me the following, at 30 steady characters per second:

    The "netlpr" command, when executed between computer center machines, now sets the ownership of net queue files correctly so that "netrm" will remove them and they are listed by the "netq" command.

While this is reasonably good written language, and comprehensible as such, I am asking whether meaningful linguistic interaction in real time might not better proceed something as follows, where you can imagine syllables being timed as they are timed in spoken English, brief pauses at the ends of lines and longer pauses where I have double-spaced (T is the terminal and U the user):

    T: Want to know about the "netlpr" command,
       where you type in "netlpr"?

    U: Sure.

    T: You can just use it between computer center machines, OK?
       Only if you're up here.

    U: Yeah, I know.

    T: OK. It'll show you who owns net queue files,
       if you want to know that.
       You can use "netrm" to get rid of them,
       and you can get them listed with "netq".
       That clear?

    U: Yeah.

One problem with this is that the user has to type at his or her normal typing rate, which will inevitably be much slower than speaking. But even so, the fragmentation and involvement which make this machine's output more like spoken language might significantly increase the user's comfort and comprehension. To know whether that is really true calls for further detailed research on the features which distinguish spoken from written language, and tests of whether the introduction of such features into computer language indeed makes a difference. Such research ought in any case to be rewarding beyond the bounds of this particular application.
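The timing proposal sketched above is easy to simulate. The fragment below is a hedged illustration, not a description of any existing terminal driver: it emits text at roughly 15 characters per second and pauses at idea-unit and sentence boundaries. For simplicity it paces character by character, whereas the proposal calls for syllable-sized chunks with stress-dependent timing, and the pause durations used here are assumed values.

    import sys, time

    CHARS_PER_SECOND = 15   # roughly the rate of speech (vs. 30 for the terminal)
    IDEA_UNIT_PAUSE = 0.4   # brief pause at an idea-unit boundary (assumed value)
    SENTENCE_PAUSE = 1.0    # longer pause at a sentence boundary (assumed value)

    def emit(text):
        """Write text at a spoken-language rate, pausing after idea units
        (marked here with commas) and after sentences."""
        for ch in text:
            sys.stdout.write(ch)
            sys.stdout.flush()
            time.sleep(1.0 / CHARS_PER_SECOND)
            if ch == ",":
                time.sleep(IDEA_UNIT_PAUSE)
            elif ch in ".?!":
                time.sleep(SENTENCE_PAUSE)

    emit("Want to know about the netlpr command? "
         "You can just use it between computer center machines, OK?\n")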
Signalling the Interpretation of Indirect Speech Acts Philip R. Cohen Center for the Study of Reading University of Illinois, & Bolt, Beranek and Newman, Inc. Cambridge, Mass. This panel was asked to consider how various "problem contexts" (e.g., cooperatively assembling a pump, or Socratically teaching law) influence the use of language. As a starting point, I shall regard the problem context as establishing a set of expectations and assumptions about the shared beliefs, goals, and social roles of those participants. Just how people negotiate that they are in a given problem context and what they know about those contexts are interesting questions, but not ones I shall address here. Rather, I shall outline a theory of language use that is sensitive ¢o those beliefs, goals, and expectations. The theory is being applied to characterize actual dialogues occurring in the Familiar task-orlented slt- uation ~O.1, in which an expert instructs a novice Co do something, in our case to assemble a toy water pump. In such circumstances, the dialogue participants can be viewed as performing speech acts planned, prlmarl]y, to achieve goals set by the task. Other contexts undoubted- ly emphasize the instrumental uses of language (e.g.,~) but those problem contexts will not be considered here. The application of a model of speech act use to actual dialogue stresses the need For sources of evidence to substantiate predictions. The purpose of this paper is to point to one such source -- speaker-reference ~9]- The natural candidate for a theory of instrumental use of speech acts is an account of rational action ~ -- what is typically termed "planning". However, contrary to the assumption of most planning systems, we are in- terested in the planning of (usually) cooperative agents who attempt to recognize and facilitate the plans of their partners ~,h,5,]6,20]. Such helpful behavior is independent of the use of language, but is the source of much conversational coherence. A plan based theory of speech acts specifies that plan recognition is the basis for inferring the illocuCionary force(s) of an utterance. The goal of such a theory is to formalize the set of possible plans underlying the use of particular speech acts Co achieve a given set of goals. In light of the independent motivation for plan generation and recognition, such a Formalism should treat commun- icative and non-communlcatlve acts uniformly, by stating the communicative nature of an illocutlonary act as part of chat act's definition. A reasoning system, be it human or computer, would then not have to employ special knowledge about communicative acts; it would simply at- tempt Co achieve or recognize goals. The components of speech act p]annlng and recognition systems developed so far include: a Formal language for describing mental states and states of the physical and social worlds, operators for describing changes of state, associations of utterance features (e.g., mood) with cer- taln operators, and a set of plan construction and re- cognition inferences. Illocutionar'y acts are defined as operators that primarily affect the mental states of speakers and hearers L3,8,13,I7J. To be more specific, in the most fully developed at- tempt at such a theory, Perraulc and Allen ~ show how plan recognition can "reason out" a class of indirect speech acts. Briefly, they define "surface =' speech act operators, which depend on an utterance's mood, and op- erators For illocutionary acts such as requesting. 
Plan recognition involves inferences of the form "the agent intended to perform action X because he intended to ach- ieve its effect in order to enable him to do some other action Y". Such inferences are applied to surface speech act operators (characterizing, for instance, "Is the salt near you?") to yield iilocutionary operators such as * For this brief paper, I shall have Co curtail discussion of the planning/plan recognition literature. requests to pass the salt. The remainder of this paper attempts to illustrate the kinds of predictlons made by the theory,.and the use of anaphora to support one such prediction." Consider the following dialogue fragment (transmitted over teletype) in the water pump context described earlier: Expert: l). '~e need a clear bent tube For the bottom hole." Novice: 2). "OK, i t ' s done." Expert: 3). "OK, now, start pumping" The example is constructed to illustrate my point, but it does not "feel" a r t i f i c i a l . Experiments we are conducting show analogous phenomena in telephone and teletype modes. The theory predicts two inference paths For utterance I -- "helpful" and "intended". In the Former case, the novice observes the surface-lnform speech act indicated by a declarative utterance, and interprets it simply as an inform act that communicates a joint need. Then, be- cause the novice is helpful, she continues to recognize the plan behind the expert's utterance and attempts to further it by performing the action of putting the spout over the hole. The novice, therefore, is acting on her own, evaluating the reasonableness of the plan inferred for the expert using private beliefs about the expert's beliefs and intentions. Alternatively, she could infer that the expert intended for it to be mutually believed that he intended her to put on the tube. Thus, the novice would be acting because she thinks the expert intended for her ¢o do so. Later, she could summarize the expert's utterance and intentions as a request ~7]. Perrault and Alien supply heuristics that would predlct-~" the preferred inference route to be the "intended" path since it is mutually believed that putting the tube on is the relev- ant act, and his intending that she perform pump-related acts is an expected goal in this problem context. To use Perrault and Alien's model For analyzing conversation, such predictions must be validated against evidence of the novice's interpretation of the expert's intent. Signalling Interpretation of Intent For this problem context and communication modality, the novice and expert shared knowledge that the exoert will attempt to get the novice to achieve each subgoal of the physical task, and the novice must indicate suc- cessful completion of those subtasks. However, not all communicative acts achieving the goal of indicating suc- cessful completion provide evidence of the novice's in- terpretation of intent. For instance, the novlce might say "I've put the bent tube on" simply to keep the expert informed of the situatlon. Such an informative act could arise if the problem context and prior conversation dld not make the salience of putting the tube on mutually known. To supply evidence of the novice's interpretation of intent, her response must pragmatically presuppose that interpretation. In our example, the novice has used " i t " to refer to the action she has performed. It has been proposed that definite and pronomlnal/pro-verbal reference requires mutual belief chat the object in question_ --is in Focus O0,,s] and satisfies the "descript,on'l t6,l . 
Assuming that the inferring of mutually believed goals places them in focus, the shared knowledge needed to refer using "it" is supplied by only one of the above interpretations -- the one summarizable as an indirect request. Robinson [15] has identified this problem of reference to actions and has implemented a system to resolve them. In this paper, I stress the importance of that work to theories of speech act use.

Other signals of the interpretation of intent need to be identified to explain how the expert's "OK, now start pumping" communicates that he thinks she has interpreted him correctly -- mutual signalling of intent and its interpretation is central to conversational success. A formal theory that could capture the belief, intention, and focus conditions for speaker-reference is thus clearly needed to validate models of speech act use. A plan-based theory might accommodate such an analysis via a decomposition of currently primitive surface speech acts to include reference acts [2,18]. By planning reference acts to facilitate the hearers' plans, a system could perhaps also answer questions cooperatively without resorting to Gricean maxims or "room theories" [19]. I have given a bare bones outline of how a description of speaker-reference can serve as a source of empirical support to a theory of speech acts. However, much more research must take place to flesh out the theoretical connections. I have also deliberately avoided problems of computation here, but hope the panel will discuss these issues, especially the utility of computational models to ethnographers of conversation.

Acknowledgements: I would like to thank Chip Bruce, Scott Fertig, and Sharon Oviatt for comments on an earlier draft.

References:
1. Allen, J. A plan-based approach to speech act recognition (Tech. Rep. No. 131/79). Toronto: University of Toronto, Department of Computer Science, January 1979.
2. Appelt, D. Problem-solving applied to language generation. (This volume).
3. Bruce, B. Belief systems and language understanding (BBN Report No. 2973). Cambridge, Mass.: Bolt, Beranek and Newman, January 1975.
4. Bruce, B., & Newman, D. Interacting plans. Cognitive Science, 1978, 2, 195-233.
5. Carbonell, J. G. Jr. POLITICS: Automated ideological reasoning. Cognitive Science, 1978, 2, 27-51.
6. Clark, H. H., & Marshall, C. Definite reference and mutual knowledge. In A. K. Joshi, I. A. Sag, & B. L. Webber (Eds.), Proceedings of the Workshop on Computational Aspects of Linguistic Structure and Discourse Setting. New York: Cambridge University Press, in press.
7. Cohen, P. R., & Levesque, H. L. Speech acts and the recognition of shared plans. In Proceedings: Annual Meeting of the Canadian Society for the Computational Study of Intelligence, Victoria, B.C., 1980.
8. Cohen, P. R., & Perrault, C. R. Elements of a plan-based theory of speech acts. Cognitive Science, 1979, 3, 177-212.
9. Donnellan, K. Speaker references, descriptions, and anaphora. In P. Cole (Ed.), Syntax and semantics (Vol. 9): Pragmatics. New York: Academic Press, 1978.
10. Grosz, B. The representation and use of focus in dialogue understanding (Technical Note 151). Menlo Park, Calif.: Stanford Research Institute, Artificial Intelligence Center, July 1977.
11. Hobbs, J. R., & Evans, D. E. Conversation as planned behavior (Technical Note 203). Menlo Park, Calif.: Stanford Research Institute, Artificial Intelligence Center, 1979.
12. Morgan, J. L. Toward a rational model of discourse comprehension. In D. Waltz (Ed.), Proceedings: Theoretical Issues in Natural Language Understanding. Urbana: University of Illinois, Coordinated Science Laboratory, 1978.
13. Perrault, C. R., & Allen, J. F. A plan-based analysis of indirect speech acts. In submission.
14. Perrault, C. R., & Cohen, P. R. Inaccurate reference. In A. K. Joshi, I. A. Sag, & B. L. Webber (Eds.), Proceedings of the Workshop on Computational Aspects of Linguistic Structure and Discourse Setting. New York: Cambridge University Press, in press.
15. Robinson, A. E. The interpretation of verb phrases in dialogs (Technical Note 206). Menlo Park, Calif.: Stanford Research Institute, Artificial Intelligence Center, 1980.
16. Schank, R., & Abelson, R. Scripts, plans, goals, and understanding. Hillsdale, N.J.: Erlbaum, 1977.
17. Schmidt, C. F. Understanding human action. In Proceedings of the Conference on Theoretical Issues in Natural Language Processing. Cambridge, Mass., 1975.
18. Searle, J. R. Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press, 1969.
19. Shannon, B. Where-questions. In Proceedings of the Seventeenth Annual Meeting of the ACL, San Diego, 1979. Pp. 73-75.
20. Wilensky, R. Understanding goal-based stories (Research Rep. No. 140). New Haven, Conn.: Yale University, Department of Computer Science, September 1978.
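The following fragment is a deliberately simplified illustration of the effect-to-enablement chaining described in the body of the paper ("the agent intended to perform action X because he intended to achieve its effect in order to enable him to do some other action Y"); the operator tables and names are invented for exposition and do not reproduce Perrault and Allen's formalism.

    # Hypothetical operator tables: EFFECTS maps an act to its effect;
    # ENABLES maps an effect to a further act whose precondition it satisfies.
    EFFECTS = {"inform(need(tube))": "hearer_knows(need(tube))"}
    ENABLES = {"hearer_knows(need(tube))": "put_on(tube)"}

    def recognize_plan(observed_act):
        """Chain from an observed (surface) act through its effect to the
        act it plausibly enables -- one step of plan recognition."""
        chain = [observed_act]
        effect = EFFECTS.get(observed_act)
        while effect is not None:
            enabled = ENABLES.get(effect)
            if enabled is None:
                break
            chain.append(enabled)
            effect = EFFECTS.get(enabled)
        return chain

    # "We need a clear bent tube for the bottom hole" observed as an inform act:
    print(recognize_plan("inform(need(tube))"))
    # -> ['inform(need(tube))', 'put_on(tube)']  (summarizable as an indirect request)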
PARASESSION ON TOPICS IN INTERACTIVE DISCOURSE
INFLUENCE OF THE PROBLEM CONTEXT*

Aravind K. Joshi
Department of Computer and Information Science
Room 268 Moore School
University of Pennsylvania
Philadelphia, PA 19104

My comments are organized within the framework suggested by the Panel Chair, Barbara Grosz, which I find very appropriate. All of my comments pertain to the various issues raised by her; however, wherever possible I will discuss these issues more in the context of the "information seeking" interaction and the data base domain. The primary question is how the purpose of the interaction or "the problem context" affects what is said and how it is interpreted. The two separate aspects of this question that must be considered are the function and the domain of the discourse.

1. Types of interactions (functions):

1.1 We are concerned here about a computer system participating in a restricted kind of dialogue with a person. A partial classification of some existing interactive systems, as suggested by Grosz, is as follows. I have renamed the third type in a somewhat more general fashion.

              Participant P1            Participant P2
              (Computer system)         (Person)

    Type A    Expert                    Apprentice
    Type B    Tutor                     Student
    Type C    Information provider      Information seeker
              (some sort of large and
              complex data base or
              knowledge base)

Each type subsumes a variety of subtypes. For example, in type C, subtypes arise depending on the kind of information available and the type of the user. (More on this later when we discuss the interaction of constraints on function and domain.)

1.2 It should be noted also that these different types are not really completely independent; information seeking (Type C) is often done by the apprentice (Type A) and student (Type B), and some of the explaining done by tutors (Type B) is also involved in the Type C interaction, for example, when P1 is trying to explain to P2 the structure of the data base.

1.3 The roles of the two participants are also not fixed completely. In the type C interaction, sometimes P2 partly plays the role of an expert (or at least appears to do so), believing that his/her expert advice may help the system answer the question more 'easily' or 'efficiently'. For example (note 1), in a pollution data base P2 may ask: Has company A dumped any wastes last week? and follow up with advice: arsenic first. In an expert-apprentice interaction, the expert's advice is assumed to be useful by the apprentice. In the data base domain it is not clear whether the 'expert' advice provided by the user is always useful. It does however provide information about the user which can be helpful in presenting the response in an appropriate manner; for example, if arsenic indeed was one of the wastes dumped, then, perhaps, it should be listed first.

1.4 The interactions of the type we are concerned about here are all meant to aid a person in some fashion. Hence, a general characterization of all these types is a helping function. However, it is useful to distinguish the types depending on whether an information seeking or information sharing interaction is involved. The Type C interaction is primarily information seeking, although some sharing interaction is involved also. This is so because information sharing facilitates information seeking, for example (note 2), when P1 explains the structure of the data base to P2, so that P2 can engage in information seeking more effectively. Type A and B are more information sharing than information seeking interactions.

1.
S Another useful distinction is that type C interac- tion has more of a service function than types A and B which have more of ~ining function. Training in- volves more of information sharing, while service in- volves more of providing infornmtion requested by the user. 2. Information about the user: 2 .i By user we usually mean user type and nor a spe- cific user. User inforr~ation is essential in deter- minJ_ng expectations on the par~ of the user and the needs of the user. Within each type of interaction there can be many user types and the same infoz~nation may be needed by these different types of users for different reasons. For exan~le, in t-/pe C interaction, pr~_r~gist-ration iIlfor~ation about a course scheduled fox" the foz~chcoming t~ may be of interest to an in- st-cuctor because he/she wants to find out how popular his/her course is. On the other hand, the same data is useful to the regisrrer for deciding on a suitable r~x)m assigr~nent. The data base system will often pro- vide different views of the same data to different user types. 2.2 In general, knowledge about the user is necessar~, at leas~ in the type C interaction in order to decide (i) how to present the requested information, (ii) what additional information, beyond that ex- plicitly requested, might be usefully pr~esented (this aspect is not independent of (i) above), (iii) what kind of responses the system should provide when the user's misconceptions about the domain * This work was par~ially supported by the NSF grant MCS79-08401. I ,~Bnt to thank Eric Mays, Kathy McKeown, and Bonnie Webber for their valuable conments on an earlier draft of this paper. 31 (i.e., both The ~crure and content of the data base, in short, what can be talked about) are detected. (More about this in Section 5). 3. Conversational style: 3.1 In the type C interaction, The user utterances (more precisely, user's Typewritten input) are a series of questions separated by the system's responses. By and large, the system responds to the current question. However, knowledge about the preceding interaction i.e., discourse context (besides, of course, the information about the user) is essential for tracking the "topic" and thereby deter~nining the "focus" in the current question. This is especially importa~nz for derer~Iining how to present the answer as well as how to provide appropriate responses, when user's misconceptions are detected. Type A and B interactions perhaps involve a much more structured dialogue where the sZru:rure has its scope over much wider stretches of discourse as co~d to the ai@]ogues in the Type C interactions, which appear to be less strucru~. 3.2 The type of interaction involved certainly affects the conversational style; however, li%-tle is known about conversational style in interactive man/machine communication. Folklore has it that users adapt very rapidly to the system's capabilities. It might be useful to compare this situation to that of a person talking to a foreigner. It has been claimed that natives talking to foreigners deliberately change their conversational style = (for example, slowing down their speech, using single words, repeating certain words, end even occasionally adopting some of the foreigner's style, etc. ). It may be that users rr~-at the computer system as an expert with respect to the knowledge of the domain but lacking in some communicative skills, much like a native talking to a foreigner. 
Perhaps it is misleading to Treat man/machine interact- ive discourse as just (hopefully better and better) approximations to h ~ conversational interactions. No matter how sophisticated these systems become, they will at the ve.~y least lack the face to face interac- tion. It may be That there are certain aspects of these interactions that are peculiar to This modaliry and will always rema/m so. We seem to know so little about these aspects. These remarks, perhaps, belong .more to the scope of the panel on social context than to the scope of this panel on the problem context. 4. Relation of expectations and functions: ~.i In the information seeking interaction, us,~11y, the imperative force of the user's questions is to have the system bring it about that The use~- comes to know whatever he/she is asking foP. Thus in asking the question Who is r~istered in CIS 591? the user is in- terested in knowing who is registered in CIS 591. The user is normally not interested in how the syst~n got the answer. Ln the Type A and B in--actions the imperative force of a question from the user (apprentice or student) can either be the same as before or it can have the imperative force of making the system show the user how the answer was obtained by the system. 4.2 ~.n the data base domain, although, primarily the user is interested in what the answer is and no~ in how it wa obtained, this need not be the case always. Somet..~s the user would like to have the answer accom- panied by how it was obtained, the 'access paths' through the--~ta base, for example. 4.3 Even when only the what answer is expected, often the presentation of the answer has to be accompanied by some 'supportive' information to make the response use- ful to the user 4 . For exa~le, along with the student name, his/her department or whether he/she is a Eradua1~ or under~duate student would have to be stated. If telephone numbers of students are requested then along with the telephone numbers, the corre_sponding names of students will have to be provided. S. Shared knowledge and beliefs: 5.! The shared beliefs and goals are embodied in the system's knowledge of the user (i.e., a user model). It is important to assume that not only the system has the knowledge of the user but that the user assumes that the system has this knowledge. This is very necessary to generate appropriate cooperative responses and their being correctly understood as such by the user. In or~ina_-y conversations this type of knowiec~e could lead to an infinite regmess and hence, the need to require the shared knowledge to be ',u/rual knowle~e'. However, in the current da~a base systems (and even in the expert-epvrentice and tutor-student interactions) I am not aware of situations that truly lead to some of the well krK~an prDblems about 'mutual knowledge' 5.2 As regards the knowledge of the data base itself (both structure and content), the system, of course, has this knowledge. However, it is not necessary that the user has this knowledge. In fact very often the user's view of The data base will be different from the system's view. For large and complex data bases this is more likely to be the case. The system has to be able to discern the user's view and present the answers, keeping in mind the user's view, ~Tuile insuring that his/her view is consistent with the system's view. S. 3 When the system recognizes some disparity between its view and the user's view, it has to provide appro- priate corrective responses. 
Users' misconceptions could be either extensional (i.e., about the content of the data base) or intensional (i.e., about the structure of the data base) ~ . Note that the ex- tensional/inTensional distinction is from the point of view of the system. The user may not have made the distinction in that way. Some simple examples of corrective r~_sponses are as follows. A user's ques- tion: Who took CIS 591 in Fall 19797 presumes that CIS 591 was offered in Fall 1979. If ~his ~as not the case then a response None by the system would be misleading; rather the response should be that CIS 591 was not offered in Fall 1979. This is an instance of an extensional failure. An example of intensional failure is as follows. A user's question: How man 7 under~aduates taught courses in Fall 19797 pr~su~es (among other things) that undergraduates do teach courses. This is an intensional presumption. If it is false then once again an answer None would be mis- leading; rather the response should--~ that under ~ graduates are nor perm ~Ted to teach coUrSes, faculty members teach courses, and graduate students teach courses. The exact nature of this response depends on the s~:rucrure of the data base. 5. Co~lexir~ of The domain: 6 .i Iu each type of interaction the complexity of the interaction depends both on the nature of the interac- tion (i.e., function) as well as the domain. In many ways the complexity of the interaction ultimately seems to depend on the cc~nplexity of the domain. If the task itself is not very complex (for example, boiling water for tea instead of assembling a pump) the task oriented expert-apprentice interaction cannot be very complex. On the other hand data base interaction which appear to be simple at first sight become in- creasingly complex when we begin to consider (i) dyna- mic data bases (i.e., they can be updated) and the associated problems of monitoring events (ii) data bases with n~itiple views of data, (iii) questions whose answers z~equiz~ the system to make fairly deep inferences and involve computations on the data base i.e., the answers are not obtained by a straigbtfor%mz~ retrieval process, etc. NOTES: i. As in the PLIDIS system described by Genevieve 2. As in Kathy McKeown's current work on gene_~ating descriptions and explanations about data base st-~ucrure. 3. For exa~le, by R. Rammurri in hem talk on 'Strategies involved in talking to a foreigner' at the Penn Linguistics Forth 1980 (published in Penn Review of Linguistics, Vol. 4, 1980). ~. Many of my comments about supportive information and corrective responses when misconceptions about the ccntent and the stTucrure of the data base are detected are based on the work of Jerry Kaplan and Eric Mays.
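The course-registration examples of section 5.3 suggest the kind of check a cooperative answering component might make before returning a bare extensional answer. The sketch below is purely illustrative: the table names and contents are assumptions, not a description of any implemented system.

    # Toy "data base": courses offered per term, and registrations.
    OFFERED = {("CIS 591", "Fall 1979"): False, ("CIS 591", "Spring 1980"): True}
    TOOK = {("CIS 591", "Spring 1980"): ["Jones", "Smith"]}

    def who_took(course, term):
        """Answer 'Who took <course> in <term>?', first checking the
        extensional presumption that the course was offered that term."""
        if not OFFERED.get((course, term), False):
            # A bare "None" would be misleading; give a corrective response.
            return f"{course} was not offered in {term}."
        return TOOK.get((course, term), []) or "No one took it."

    print(who_took("CIS 591", "Fall 1979"))    # corrective response
    print(who_took("CIS 591", "Spring 1980"))  # -> ['Jones', 'Smith']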
A Practical Comparison of Parsing Strategies Jonathan Slocum Siemens Corporation INTRODUCTION Although the literature dealing with formal and natural languages abounds with theoretical arguments of worst- case performance by various parsing strategies [e.g., Griffiths & Petrick, 1965; Aho & Ullman, 1972; Graham, Harrison & Ruzzo, Ig80], there is little discussion of comparative performance based on actual practice in understanding natural language. Yet important practical considerations do arise when writing programs to under- stand one aspect or another of natural language utteran- ces. Where, for example, a theorist will characterize a parsing strategy according to its space and/or time requirements in attempting to analyze the worst possible input acc3rding to ~n arbitrary grammar strictly limited in expressive power, the researcher studying Natural Language Processing can be justified in concerning himself more with issues of practical performance in parsing sentences encountered in language as humans Actually use it using a grammar expressed in a form corve~ie: to the human linguist who is writing it. Moreover, ~ry occasional poor performance may be quite acceptabl:, particularly if real-time considerations are not invo~ed, e.g., if a human querant is not waiting for the answer to his question), provided the overall average performance is superior. One example of such a situation is off-line Machine Translation. This paper has two purposes. One is to report an eval- uation of the performance of several parsing strategies in a real-world setting, pointing out practical problems in making the attempt, indicating which of the strate- gies is superior to the others in which situations, and most of all determining the reasons why the best strate- gy outclasses its competition in order to stimulate and direct the design of improvements. The other, more important purpose is to assist in establishing such evaluation as a meaningful and valuable enterprise that contributes to the evolution of Natural Language PrcJessing from an art form into an empirical science. T~t is, our concern for parsing efficiency transcends the issue of mere practicality. At slow-to-average parsing rates, the cost of verifying linguistic theories on a large, general sample of natural language can still be prohibitive. The author's experience in MT has demonstrated the enormous impetus to linguistic theory formulation and refinement that a suitably fast parser will impart: when a linguist can formalize and encode a theory, then within an hour test it on a few thousand words of natural text, he will be able to reject inadequate ideas at a fairly high rate. This argument may even be applied to the production of the semantic theory we all hope for: it is not likely that its early formulations will be adequate, and unless they can be explored inexpensively on significant language samples they may hardly be explored at all, perhaps to the extent that the theory's qualities remain undiscovered. The search for an optimal natural language parsing technique, then, can be seen as the search for an instrument to assist in extending the theoretical frontiers of the science of Natural Language Processing. Following an outline below of some of the historical circumstances that led the author to design and conduct the parsing experiments, we will detail our experimental setting and approach, present the results, discuss the implications of those results, and conclude with some remarks on what has been l~rned. 
The SRI Connection At SRI International the~thor was responsible for the development of the English front-end for the LADDER system [Hendrix etal., 1978]. LADDER was developed as a prototype system for understanding questions posed in English about a naval domain; it translated each English question into one or more relational database queries, prosecuted the queries on a remote computer, and responded with the requested information in a readable format tailored to the characteristics of the answer. The basis for the development of the NLP component of the LADDER system was the LIFER parser, which interpreted sentences according to a 'semantic grammar' [Burton, 1976] whose rules were carefully ordered to produce the most plausible interpretation first. After more than two years of intensive development, the human costs of extending the coverage began to mount significantly. The semantic grammar interpreted by LIFER had become large and unwieldy. Any change, however small, had the potential to produce "ripple effects" which eroded the integrity of the system. A more linguistically motivated grammar was required. The question arose, "Is LIFER as suited to more traditional grammars as it is to semantic grammars?" At the time, there were available at SRI three production-quality parsers: LIFER; DIAMOND, an implementation of the Cocke- Kasami~nger parsing algorithm programmed by William Paxton of SRI; and CKY, an implementation of the identical algorithm programmed initially by Prof. Daniel Chester at the University of Texas. In this environment, experiments comparing various aspects of performance were inevitable. The LRC Connection In 1979 the author began research in Machine Translation at the Linguistics Research Center of the University of Texas. The LRC environment stimulated the design of a new strategy variation, though in retrospect it is obviously applicable to any parser supporting a facility for testing right-hand-side rule constituents. It also stimulated the production of another parser. (These will be defined and discussed later.) To test the effects of various strategies on the two LRC parsers, an experiment was designed to determine whether they interact with the different parsers and/or each other, whether any gains are offset by introduced overhead, and whether the source and precise effects of any overhead could be identified and explained. THE SRI EXPERIMENTS In this section we report the experiments conducted at SRI. First, the parsers and their strategy variations are described and intuitively compared; second, the grammars are described in terms of their purpose and their coverage; third, the sentences employed in the comparisons are discussed with regard to their source and presumed generality; next, the methods of comparing performance are detailed; then the results of the major experiment are presented. Finally, three small follow- up experiments are reported as anecdotal evidence. The Parsers and Strategies One of the parsers employed in the SRI experiments was LIFER: a top-down, depth-first parser with automatic back-up [Hendrix, 1977]. LIFER employs special "look down" logic based on the current word in the sentence to eliminate obviously fruitless downward expansion when the current word cannot be accepted as the leftmose element in any expansion of the currently proposed syntactic category [Griffiths and Petrick, 1965] and a "well-formed substring table" [Woods, 1975] to eliminate redundant pursuit of paths after back-up. 
LIFER sup- ports a traditional style of rule writing where phrase- structure rules are augmented by (LISP) procedures which can reject the application of the rule when proposed by the parser, and which construct an interpretation of the phrase when the rule's application is acceptable. The special user-definable routine responsible for evaluating the S-level rule-body procedures was modified to collect certain statistics but reject an otherwise acceptable interpretation; this forced LIFER into its back-up mode where it sought out an alternate interpretation, which was recorded and rejected in the same fashion. In this way LIFER proceeded to derive all possible interpretations of each sentence according to the grammar. This rejection behavior was not entirely unusual, in that LIFER specifically provides for such an eventuality, and because the grammars themselves were already making use of this facility to reject faulty interpretations. By forcing LIFER to compute all interpretations in this natural manner, it could meaningfully be compared with the other parsers. The second parser employed,in the 5RI experiments was DIAMOND: an all-paths bottom-up parser [Paxton, lg77] developed at SRI as an outgrowth of the SRI Speech Understanding Project [Walker, 1978]. The basis of the implementation was the Cocke-Kasami-Younger algorithm [Aho and Ullman, 1972], augmented by an "oracle" [Pratt, 1975] to restrict the number of syntax rules considered. DIAMOND is used during the primarily syntactic, bottom-up phase of analysis; subsequent analysis phases work top-down through the parse tree, computing more detailed semantic information, but these do not involve DIAMOND per se. DIAMOND also supports a style of rules wherein the grammar is augmented by LISP procedures to either reject rule application, or compute an interpretation of the phrase. The third parser used in the SR~ experiments is dubbed CKY. It too is an i~lementation of the Cocke-Kasami- Younger algorithm. Shortly after the main experiment it WAS augmented by "top-down filtering," and some shrill- scale tests were conducted. Like Pratt's oracle, top- down filtering rejects the application of certain rules dlstovered'up by the bottom-up parser specifically, those that a top-aown parser would not discover. For example, assuming a grammar for English in a traditional style, and the sentence, "The old man ate fish," an ordinary bottom-up parser will propose three S phrases, one each for: "man ate fish," "old man ate fish," and "The old man ate fish." In isolation each is a possible sentence. But a top-down parser will normally propose only the last string as a sentence, since the left contexts "The old" and "The" prohibit the sentence reading for the remaining strings. Top-down filtering, then, is like running a top-down parser in parallel with a bottom-up parser. The bottom-up parser (being faster at discovering potential rules) proposes the rules, and the top-down parser (being more sensitive to context) passes judgement. Rejects are discarded immediately; those that pass muster are considered further, for example being submitted for feature checking and/or semantic interpretation. An intuitive prediction of practical performance is a somewhat difficult matter. ~FER, while not originally intended to produce all interpretations, does support a reasonably natural mechanism for forcing that style of analysis. 
A large amount of effort was invested in making LIFER more and more efficient as the LADDER linguistic component grew and began to consume more space and time. In CPU time its speed was increased by a factor of at least twenty with respect to its original, and rather efficient, implementation. One might therefore expect LIFER to compare favorably with the other parsers, particularly when interpreting the LADDER grammar written with LIFER, and only LIFER, in mind. DIAMOND, while implementing the very efficient Cocke-Kasami-Younger algorithm and being augmented with an oracle and special programming tricks (e.g., assembly code) intended to enhance its performance, is a rather massive program and might be considered suspect for that reason alone; on the other hand, its predecessor was developed for the purpose of speech understanding, where efficiency issues predominate, and this strongly argues for good performance expectations. Chester's implementation of the Cocke-Kasami-Younger algorithm represents the opposite extreme of startling simplicity. His central algorithm is expressed in a dozen lines of LISP code and requires little else in a basic implementation. Expectations here might be bi-modal: it should either perform well due to its concise nature, or poorly due to the lack of any efficiency aids. There is one further consideration of merit: that of inter-programmer variability. Both LIFER and Chester's parser were rewritten for increased efficiency by the author; DIAMOND was used without modification. Thus differences between DIAMOND and the others might be due to different programming styles -- indeed, between DIAMOND and CKY this represents the only difference aside from the oracle -- while differences between LIFER and CKY should reflect real performance distinctions because the same programmer (re)implemented them both.
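Chester's dozen-line LISP version is not reproduced in the paper; the following is a comparably small sketch of the same recognizer in Python, assuming a grammar of binary phrase rules plus a word lexicon (both hypothetical) rather than the actual LADDER or LRC grammars.

# Minimal Cocke-Kasami-Younger recognizer (a sketch; the originals were in LISP).
RULES = {                       # (right-hand-side pair) -> set of parent categories
    ("NP", "VP"): {"S"},
    ("DET", "NOM"): {"NP"},
    ("ADJ", "NOM"): {"NOM"},
    ("V", "NP"): {"VP"},
}
LEXICON = {"the": {"DET"}, "old": {"ADJ"}, "man": {"NOM", "NP"},
           "ate": {"V", "VP"}, "fish": {"NOM", "NP"}}

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):                    # split point
                for left in chart[i][k]:
                    for right in chart[k][j]:
                        chart[i][j] |= RULES.get((left, right), set())
    return "S" in chart[0][n]

print(cky_recognize("the old man ate fish".split()))     # -> True

With this toy grammar the chart also contains an S over "man ate fish" alone, which is exactly the kind of bottom-up overgeneration that top-down filtering, discussed above, is meant to suppress.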
Developed in a domain primarily composed of declarative and imperative sentences, its generality is suggested by the short time (a few weeks) required to extend its coverage to the wide range of questions encountered in the LADDER domain. In order to prime the various parsers with the different grammars, four programs were written to transform each grammar into the formalism expected by the two parsers for which it was not originally written. Specifically, the linguistic grammar had to be reformatted for input to LIFER and CKY; the semantic grammar, for input to CKY and DIAMOND. Once each of six systems was loaded with one parser and one grammar, the stage would be set for the experiment.

The Sentences

Since LADDER's semantic grammar had been written for sentences in a limited domain, and was not intended for general English, it was not possible to test that grammar on any corpus outside of its domain. Therefore, all sentences in the experiment were drawn from the LADDER benchmark: the broad collection of queries designed to verify the overall integrity of the LADDER system after extensions had been incorporated. These sentences, almost all of them questions, had been carefully selected to exercise most of LADDER's linguistic and database capabilities. Each of the six systems, then, was to be applied to the analysis of the same 249 benchmark sentences; these ranged in length from 2 to 23 words and averaged 7.82 words.

Methods of Comparison

Software instrumentation was used to measure the following: the CPU time; the number of phrases (instantiations of grammar rules) proposed by the parser; the number of these rejected by the rule-body procedures in the usual fashion; and the storage requirements (number of CONSes) of the analysis attempt. Each of these was recorded separately for sentences which were parsed vs. not parsed, and in the former case the number of interpretations was recorded as well. For the experiment, the database access code was short-circuited; thus only analysis, not question answering, was performed. The collected data was categorized by sentence length and treatment (parser and grammar) for analysis purposes.

Summary of the First Experiment

The first experiment involved the production of six different instrumented systems -- three parsers, each with two grammars -- and six test runs on the identical set of 249 sentences comprising the LADDER benchmark. The benchmark, established quite independently of the experiment, had as its raison d'etre the vigorous exercise of the LADDER system for the purpose of validating its integrity. The sentences contained therein were intended to constitute a representative sample of what might be expected in that domain. The experiment was conducted on a DEC KL-10; the systems were run separately, during low-load conditions, in order to minimize competition with other programs which could confound the results.

The Experimental Results

As it turned out, the large internal grammar storage overhead of the DIAMOND parser prohibited its being loaded with the LADDER semantic grammar: the available memory space was exhausted before the grammar could be fully defined. Although eventually a method was worked out whereby the semantic grammar could be loaded into DIAMOND, the resulting system was not tested due to its non-standard mode of operation, and because the working space left over for parsing was minimal. Therefore, the results and discussion will include data for only five combinations of parser and grammar.
Linguistic Grammar

In terms of the number of grammar rules found applicable by the parsers, DIAMOND instantiated the fewest (averaging 58 phrases per sentence); CKY, the most (121); and LIFER fell in between (107). LIFER makes copious use of CONS cells for internal processing purposes, and thus required the most storage (averaging 5294 CONSes per parsed sentence); DIAMOND required the least (1107); CKY fell in between (1628). But in terms of parse time, CKY was by far the best (averaging .386 seconds per sentence, exclusive of garbage collection); DIAMOND was next best (.976); and LIFER was worst (2.22). The total run time on the SRI-KL machine for the batch jobs interpreting the linguistic grammar (i.e., 'pure' parse time plus all overhead charges such as garbage collection, I/O, swapping and paging) was 12 minutes, 50 seconds for LIFER; 7 minutes, 13 seconds for DIAMOND; and 3 minutes, 15 seconds for CKY. The surprising indication here is that, even though CKY proposed more phrases than its competition, and used more storage than DIAMOND (though less than LIFER), it is the fastest parser. This is true whether considering successful or unsuccessful analysis attempts, using the linguistic grammar.

Semantic Grammar

We will now consider the corresponding data for CKY vs. LIFER using the semantic grammar (remembering that DIAMOND was not testable in this configuration). In terms of the number of phrases per parsed sentence, CKY averaged five times as many as LIFER (151 compared to 29). In terms of storage requirements CKY was better (averaging 1552 CONSes per sentence) but LIFER was only slightly worse (1498). But in CPU time, discounting garbage collection, CKY was again significantly faster than LIFER (averaging .286 seconds per sentence compared to .635). The total run time on the SRI-KL machine for the batch jobs interpreting the semantic grammar (i.e., "pure" parse time plus all overhead charges such as garbage collections, I/O, swapping and paging) was 5 minutes, 10 seconds for LIFER, and 2 minutes, 56 seconds for CKY. As with the linguistic grammar, CKY was significantly more efficient, whether considering successful or unsuccessful analysis attempts, while using the same grammar and analyzing the same sentences.

Three Follow-up Experiments

Three follow-up mini-experiments were conducted. The number of sentences was relatively small (a few dozen), and the results were not permanently recorded; thus they are reported here as anecdotal evidence. In the first, CKY and LIFER were compared in their natural modes of operation -- that is, with CKY finding all interpretations and LIFER finding the first -- using both grammars but just a few sentences. This was in response to the hypothesis that forcing LIFER to derive all interpretations is necessarily unfair. The results showed that CKY derived all interpretations of the sentences in slightly less time than LIFER found its first.

The discovery that DIAMOND appeared to be considerably less efficient than CKY was quite surprising. Implementing the same algorithm, but augmented with the phrase-limiting "oracle" and special assembly code for efficiency, one might expect DIAMOND to be faster than CKY. A second mini-experiment was conducted to test the most likely explanation -- that the overhead of DIAMOND's oracle might be greater than the savings it produced. The results clearly indicated that DIAMOND was yet slower without its oracle. The question then arose as to whether CKY might be yet faster if it too were similarly augmented.
A top-down filter modification was soon implemented and another small experiment was conducted. Paradoxically, the effect of filtering in this instance was to degrade performance. The overhead incurred was greater than the observed savings. This remained a puzzlement, and eventually helped to inspire the LRC experiment.

THE LRC EXPERIMENT

In this section we discuss the experiment conducted at the Linguistics Research Center. First, the parsers and their strategy variations are described and intuitively compared; second, the grammar is described in terms of its purpose and its coverage; third, the sentences employed in the comparisons are discussed with regard to their source and presumed generality; next, the methods of comparing performance are discussed; finally, the results are presented.

The Parsers and Strategies

One of the parsers employed in the LRC experiment was the CKY parser. The other parser employed in the LRC experiment is a left-corner parser, inspired again by Chester [1980] but programmed from scratch by the author. Unlike a Cocke-Kasami-Younger parser, which indexes a syntax rule by its right-most constituent, a left-corner parser indexes a syntax rule by the left-most constituent in its right-hand side. Once the parser has found an instance of the left-corner constituent, the remainder of the rule can be used to predict what may come next. When augmented by top-down filtering, this parser strongly resembles the Earley algorithm [Earley, 1970]. Since the small-scale experiments with top-down filtering at SRI had revealed conflicting results with respect to DIAMOND and CKY, and since the author's intuition continued to argue for increased efficiency in conjunction with this strategy despite the empirical evidence to the contrary, it was decided to compare the performance of both parsers with and without top-down filtering in a larger, more carefully controlled experiment.

Another strategy variation was engendered during the course of work at the LRC, based on the style of grammar rules written by the linguistic staff. This strategy, called "early constituent tests," is intended to take advantage of the extent of testing of individual constituents in the right-hand sides of the rules. Normally a parser searches its chart for contiguous phrases in the order specified by the right-hand side of a rule, then evaluates the rule-body procedures, which might reject the application due to a deficiency in one of the r-h-s constituent phrases; the early constituent test strategy calls for the parser to evaluate that portion of the rule-body procedure which tests the first constituent, as soon as it is discovered, to determine if it is acceptable; if so, the parser may proceed to search for the next constituent and similarly evaluate its test. In addition to the potential savings due to earlier rule rejection, another potential benefit arises from ATN-style sharing of individual constituent tests among such rules as pose the same requirements on the same initial sequence of r-h-s constituents. Thus one test could reject many apparently applicable rules at once, early in the search -- a large potential savings when compared with the alternative of discovering all constituents of each rule and separately applying the rule-body procedures, each of which might reject (the same constituent) for the same reason.
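The LRC parsers themselves were written in INTERLISP and are not shown here; the sketch below merely illustrates the two ideas just described -- indexing rules by their left-corner constituent, and optionally applying the first constituent's test as soon as that constituent is found -- using hypothetical rule and feature names.

from collections import defaultdict

class Rule:
    def __init__(self, lhs, rhs, constituent_tests=None):
        self.lhs = lhs                                  # e.g. "S"
        self.rhs = rhs                                  # e.g. ("NP", "VP")
        # one optional test per r-h-s constituent; each takes the found phrase
        self.constituent_tests = constituent_tests or [None] * len(rhs)

RULES = [
    Rule("S",  ("NP", "VP"),
         constituent_tests=[lambda np: np.get("number") != "mass", None]),
    Rule("NP", ("DET", "N")),
]

# Left-corner indexing: a rule is looked up by the *first* symbol of its r-h-s.
BY_LEFT_CORNER = defaultdict(list)
for r in RULES:
    BY_LEFT_CORNER[r.rhs[0]].append(r)

def propose(found_phrase, early_tests=True):
    """Given a newly found phrase, return active edges predicting what must follow."""
    edges = []
    for rule in BY_LEFT_CORNER[found_phrase["cat"]]:
        test = rule.constituent_tests[0]
        if early_tests and test is not None and not test(found_phrase):
            continue                        # reject the rule now, before further search
        edges.append({"rule": rule, "found": [found_phrase],
                      "needed": rule.rhs[1:]})          # the remainder predicts what comes next
    return edges

edges = propose({"cat": "NP", "number": "sing"})
print([e["rule"].lhs for e in edges], [e["needed"] for e in edges])   # ['S'] [('VP',)]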
On the other hand, the overhead of invoking the extra constituent tests and saving the results for eventual passage to the remainder of the rule-body procedure will to some extent offset the gains.

It is commonly considered that the Cocke-Kasami-Younger algorithm is generally superior to the left-corner algorithm in practical application; it is also thought that top-down filtering is beneficial. But in addition to intuitions about the performance of the parsers and strategy variations individually, there is the issue of possible interactions between them. Since a significant portion of the sentence analysis effort may be invested in evaluating the rule-body procedures, the author's intuition argued that the best combination could be the left-corner parser augmented by early constituent tests and top-down filtering -- which would seem to maximally reduce the number of such procedures evaluated.

The Grammar

The grammar employed during the LRC experiment was the German analysis grammar being developed at the LRC for use in Machine Translation [Lehmann et al., 1981]. Under development for about two years up to the time of the experiment, it had been tested on several moderately large technical corpora [Slocum, 1980] totalling about 23,000 words. Although by no means a complete grammar, it was able to account for between 60 and 90 percent of the sentences in the various texts, depending on the incidence of problems such as highly unusual constructs, outright errors, the degree of complexity in syntax and semantics, and on whether the tests were conducted with or without prior experience with the text. The broad range of linguistic phenomena represented by this material far outstrips that encountered in most NLP systems to date. Given the amount of text described by the LRC German grammar, it may be presumed to operate in a fashion reasonably representative of the general grammar for German yet to be written.

The Sentences

The sentences employed in the LRC experiment were extracted from three different technical texts on which the LRC MT system had been previously tested. Certain grammar and dictionary extensions based on those tests, however, had not yet been incorporated; thus it was known in advance that a significant portion of the sentences might not be analyzed. Three sentences of each length were randomly extracted from each text, where possible; not all sentence lengths were sufficiently represented to allow this in all cases. The 262 sentences ranged in length from 1 to 39 words, averaging 15.6 words each -- twice as long as the sentences employed in the SRI experiments.

Methods of Comparison

The LRC experiment was intended to reveal more of the underlying reasons for differential parser performance, including strategy interactions; thus it was necessary to instrument the systems much more thoroughly. Data was gathered for 35 variables measuring various aspects of behavior, including general information (13 variables), search space (8 variables), processing time (7 variables), and memory requirements (7 variables). One of the simpler methods measured the amount of time devoted to storage management (garbage collection in INTERLISP) in order to determine a "fair" measure of CPU time by pro-rating the storage management time according to storage used (CONSes executed); simply crediting garbage collect time to the analysis of the sentence immediately at hand, or alternately neglecting it entirely, would not represent a fair distribution of costs.
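The paper does not spell out the bookkeeping, but the pro-rating it describes might look like the following sketch, assuming per-sentence CPU times and CONS counts have already been recorded (field names are hypothetical).

def fair_cpu_times(records, total_gc_seconds):
    """records: per-sentence dicts with 'cpu' (seconds) and 'conses' counts.
    Charges each sentence a share of total garbage-collection time proportional
    to the CONSes it executed, rather than to whichever sentence triggered a GC."""
    total_conses = sum(r["conses"] for r in records)
    for r in records:
        share = r["conses"] / total_conses if total_conses else 0.0
        r["fair_cpu"] = r["cpu"] + total_gc_seconds * share
    return records

# e.g. two sentences, 12 seconds of GC over the whole run:
runs = [{"cpu": 0.4, "conses": 1500}, {"cpu": 2.1, "conses": 4500}]
print(fair_cpu_times(runs, total_gc_seconds=12.0))
# the second sentence absorbs three quarters of the GC time (9 of the 12 seconds)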
More difficult was the problem of measuring search space. It was not felt that an average branching factor computed for the static grammar would be representative of the search space encountered during the dynamic analysis of sentences. An effort was therefore made to measure the search space actually encountered by the parsers, differentiated into grammar vs. chart search; in the former instance, a further differentiation was based on whether the grammar space was being considered from the bottom-up (discovery) vs. top-down (filter) perspective. Moreover, the time and space involved in analyzing words and idioms and operating the rule-body procedures was separately measured in order to determine the computational effort expended by the parser proper. For the experiment, the translation process was short-circuited; thus only analysis, not transfer and synthesis, was performed.

Summary of the LRC Experiment

The LRC experiment involved the production of eight different instrumented systems -- two parsers (left-corner and Cocke-Kasami-Younger), each with all four combinations of two independent strategy variations (top-down filtering and early constituent tests) -- and eight test runs on the identical set of 262 sentences selected pseudo-randomly from three technical texts supplied by the MT project sponsor. The sentences contained therein may reasonably be expected to constitute a nearly-representative sample of text in that domain, and presumably constitute a somewhat less-representative (but by no means trivial) sample of the types of syntactic structures encountered in more general German text. The usual (i.e., complete) analysis procedures for the purpose of subsequent translation were in effect, which include production of a full syntactic and semantic analysis via phrase-structure rules, feature tests and operations, transformations, and case frames. It was known in advance that not all constructions would be handled by the grammar; further, that for some sentences some or all of the parsers would exhaust the available space before achieving an analysis. The latter problem in particular would indicate differential performance characteristics when working with limited memory. One of the parsers, the version of the CKY parser lacking both top-down filtering and early constituent tests, is essentially identical to the CKY parser employed in the SRI experiments. The experiment was conducted on a DEC 2060; the systems were run separately, late at night, in order to minimize competition with other programs which could confound the results.

The Experimental Results

The various parser and strategy combinations were slightly unequal in their ability to analyze (or, alternately, demonstrate the ungrammaticality of) sentences within the available space. Of the three strategy choices (parser, filtering, constituent tests), filtering constituted the most effective discriminant: the four systems with top-down filtering were 4% more likely to find an interpretation than the four without; but most of this difference occurred within the systems employing the left-corner parser, where the likelihood was 10% greater. The likelihood of deriving an interpretation at all is a matter that must be considered when contemplating application on machines with relatively limited address space. The summaries below, however, have been balanced to reflect a situation in which all systems have sufficient space to conclude the analysis effort, so that the comparisons may be drawn on an equal basis.
Not surprisingly, the data reveal differences between single strategies and between joint strategies, but the differences are sometimes much larger than one might suppose. Top-down filtering overall reduced the number of phrases by 35%, but when combined with CKY without early constituent tests the difference increased to 46%. In the latter case, top-down filtering increased the overall search space by a factor of 46 -- to well over 300,000 nodes per sentence. For the Left-Corner Parser without early constituent tests, the growth rate is much milder -- an increase in search space of less than a factor of 6 for a 42% reduction in the number of phrases -- but the original (unfiltered) search space was over 3 times as large as that of CKY. CKY overall required 84% fewer CONSes than did LCP (considering the parsers alone); for one matched pair of joint strategies, pure LCP required over twice as much storage as pure CKY.

Evaluating the parsers and strategies via CPU time is a tricky business, for one must define and justify what is to be included. A common practice is to exclude almost everything (e.g., the time spent in storage management, paging, evaluating rule-body procedures, building parse trees, etc.). One commonly employed ideal metric is to count the number of trips through the main parser loops. We argue that such practices are indefensible. For instance, the "pure parse times" measured in this experiment differ by a factor of 3.45 in the worst case, but overall run times vary by 46% at most. But the important point is that if one chose the "best" parser on the basis of pure parse time measured in this experiment, one would have the fourth-best overall system; to choose the best overall system, one must settle for the "sixth-best" parser! Employing the loop-counter metric, we can indeed get a perfect prediction of rank-order via pure parse time based on the inner-loop counters; what is more, a formula can be worked out to predict the observed pure parse times given the three loop counters. But such predictions have already been shown to be useless (or worse) in predicting total program runtime. Thus in measuring performance we prefer to include everything one actually pays for in the real computing world: paging, storage management, building interpretations, etc., as well as parse time.

In terms of overall performance, then, top-down filtering in general reduced analysis times by 17% (though it increased pure parse times by 58%); LCP was 7% less time-consuming than CKY; and early constituent tests lost by 15% compared to not performing the tests early. As one would expect, the joint strategy LCP with top-down filtering [ON] and Late (i.e., not Early) Constituent Tests [LCT] ranked first among the eight systems. However, due to beneficial interactions the joint strategy [LCP ON ECT] (which on intuitive grounds we predicted would be most efficient) came in a close second; [CKY ON LCT] came in third. The remainder ranked as follows: [CKY OFF LCT], [LCP OFF LCT], [CKY ON ECT], [CKY OFF ECT], [LCP OFF ECT]. Thus we see that beneficial interaction with ECT is restricted to [LCP ON].

Two interesting findings are related to sentence length. One, average parse times (however measured) do not exhibit cubic or even polynomial behavior, but instead appear linear. Two, the benefits of top-down filtering are dependent on sentence length; in fact, filtering is detrimental for shorter sentences.
Averaging over all other strategies, the break-even point for top-down filtering occurs at about 7 words. (Filtering always increases pure parse time, PPT, because the parser sees it as pure overhead. The benefits are only observable in overall system performance, due primarily to a significant reduction in the time/space spent evaluating rule-body procedures.) With respect to particular strategy combinations, the break-even point comes at about 10 words for [LCP LCT], 6 words for [CKY ECT], 6 words for [LCP LCT], and 7 words for [LCP ECT]. The reason for this length dependency becomes rather obvious in retrospect, and suggests why top-down filtering in the SRI follow-up experiment was detrimental: the test sentences were probably too short.

DISCUSSION

The immediate practical purpose of the SRI experiments was not to stimulate a parser-writing contest, but to determine the comparative merits of parsers in actual use, with the particular aim of establishing a rational basis for choosing one to become the core of a future NLP system. The aim of the LRC experiment was to discover which implementation details are responsible for the observed performance, with an eye toward both suggesting and directing future improvements.

The SRI Parsers

The question of relative efficiency was answered decisively. It would seem that the CKY parser performs better than LIFER due to its much greater speed at finding applicable rules, with either the semantic or the linguistic grammar. CKY certainly performs better than DIAMOND for this reason, presumably due to programmer differences since the algorithms are the same. The question of efficiency gains due to top-down filtering remained open, since it enhanced one implementation but degraded another. Unfortunately, there is nothing in the data which gets at the underlying reasons for the efficiency of the CKY parser.

The LRC Parsers

Predictions of performance with respect to all eight systems are identical, if based on their theoretically equivalent search space. The data, however, display some rather dramatic practical differences in search space. LCP's chart search space, for example, is some 25 times that of CKY; CKY's filter search space is almost 45% greater than that of LCP. Top-down filtering increases search space, hence compute time, in idealized models which bother to take it into account. Even in this experiment, the observed slight reduction in chart and grammar search space due to top-down filtering is offset by its enormous search space overhead of over 100,000 nodes for LCP, and over 300,000 nodes for [CKY LCT], for the average sentence. But the overhead is more than made up in practice by the advantages of greater storage efficiency and particularly the reduced rule-body procedure "overhead." The filter search space with late constituent tests is three times that with early constituent tests, but again other factors combine to reverse the advantage.

The overhead for filtering in LCP is less than that in CKY. This situation is due to the fact that LCP maintains a natural left-right ordering of the rule constituents in its internal representation, whereas CKY does not and must therefore compute it at run time. (The actual truth is slightly more complicated because CKY stores the grammar in both forms, but this caricature illustrates the effect of the differences.) This is balanced somewhat by LCP's greatly increased chart search space; by way of caricature again, LCP is doing some things with its chart that CKY does with its filter.
(That is, LCP performs some "filtering" as a natural consequence of its algorithm.) The large variations in the search space data would lead one to expect large differences in performance. This turns out not to be the case, at least not in overall performance.

CONCLUSIONS

We have seen that theoretical arguments can be quite inaccurate in their predictions when one makes the transition from a worst-case model to an actual, real-world situation. "Order n-cubed" performance does not appear to be realized in practice; what is more, the oft-neglected constants of theoretical calculations seem to exert a dominating effect in practical situations. Arguments about relative efficiencies of parsing methods based on idealized models such as inner-loop counters similarly fail to account for relative efficiencies observed in practice. In order to meaningfully describe performance, one must take into account the complete operational context of the Natural Language Processing system, particularly the expenses encountered in storage management and applying rule-body procedures.

BIBLIOGRAPHY

Aho, A. V., and J. D. Ullman. The Theory of Parsing, Translation, and Compiling, Vol. I. Prentice-Hall, Englewood Cliffs, New Jersey, 1972.

Burton, R. R., "Semantic Grammar: An engineering technique for constructing natural language understanding systems," BBN Report 3453, Bolt, Beranek, and Newman, Inc., Cambridge, Mass., Dec. 1976.

Chester, D., "A Parsing Algorithm that Extends Phrases," AJCL 6 (2), April-June 1980, pp. 87-96.

Earley, J., "An Efficient Context-free Parsing Algorithm," CACM 13 (2), Feb. 1970, pp. 94-102.

Graham, S. L., M. A. Harrison, and W. L. Ruzzo, "An Improved Context-Free Recognizer," ACM Transactions on Programming Languages and Systems, 2 (3), July 1980, pp. 415-462.

Griffiths, T. V., and S. R. Petrick, "On the Relative Efficiencies of Context-free Grammar Recognizers," CACM 8 (5), May 1965, pp. 289-300.

Grosz, B. J., "Focusing in Dialog," Proceedings of Theoretical Issues in Natural Language Processing-2: An Interdisciplinary Workshop, University of Illinois at Urbana-Champaign, 25-27 July 1978.

Hendrix, G. G., "Human Engineering for Applied Natural Language Processing," Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, Mass., Aug. 1977.

Hendrix, G. G., E. D. Sacerdoti, D. Sagalowicz, and J. Slocum, "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems, 3 (2), June 1978, pp. 105-147.

Lehmann, W. P., W. S. Bennett, J. Slocum, et al., "The METAL System," Final Technical Report RADC-TR-80-374, Rome Air Development Center, Griffiss AFB, New York, Jan. 1981. Available from NTIS.

Paxton, W. H., "A Framework for Speech Understanding," Tech. Note 142, AI Center, SRI International, Menlo Park, Calif., June 1977.

Pratt, V. R., "LINGOL: A progress report," Proceedings of the Fourth International Joint Conference on Artificial Intelligence, Tbilisi, Georgia, USSR, 3-8 Sept. 1975, pp. 422-428.

Robinson, J. J., "DIAGRAM: A grammar for dialogues," Tech. Note 205, AI Center, SRI International, Menlo Park, Calif., Feb. 1980.

Sacerdoti, E. D., "Language Access to Distributed Data with Error Recovery," Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., Aug. 1977.

Slocum, J., "An Experiment in Machine Translation," Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, June 1980, pp. 163-167.

Walker, D. E. (ed.),
Understanding Spoken Language. North-Holland, New York, 1978.

Woods, W. A., "Syntax, Semantics, and Speech," BBN Report 3067, Bolt, Beranek, and Newman, Inc., Cambridge, Mass., Apr. 1975.
What Makes Evaluation Hard?

Harry Tennant
PO Box 225621, M/S 371
Texas Instruments, Inc.
Dallas, Texas 75265

1.0 THE GOAL OF EVALUATION

Ideally, an evaluation technique should describe an algorithm that an evaluator could use that would result in a score or a vector of scores that depict the level of performance of the natural language system under test. The scores should mirror the subjective evaluation of the system that a qualified judge would make. The evaluation technique should yield consistent scores for multiple tests of one system, and the scores for several systems should serve as a means for comparison among systems. Unfortunately, there is no such evaluation technique for natural language understanding systems. In the following sections, I will attempt to highlight some of the difficulties.

2.0 PERSPECTIVE OF THE EVALUATION

The first problem is to determine who the "qualified judge" is whose judgements are to be modeled by the evaluation. One view is that he be an expert in language understanding. As such, his primary interest would be in the linguistic and conceptual coverage of the system. He may attach the greatest weight to the coverage of constructions and concepts which he knows to be difficult to include in a computer program. Another view of the judge is that he is a user of the system. His primary interest is in whether the system can understand him well enough to satisfy his needs. This judge will put greatest weight on the system's ability to handle his most critical linguistic and conceptual requirements: those used most frequently and those which occur infrequently but must be satisfied. This judge will also want to compare the natural language system to other technologies. Furthermore, he may attach strong weight to systems which can be learned quickly, or whose use may be easily remembered, or which take time to learn but provide the user with considerable power once they are learned. The characteristics of the judge are not an impediment to evaluation, but if the characteristics are not clearly understood, the meaning of the results will be confused.

3.0 TESTING WITH USERS

3.1 Who Are The Users?

It is surprising to think that natural language research has existed as long as it has and that the statement of the goals is still as vague as it is. In particular, little commitment is made on what kind of user a natural language understanding system is intended to serve. In particular, little is specified about what the users know about the domain and the language understanding system. The taxonomy below is presented as an example of user characteristics based on what the user knows about the domain and the system.

Classes of Users of database query systems
V   Familiar with the database and its software
IV  Familiar with the database and the interaction language
III Familiar with the contents of the database
II  Familiar with the domain of application
I   Passing knowledge of the domain of application

Of course, as users gain experience with a system, they will continually attempt to adapt to its quirks. If the purpose of the evaluation is to demonstrate that the natural language understanding system is merely useable, adaptation presents no problem. However, if natural language is being used to allow the user to express himself in his accustomed manner, adaptation does become important. Again, the goals of natural language systems have been left vague.
Are natural language systems to be 1) immediately useful, 2) easily learned, 3) highly expressive, or 4) readily remembered through periods of disuse? The evaluation should attempt to test for these goals specifically, and must control for factors such as adaptation. What a user knows (either through instruction or experience) about the domain, the database and the interaction language has a significant effect on how he will express himself. Database query systems usually expect a certain level of use of domain or database specific jargon, and familiarity with constructions that are characteristic of the domain. A system may perform well for class IV users with queries like,

1) What are the NORMU for AAFs in 71 by month?

However, it may fare poorly for class I users with queries like,

2) I need to find the length of time that the attack planes could not be flown in 1971 because they were undergoing maintenance. Exclude all preventative maintenance, and give me totals for each plane for each month.

3.2 What Does Success Rate Mean?

A common method for generating data against which to test a system is to have users use it, then calculate how successful the system was at satisfying user needs. If the evaluation attempts to calculate the fraction of questions that the system understood, it is important to characterize how difficult the queries were to understand. For example, twelve queries of the form,

3) How many hours of down time did plane 3 have in January, 1971?

4) How many hours of down time did plane 3 have in February, 1971?

will help the success rate more than one query like,

5) How many hours of down time did plane 3 have in each month of 1971?

However, one query like 5 returns as much information as the other twelve. In testing PLANES [Tennant, 1981], the users whose questions were understood with the highest rates of success actually had less success at solving the problems they were trying to solve. They spent much of their time asking many easy, repetitive questions and so did not have time to attempt some of the problems. Other users who asked more compact questions had plenty of time to hammer away at the queries that the system had the greatest difficulty understanding.

Another difficulty with success rate measurement is the characteristics of the problems given to users compared to the kind of problems anticipated by the system. I once asked a set of users to write some problems for other users to attempt to solve using PLANES. The problem authors were familiar with the general domain of discourse of PLANES, but did not have any experience using it. The problems they devised were reasonable given the domain, but were largely beyond the scope of PLANES' conceptual coverage. Users had very low success rates when attempting to solve these problems. In contrast, problems that I had devised, fully aware of PLANES' areas of most complete coverage (and devised to be easy for PLANES), yielded much higher success rates. Small wonder. The point is that unless the match between the problems and a system's conceptual coverage can be characterized, success rates mean little.

4.0 TAXONOMY OF CAPABILITIES

Testing a natural language system for its performance with users is an engineering approach. Another approach is to compare the elements that are known to be involved in understanding language against the capabilities of the system. This has been called "sharpshooting" by some of the implementers of natural language systems.
An evaluator probes the system under test to find conditions under which it fails. To make this an organized approach, the evaluator should base his probes on a taxonomy of phenomena that are relevant to language understanding. A standard taxonomy could be developed for doing evaluations. Our knowledge of language is incomplete at best. Any taxonomy is bound to generate disagreement. However, it seems that most of the disagreements in describing language are not over what the phenomena of language are, but over how we might best understand and model those phenomena. The taxonomy will become quite large, but this is only representative of the fact that understanding language is a very complex process. The taxonomy approach faces the problem of complexity directly.

The taxonomy approach to evaluation forces examination of the broad range of issues of natural language processing. It provides a relatively objective means for assessing the full range of capabilities of a natural language understanding system. It also avoids the problems listed above inherent in evaluation through user testing. It does, however, have some unpleasant attributes. First, it does not provide an easy basis for comparison of systems. Ideally an evaluation would produce a metric to allow one to say "system A is better than system B". Appealing as it is, natural language understanding is probably too complex for a simple metric to be meaningful. Second, the taxonomy approach does not provide a means for comparison of natural language understanding to other technologies. That comparison can be done rather well with user testing, however. Third, the taxonomy approach ignores the relative importance of phenomena and the interaction between phenomena and domains of discourse. In response to this difficulty, an evaluation should include the analysis of a simulated natural language system. The simulated system would consist of a human interpreter who acts as an intermediary between users and the programs or data they are trying to use. Dialogs are recorded, then those dialogs are analyzed in light of the taxonomies of features. In this way, the capabilities of the system can be compared to the needs of the users. The relative importance of phenomena can be determined this way. Furthermore, users' language can be studied without them adapting to the system's limitations.

The taxonomy of phenomena mentioned above is intended to include both linguistic phenomena and concepts. The linguistic phenomena relate to how ideas may be understood. There is an extensive literature on this. The concepts are the ideas which must be understood. This is much more extensive, and much more domain specific. Work in knowledge representation is partially focused on learning what concepts need to be represented, then attempting to represent them. Consequently, there is a taxonomy of concepts implicit in the knowledge representation literature.

Reference

Tennant, Harry. Evaluation of Natural Language Processors. Ph.D. Thesis, University of Illinois, Urbana, Illinois, 1981.
EVALUATION OF NATURAL LANGUAGE INTERFACES TO DATA BASE SYSTEMS

Bozena Henisz Thompson
California Institute of Technology

INTRODUCTION

Is evaluation, like beauty, in the eye of the beholder? The answer is far from simple because it depends on who is considered to be the proper beholder. Evaluators may range from casual users to society as a whole, with system builders, sophisticated users, linguists, grant providers, system buyers, and others in between. The members of this panel are system builders and linguists -- or rather the two fused into one -- but, I believe, interested in all or almost all actual or potential bodies of evaluators. One of our colleagues expressed a forceful opinion while being a member of a similar panel at last year's ACL conference: "Those of us on this panel and other researchers in the field simply don't have the right to determine whether a system is practical. Only the users of such a system can make that determination. Only a user can decide whether the NL [natural language] capability constitutes sufficient added value to be deemed practical. Only a user can decide if the system's frequency of inappropriate response is sufficiently low to be deemed practical. Only a user can decide whether the overall NL interaction, taken in toto, offers enough benefits over alternative formal interactions to be deemed practical" [1]. It is hard for me to disagree, since I argued as forcefully on the basis of my study of users' evaluation of machine translation [2] -- a study which was prompted by the evaluations of the quality of machine translation as viewed by linguists and users, ranging from 35% acceptable for the former to 90% for the latter. What the study also showed was that the practicality of the output could indeed only be judged by the users, since even incomplete and stylistically very inelegant translations were found quite useful in practice because they, on the one hand, provided, however crudely, the information sought by the users, and, on the other hand, the users themselves brought knowledge that made the texts far more understandable and useful than might appear to a nonspecialist linguist. But this endorsement on my part of the user as the ultimate judge in evaluations does not preclude my fully subscribing to Norm Sondheimer's [3] introductory comments to this panel stating that to "make progress as a field, we need to be able to evaluate." We are now less likely to confuse the issue of the evaluation by people like ourselves and the judgment of the users, less likely to be surprised at the discrepancies, and less likely to be surprised at the users' acceptance of the limitations of our NL interfaces. Also, we are far more aware of the fact that evaluations of "worth" or "quality" have to be conducted in the contexts of the actual, perceived needs. In extensive studies on evaluation of innovations, Mosteller [4], the recently retired president of AAAS, found that "successful innovators better understand user needs; [and] pay more attention to marketing...." The same source, however, leads me to the notorious difficulties of evaluation given the wide range of evaluators and their purposes. We are all undoubtedly convinced of the value of NLI for the society as a whole, but the evaluation of experiments with these interfaces is another matter. Mosteller was faced with social, sociomedical, and medical fields. Let me recount some of the studies he and his team made, for reasons which will soon become obvious.
His team scored a given program on a scale from plus two to minus two, with zero meaning there was essentially no gain. Accordingly, a study of delinquent girls that identified them but failed to prevent them from delinquency received a zero. Likewise, a zero was assigned to a probation experiment for conviction for public drunkenness in which three methods were used: (1) no treatment, (2) an alcoholic clinic, and (3) Alcoholics Anonymous. Since the "no treatment" group performed somewhat better, short-term referrals were considered of no value. A minus one was given to a study whose results were opposite to those hoped for: a major insurance company increased outpatient benefits in the hope of decreasing hospital costs, but the outpatient group's hospital stays increased. Finally, a double plus was awarded to an experiment involving the Salk vaccine, which was, predictably, very successful. Now this kind of evaluation may be justified when the needs of the society are at stake. I have gone into these details, however, for the purpose of expressing the opinion, in which I know I'm not alone, that negative results are as important as positive ones, that evaluation in our case is almost equivalent to the amount of information obtained in an experiment. An experiment whose results would be totally predictable would be almost useless, but one with results different from those hoped for might be embarrassing but very valuable. Another comment prompted by those evaluations is that the application of any rigid, fine scale is totally inappropriate in the case of NLI evaluations.

NLI EVALUATIONS

A. METHODOLOGY AND SOME RESULTS

It had been widely taken for granted some time ago that an NLI is as good as is its grammar, and a grammar is as good as it is extensive. The specific needs of users, the requirements of special tasks and the like took a back seat. The nature of human discourse was yet to be explored. Happily, we have been in a different situation for some time. When the REL [5, 6, 7] system was getting into a reasonably sturdy shape with respect to speed and bugs, I started planning experiments to test it. There was important literature about discourse, especially in sociology, such as the work of Schegloff. It was thus clear that successful NLI experiments had to be based on knowledge of human discourse. It was also clear that that was the way to make the interface more natural. This assumption has already been fruitful: the NL interface in POL [9], a successor to REL, has already been extensively improved as a result of the REL-related experiments.

Experiments were made in three modes: in addition to face-to-face and human-to-computer, terminal-to-terminal communication was examined, since at present that is the only practical mode of accessing the computer. Through early 1980, over 80 subjects, 80,000 words, and over 50 hours were analyzed in great detail. In the fall of 1980, another 13 subjects were tested in the computational mode only, adding approximately 20 hours. From the start, the experiments were encouraging, although limited to two modes: F-F and T-T. Interactions not only showed a great deal of structure but extensive similarities in both modes, the most important being the constancy of the number of words in sentences (about 70%); the length of sentences (about 7 words); the existence of fragments (70% of messages in F-F and 50% in T-T containing them); and phatics (10% of total for F-F and 5% for T-T).
Thus similarities between the modes were a candidate for consideration in experiments in the computational mode, the T-T mode being seemingly quite far removed from natural F-F. The sentence having historically been the unit of analysis (and since phatics were considered of lesser importance from the computational view, although of great interest in general), my attention turned to fragments. REL allowed for three non-sentence type structures: "NP?" (including number parsed into NP); "all/none or number" answers; and definitions introducible by the user, which make it possible to include individual knowledge and terminology. The analysis of F-F and T-T protocols, however, showed the existence of other fragment categories, finally analyzed into a dozen categories (see [8]). Since they constitute a considerable amount of F-F conversations and even T-T protocols, they clearly had to be watched for in computational experiments.

The experiments for actually observing user-system interaction were conducted in the winter term of 1979/80 and produced 21 protocols, the analysis of which was compared with results of eight F-F and four T-T experiments. Another 13 computational experiments done in the fall confirmed the results of the earlier ones. The task in all three modes was a real one: loading cargo onto a ship, the data coming from the actual environment of loading U.S. Navy ships by a group in San Diego, California. In the F-F and T-T experiments, two persons were involved -- one given cargo items to be loaded, the other information about decks (details in [8]). In the computational mode (H-C) the ship data was in the computer and the list of cargo to be loaded was handed to the subjects, all with Caltech background. Details being available elsewhere and space limited here, only some major results are given here. Table 1 shows the comparison of the three modes.

TABLE 1

                               F-F       T-T       H-C
Sentence length                6.8       6.1       7.8
Message length                 9.5      10.3       7.0
Fragment length                2.7       2.8       2.8
% words in sentences          68.8      72.8      89.3
% words in fragments          17.2      21.1      10.7

                          Total  Avg.   Total  Avg.   Total  Avg.
Messages                   5574   697     310    78    1093    52
Parsed & nonparsed                                      1615    77
Sentences                  5302   663     385    77     882    42
Fragments                  3253   402     230    58     211    10
Phatics (including
  connectors & tags)       4842   605     148    37      46     2

                          Total         Total         Total
Words in messages         49800          3285          8525
Words in sentences        34266          2393          6880
Words in fragments         8584           694           823

As can be seen, several statistics show similarities: sentence length, message length, fragment length, percentage of words in sentences and fragments. The closeness of the average of messages in T-T and parsed and nonparsed inputs in H-C is striking.

Table 2 (the meaning of abbreviations is given below the table) deals with fragments. It is mostly self-explanatory, as is the absence of definitions from F-F and T-T (although some abbreviations used there fall in this category) and the absence of some other categories from T-T and H-C. At least two comments, however, are necessary. The surprisingly low use of terse questions in H-C may be accounted for by the tendency toward a formal style in computational interaction. The definitions used were often of quite complex character, although far fewer than could be hoped for, due apparently to lack of familiarity with this capability. The complex character of definitions undoubtedly had some effect on the length of sentences in the H-C mode.

TABLE 2

                    F-F             T-T             H-C
                Total    %      Total    %      Total    %
E                 532   16.4       10    4.3
ADD               425   13.1       41   17.8
CORR               56    1.7
COMP               95    2.9        2     .9
SELF              114    3.5
TR                571   17.6       67   29.1
TQ                411   12.5       31   13.4
TI                297    9.1       48   20.9
FS                413   12.7       23   10.0
TRUN              339   10.4        9    3.9
DEF
P                4842              148
C                1935               34
T                  31
                                                    91   37.8    67   27.8    30   12.4    53   22.0

Abbreviations

E (Echo): An exact or partial repetition of usually the other speaker's string. Often an NP, but it may be an elliptical structure of various forms.

ADD (Added Information): An elliptical structure, often NP, used to clarify or complete a previous utterance, often one's own, e.g., "It doesn't say anything here about weight, or breaking things down. Except for crushables.", "It's smaller. 36"x20"x17"." Spelling out words was included here.

CORR (Correction): This may be done by either speaker. If done by the same speaker it is related to false start, but semantic considerations suggest a correction, e.g., "Those are 30, uh, 48 length by 40 width by 14 height."

COMP (Completion): Completion of the other speaker's utterance, distinguished from interruption by the cooperative nature of the utterance, e.g., "A: I've got a lot of...I've got  B: two pages.  A: Yeah."

SELF (Talking to Oneself): Mutterings, even to the point of undecipherability, not intended for the other person.

TR (Terse Reply): An elliptical reply, often NP, e.g., "No.", "Probably meters.", "50 and 7.62."

TQ (Terse Question): An elliptical question, often NP, e.g., "Why?", "How about pyrotechnics?", "Which ones?"

TI (Terse Information): A rather elusive category, neither question, reply nor command, an elliptical statement but one often requiring an action.

FS (False Start): These are also abandoned utterances, but immediately followed by usually syntactically and semantically related ones, e.g., "They may, they may be identical classes.", "Well, the height, the next largest height I've got is 34."

TRUN (Truncated): An incomplete utterance, voluntarily abandoned.

DEF (Definition): E.g., "Define: ED: each deck of the Alamo."

P (Phatics): The largest subgroup of fragments, whose name is borrowed from Malinowski's term "phatic communion", with which he referred to those vocal utterances that serve to establish social relations rather than the direct purpose of communication. This term has been broadened to include all fragments which help keep the channel of communication open, such as "Well", "Wait", and even "You turkey". Two subcategories of phatics are:

C (Dialogue Connectors): Words such as "Then", "And", "Because" (at the beginning of a message or utterance).

T (Tag Questions): E.g., "They're all under 60, aren't they?"

B. SYSTEM PERFORMANCE, SYNTAX USED, SPECIAL STRATEGIES, AND ERROR ANALYSIS

System performance can obviously be evaluated in a number of ways, but without good response time meaningful experiments are impossible. When much data is involved in processing, a delay of a few minutes can probably be tolerated, but the vast majority of requests should be responded to within seconds. The latter was the case in my experiments. Fairly complex messages of about 12 words were responded to in about 10 seconds. The system clearly has to be reasonably free of bugs -- in my case, 12 bugs were hit in the total of 1615 parsed and nonparsed messages. The adequate extent of natural language syntax is impossible to determine. Table 3 shows the syntax used by my subjects.
TABLE 3: SENTENCE TYPES

                                                                 Total      %
All sentences                                                      882
Simple sentences, e.g., "List the decks of the Alamo."             651   73.8
Sentences with pronouns, e.g., "What is its length?",
  "What is in its pyrotechnic locker?"                              30    3.4
Sentences with quantifier(s), e.g., "List the class of
  each cargo."                                                      71    8.0
Sentences with conjunctions, e.g., "What is the maximum
  stow height and bale cube of the pyrotechnic locker
  of the AL?"                                                       88   10.0
Sentences with quantifier and conjunction(s), e.g., "List
  hatch width and hatch length of each deck of the Alamo."          23    2.6
Sentences with relative clause, e.g., "List the ships that
  have water."                                                       6     .7
Sentences with relative clause (or related construction)
  and comparator, e.g., "List the ships with a beam less
  than 1000."                                                        6     .7
Sentences with quantifier and relative clause, e.g., "List
  height of each content whose class is class IV."                   2    .23
Sentences with quantifier, conjunction and relative clause,
  e.g., "List length, width and height of each content
  whose class is ammunition."                                        2    .23
Sentences with quantifiers and comparator, e.g., "How many
  ships have a beam greater than 1000?"                              3    .34

Wh-questions                                                             75.0
Yes/no questions                                                          1.0
Commands                                                                 19.0
Statements (data addition)                                                5.0

Considering the wide range of REL syntax [7], the paucity of complex sentences is surprising. The use of definitions, which often involved complex constructions (relative clauses, conjunctions, even quantifiers), had a definite influence. So did, undoubtedly, the task situation causing optimization of work methods. The influence of the specific nature of the task would require additional studies, but the special device provided by the system (a loading prompt sequence -- which was not analyzed) was employed by every subject. Devices such as these obviously are a great aid in accomplishing tasks. They should be tested extensively to determine how they can augment the naturalness of NLIs. Other reasons for the relatively simple syntax used were special strategies: paraphrasing into simpler syntax even though a sentence did not parse for other reasons; "success strategy" resulting in repetitious simple sentences; or possibly just "baby talk" due to the suspicion of the computer's limitations.

An interesting fact to note is that similar results with respect to syntax were obtained in the experiments with USL, the "sister system" of REL developed by IBM Heidelberg [10] -- with German used as the NLI in two studies of high school students: predominance of wh-questions (317 in a total of 451); not many relative clauses (66); commands (35); conjunctions (26); quantifiers (15); definitions (11); comparisons (2); yes/no questions (1).

An evaluation which would not include an analysis of unparsed input would at best be of limited value. It was shown in Table 1 that 1093 out of 1615, or about two thirds, were parsed in my experiments.

TABLE 4

                        Total      %
Vocabulary                161   36.1
Punctuation                72   16.1
Syntax                     62   13.9
Spelling                   61   13.6
Transmission               32    7.2
Definition format          30    6.7
Lack of response           16    3.6
Bugs                       12    2.7

Table 4 summarizes the categories of errors. The predominance of vocabulary is not surprising, but relatively few syntactic errors are. In part this may be due to the method of scoring, in which errors were counted only once, so if a sentence contained an unknown vocabulary item (e.g., "On what decks of the Alamo cargo be stored?") but would have failed on syntactic grounds as well, it would fall in the vocabulary category.
A comparison can be made here with Damerau's study [11] of the use of the TQA system by the city planning department in White Plains, at least with regard to the total of queries to those completed: 788 to 513. So, again, roughly two thirds were parsed. In other categories, "parsing failure" is 147, "lookup failures" 119, "nothing in data base" 61, "program error" 39, but this only points to the general difficulties of comparisons of system performance.

SOME CONCLUSIONS

Norm Sondheimer suggested some questions we might try to answer. What has been learned about user needs? What are the most important linguistic phenomena to allow for? What other kinds of interactions? Error analysis points in the obvious directions of user needs, and so do the types of sentences employed. While it is justified to quit the search for an almost perfect grammar, it would be a mistake to constrain it to the constructions used. Improved naturalness can be achieved with diagnostics, definitions, and devices geared to specific tasks such as special prompting sequences. Some tasks clearly require math in the NLI.

How good are systems? An objective measurement is probably impossible, but the percentage of requests processed might give some idea. In the case of a task situation such as loading cargo items, the percentage of task completion may signal both system performance and user satisfaction. System response times are a very important measure. The questionnaire method can and has been used (in the case of MT and USL), but as yet there is too little experience to measure user satisfaction. Users seem very good at adapting to systems. They paraphrase, use a success strategy, simplify syntax, use special devices: what they really do is maximize their performance with respect to a given task.

What have we learned about running evaluations? It is important to know what to look for, hence the need for good knowledge of human-to-human discourse. Good system response times are a sine qua non. Controlled experiments have the advantage of being replicable, a crucial factor in arriving at evaluation criteria. Determining user bias and experience may be important, but even more so is user training. Controlled experiments can show what methods are most effective (e.g. a manual or study of protocols). Study of user comments, phatic material, gives some measure of user (dis)satisfaction (I have seen "You lie," but I have yet to see "Good boy, you!"). Clearly, the best indication of user satisfaction is whether he or she uses the system again. Extensive long-term studies are needed for that.

What should the future look like? Task-oriented situations seem to be a promising environment for NLI. The standards of NL systems performance will be set by the users. Future evaluations? As Antoine de Saint-Exupery wrote, "As for the future, your task is not to foresee, but to enable it."

REFERENCES

1. Harris, Larry R. "Prospects of Practical Natural Language Systems." Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, June 1980, p. 129.
2. Henisz-Dostert, B.; Macdonald, R. R.; and Zarechnak, M. Machine Translation. The Hague: Mouton, 1979.
3. Sondheimer, N. K. "Evaluation of Natural Language Interfaces to Data Base Systems." Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, June 1981.
4. Mosteller, F. "Innovation and Evaluation." Science (February 27, 1981): 881-886.
5. Thompson, F. B. and Thompson, Bozena H. "Practical Natural Language Processing: The REL System as Prototype." In Advances in Computers, ed. M. Rubinoff and M. C. Yovits. Vol. 13. New York: Academic Press, 1975.
6. Thompson, Bozena H. and Thompson, F. B. "Rapidly Extendable Natural Language." Proceedings of the 1978 National Conference of the ACM, pp. 173-182.
7. Thompson, Bozena H. REL English for the User. Pasadena: California Institute of Technology, 1978.
8. Thompson, Bozena H. "Linguistic Analysis of Natural Language Communication with Computers." COLING 80: Proceedings of the 8th International Conference on Computational Linguistics, Tokyo, October 1980, pp. 190-201.
9. Thompson, Bozena H. and Thompson, F. B. "Shifting to a Higher Gear in a Natural Language System." Proceedings of the National Computer Conference, May 1981.
10. Lehmann, Hubert; Ott, Nikolaus; Zoeppritz, Magdalena. "User Experiments with Natural Language for Data Base Access." COLING 78: Proceedings of the 7th International Conference on Computational Linguistics, Bergen, August 1978.
11. Damerau, Fred J. The Transformational Question Answering (TQA) System: Operational Statistics - 1978. RC 7739. Yorktown Heights: IBM T. J. Watson Research Center, June 1979.
TWO DISCOURSE GENERATORS
William C. Mann
USC Information Sciences Institute

WHAT IS DISCOURSE GENERATION?

The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text.

Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.

For Artificial Intelligence, discourse generation is an unsolved problem. There have been only token efforts to date, and no one has addressed the whole problem. Still, those efforts reveal the nature of the task, what makes it difficult and how the complexities can be controlled. In comparing two AI discourse generators here we can do no more than suggest opportunities and attractive options for future exploration. Hopefully we can convey the benefits of hindsight without too much detailed description of the individual systems. We describe them only in terms of a few of the techniques which they employ, partly because these techniques seem more valuable than the system designs in which they happen to have been used.

THE TWO SYSTEMS

The systems which we study here are PROTEUS, by Anthony Davey at Edinburgh [Davey 79], and KDS by Mann and Moore at ISI [Mann and Moore 80]. As we will see, each is severely limited and idiosyncratic in scope and technique. Comparison of their individual skills reveals some technical opportunities.

Why do we study these systems rather than others? Both of them represent recent developments, in Davey's case recently published. Neither of them has the appearance of following a hand-drawn map or some other humanly-produced sequential presentation. Thus their performance represents capabilities of the programs more than capabilities of the programmer. Also, they are relatively unfamiliar to the AI audience. Perhaps most importantly, they have written some of the best machine-produced discourse of the existing art.

First we identify particular techniques in each system which contribute strongly to the quality of the resulting text. Then we compare the two systems, discussing their common failings and the possibilities for creating a system having the best of both.

DAVEY'S PROTEUS

PROTEUS creates commentary on games of tic-tac-toe (noughts and crosses). Despite the apparent simplicity of this task, the possibilities of producing text are rich and diverse. (See the example in the Appendix.) The commentary is intended both to convey the game (except for insignificant variations of rotation and reflection), and also to convey the significance of each move, including showing errors and missed opportunities. PROTEUS can be construed as consisting of three principal processors, as shown in Figure 1.
Move characterization employs a ranked set of move generators, each identified as defensive or offensive, and each identified further with a named tactic such as blocking, forking or completing a win. A move is characterized as being a use of the tactic which is associated with the highest-ranked move generator which can generate that move in the present situation. The purpose of move characterization is to interpret the facts so that they become significant to the reader. (Implicitly, the system embodies a theory of the significance of facts.)

[Figure 1: Principal Processors of PROTEUS. A game transcript flows in, through move characterization, contrast and sentence-scope determination, and sentence generation, and the commentary flows out.]

Contrast arises between certain time-adjacent moves and also between an actual move and alternative possibilities at the same point. For example:
- Best move vs. actual move: The move generators are used to compute the "best" move, which is compared to the actual one. If the move generator for the best move has higher rank than any generator proposing the actual move, then the actual move is treated as a mistake, putting the best move and the actual move in contrast.
- Threat vs. block: A threat contrasts with an immediately following block. This contrast is a fixed reflex of the system. It seems acceptable to mark any goal pursuit followed by blocking of the goal as contrastive.

Sentence scope is determined by several heuristic rules, including:
1. Express as many contrasts as possible explicitly. (This leads to immediate selection of words such as "but" and "however".)
2. Limit sentences to 3 clauses.
3. Put as many clauses in a sentence as possible.
4. Express only the worst of several mistakes.
The main clause structure is built before entering the grammar.

Both the move characterization process and the use of contrasts as the principal determiner of sentence scope contribute a great deal to the quality of the resulting text. However, Davey's central concern was not with these two processes but with the third one, sentence generation. His system includes an elaborate Systemic Grammar, which he describes in detail in [Davey 79]. The grammar draws on work of Halliday [Halliday 76], Hudson [Hudson 71], Winograd [Winograd 72], Sinclair [Sinclair 72], Huddleston [Huddleston 71] and E. K. Brown, following Hudson most closely.

Hudson's work offers a number of significant advantages to anyone considering implementing a discourse generation system.
1. Comprehensiveness: Its coverage of English is more extensive than comparable work.
2. Explicitness: The rules are spelled out in full in formal notation.
3. Unity: Since the grammar is defined in a single publication with a single authorship, the issues of compatibility of parts are minimized.

It is interesting that Davey does not employ the Systemic Grammar derivation rules at the highest level. Although the grammar is defined in terms of the generation of sentences, Davey enters it at the clause level with a sentence description which conforms to Systemic Grammar but was built by other means. A sentence at this level is composed principally of clauses, but the surface conjunctions have already been chosen. Although Davey makes no claim, this may represent a general result about text generation systems: above some level of abstraction in the text planning process, planning is not conditioned by the content of the grammar. The obvious place to expect planning to become independent of the grammar is at the sentence level.
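To make the division of labour concrete, here is a minimal sketch, in Python, of the kind of clause-grouping step just described. It is my own illustration, not Davey's code: the Clause record, the contrasts_with_previous flag and the greedy grouping policy are assumptions standing in for PROTEUS's actual contrast bookkeeping, and the grouping only approximates heuristics 1-3 above.

```python
# A toy clause-grouping step that runs before any grammar is entered:
# clause-sized messages are packed into sentences of at most three clauses,
# keeping contrastively linked clauses together so "but"/"however" can be used.
from dataclasses import dataclass
from typing import List

@dataclass
class Clause:
    text: str
    contrasts_with_previous: bool   # e.g. a block immediately following a threat

def plan_sentences(clauses: List[Clause], max_clauses: int = 3) -> List[List[Clause]]:
    """Greedily group clauses; start a new sentence when the clause limit is
    reached or when no contrast links the clause to the current sentence."""
    sentences: List[List[Clause]] = []
    current: List[Clause] = []
    for clause in clauses:
        if current and (len(current) >= max_clauses or not clause.contrasts_with_previous):
            sentences.append(current)
            current = []
        current.append(clause)
    if current:
        sentences.append(current)
    return sentences

moves = [
    Clause("I threatened you by taking the middle of the edge", False),
    Clause("but you blocked it", True),
    Clause("and threatened me", True),
    Clause("I blocked your diagonal", False),
    Clause("and forked you", True),
]
for group in plan_sentences(moves):
    print(", ".join(c.text for c in group) + ".")
```

The point of the sketch is only that such grouping decisions, including the choice of top-level conjunctions, can be made entirely outside the grammar.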
But in both PROTEUS and KDS, operations independent of the grammar extend down to the level of independent clauses within sentences. Top-level conjunctions are not within such clauses, so they are determined by planning processes before the grammar is entered.

It would be extremely awkward to implement Davey's sentence scope heuristics in a systemic grammar. The formalism is not well suited for operations such as maximizing the total number of explicit contrastive elements. However, the problem is not just a problem with the formalism; grammars generally do not deal with this sort of operation, and so are poorly equipped to do so.

Although the computer scientist who tries to learn from [Davey 79] will find that it presents difficulties, the underlying system is interesting enough to be worth the trouble. Davey's implementation generally appears to be orthodox, conforming to [Hudson 71]. Davey regularizes some of the rules toward type uniformity, and thus reduces the apparent correspondence to Hudson's formulations. However, the linguistic base does not appear to have been compromised by the implementation. One of the major strengths of the work is that it takes advantage of a comprehensive, explicit and linguistically justified grammar.

Text quality is also enhanced by some simple filtering (of what will be expressed) based on dependencies between known facts. Some facts dominate others in the choice of what to say. If there is only one move on the board having a certain significance, say "threat", then the move is described by its significance alone, e.g. "you threatened me" without location information, since the reader can infer the locations. Similarly, only the most significant defensive and offensive aspects of a move are described even though all are known. The resulting text is vivid and of good quality. Although there are awkwardnesses, the immense advantage conferred by using a sophisticated grammar prevails.

MANN AND MOORE'S KDS

Major Modules of KDS

Space precludes a thorough description of KDS, but fuller descriptions are available [Mann and Moore 80], [Mann 79], [Moore 79]. KDS consists of five major modules, as indicated in Figure 2. A Fragmenter is responsible for extracting the relevant knowledge from the notation given to it and dividing that knowledge into small expressible units, which we call fragments or protosentences. A Problem Solver, a goal-pursuit engine in the AI tradition, is responsible for selecting the presentational style of the text and for imposing the gross organization onto the text according to that style. A Knowledge Filter removes protosentences that need not be expressed because they would be redundant to the reader.
Figure 2: KDS Module Responsibilities
FRAGMENTER: extraction of knowledge from external notation; division into expressible clauses
PROBLEM SOLVER: style selection; gross organization of text
KNOWLEDGE FILTER: cognitive redundancy removal
HILL CLIMBER: composition of concepts; sentence quality seeking
SURFACE SENTENCE MAKER: final text creation

The largest and most interesting module is the Hill Climber, which has three responsibilities: to compose complex protosentences from simple ones, to judge relative quality among the units resulting from composition, and to repeatedly improve the set of protosentences on the basis of those judgments so that it is of the highest overall quality. Finally, a very simple Surface Sentence Maker creates the sentences of the final text out of protosentences.

The data flow of these modules can be thought of as a simple pipeline, each module processing the relevant knowledge in turn. The principal contributors to the quality of the output text are:

1. The Fragment and Compose Paradigm: The information which will be expressed is first broken down into an unorganized collection of subsentential (approximately clause-level) propositional fragments. Each fragment is created by methods which guarantee that it is expressible by a sentence (usually a very short one. This makes it possible to organize the remainder of the processing so that the text production problem is treated as an improvement problem rather than as a search for feasible solutions, a significant advantage.) The fragments are then organized and combined in the remaining processing.

2. Aggregation Rules: Clause-combining patterns of English are represented in a distinct set of rules. The rules specify transactions on the set of propositional fragments and previous aggregation results. In each transaction several fragments are extracted and an aggregate structure (capable of representation as a sentence) is inserted. A representative rule, named "Common Cause," shows how to combine the facts for "Whenever C then X" and "Whenever C then Y" into "Whenever C then X and Y" at a propositional level.

3. Preference Assessment: Every propositional fragment or aggregate is scored using a set of scoring rules. The score represents a measure of sentence quality.

4. Hill Climbing: Aggregation and Preference Assessment are alternated under the control of a hill-climbing algorithm which seeks to maximize the overall quality of the collection, i.e. of the complete text. This allows a clean separation of the knowledge of what could be said from the choice of what should be said.

5. Knowledge Filtering: Propositions identified by an explicit model of the reader's knowledge as known to the reader are not expressed.

The knowledge domain of KDS' largest example is a Fire Crisis domain, the knowledge of what happens when there is a fire in a computer room. The task was to cause the reader, a computer operator, to know what to do in all contingencies of fire.

SYSTEM COMPARISONS

The most striking impression in comparing the two systems is that they have very little in common. In particular,
1. KDS has sentence scoring and a quality-based selection of how to say things; PROTEUS has no counterpart.
2. PROTEUS has a sophisticated grammar for which KDS has only a rudimentary counterpart.
3. PROTEUS has only a dynamic, redundancy-based knowledge filtering, whereas the filtering in KDS removes principally static, foreknown information.
4. KDS has clause-combining rules which make little use of conjunctions, whereas PROTEUS has no such rules but makes elaborate use of conjunctions.
5. KDS selects for brevity above all, whereas PROTEUS selects for contrast above all.
6. PROTEUS takes great advantage of fact significance assessment, which KDS does not use.

They have little in common technically, yet both produce high quality text relative to predecessors. This raises an obvious question: could the techniques of the two systems be combined in an even more effective system?

There is one prominent exception to this general lack of shared functions and characteristics. Recent text synthesis systems [Davey 79], [Mann and Moore 80], [Weiner 80], [Swartout 77], [Swartout 81] all include a facility for keeping certain facts or ideas from being expressed. There is an implicit or explicit model of the reader's knowledge. Any knowledge which is somehow seen as obvious to the reader is suppressed. All of the implemented facilities of this sort are rudimentary; many consist only of manually-produced lists or marks. However, it is clear that they cover a deep intellectual problem. Discourse generation must make differing uses of what the reader knows and what the reader does not know. It is absolutely essential to avoid tedious statement of "the obvious." Proper use of presupposition (which has not yet been attempted computationally) likewise depends on this knowledge, and many of the techniques for maintaining coherence depend on it as well. But identification of what is obvious to a reader is a difficult and mostly unexplored problem. Clearly, inference is deeply involved, but what is "obvious" does not match what is validly inferable. It appears that as computer-generated texts become larger the need for a robust model of the obvious will increase rapidly.

POSSIBILITIES FOR SYNTHESIS

This section views the collection of techniques which have been discussed so far from the point of view of a designer of a future text synthesis system. What are the design constraints which affect the possibility of particular combinations of these techniques? What combinations are advantageous? Since each system represents a compatible collection of techniques, it is only necessary to examine compatibility of the techniques of one system within the framework of the other. We begin by examining the hypothetical introduction of the KDS techniques of fragmentation, the explicit reader model, aggregation, preference scoring and hill climbing into PROTEUS. We then examine the hypothetical introduction of PROTEUS' grammar, fact significance assessments and use of the contrast heuristic into KDS. Finally we consider use of each system on the other's knowledge domain.

Introducing KDS techniques into PROTEUS

Fragment and Compose is clearly usable within PROTEUS, since the information on the sequence of moves, particular move locations and the significance of each move all can be regarded as composed of many independent propositions (fragments of the whole structure). However, Fragment and Compose appears to give only small benefits, principally because the linear sequences of tic-tac-toe game transcripts give an acceptable organization and do not preclude many interesting texts. Aggregation is also usable, and would appear to allow for a greater diversity of sentence forms than Davey's sequential assembly procedures allow.
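As an illustration of what such aggregation might look like, the following sketch renders the "Common Cause" rule described above together with a brevity-oriented preference score, applied greedily in the spirit of KDS's hill climbing. It is a toy reconstruction, not the KDS implementation; the Proto record, the scoring function and the control loop are my own assumptions.

```python
# A "Common Cause"-style aggregation rule plus a simple preference score,
# applied repeatedly until no merge improves the overall collection.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Proto:                 # a protosentence: "whenever <condition> then <effects>"
    condition: str
    effects: Tuple[str, ...]

def common_cause(a: Proto, b: Proto) -> Optional[Proto]:
    """Whenever C then X  +  Whenever C then Y  =>  Whenever C then X and Y."""
    if a.condition == b.condition:
        return Proto(a.condition, a.effects + b.effects)
    return None

def score(p: Proto) -> float:
    # Prefer fewer, fuller sentences: reward each effect packed into one unit.
    return len(p.effects) - 1

def hill_climb(protos: List[Proto]) -> List[Proto]:
    improved = True
    while improved:
        improved = False
        for i in range(len(protos)):
            for j in range(i + 1, len(protos)):
                merged = common_cause(protos[i], protos[j])
                if merged and score(merged) > score(protos[i]) + score(protos[j]):
                    protos = [p for k, p in enumerate(protos) if k not in (i, j)] + [merged]
                    improved = True
                    break
            if improved:
                break
    return protos

facts = [Proto("there is a fire", ("the alarm system is started",)),
         Proto("there is a fire", ("a timer starts",))]
print(hill_climb(facts))
```

Even at this scale the separation is visible: common_cause only says what could be combined, while score and hill_climb decide what should be.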
In KDS, and presumably in PROTEUS as well, aggregation rules can be used to make text brief. In effect, PROTEUS already has some aggregation, since the way its uses of conjunction shorten the text is similar to effects of aggregation rules in KDS.

Preference judgment and hill climbing are interdependent in KDS. Introducing both into PROTEUS would appear to give great improvement, especially in avoiding the long awkward referring phrases which PROTEUS produced. The system could detect the excessively long constructs and give them lower scores, leading to choice of shorter sentences in those cases.

The explicit reader model could also be used directly in PROTEUS; it would not help much, however, since relatively little foreknowledge is involved in any tic-tac-toe game commentary.

Introducing PROTEUS techniques into KDS

Systemic Grammar could be introduced into KDS to great advantage. The KDS grammar was deliberately chosen to be rudimentary in order to facilitate exploration above the sentence level. (In fact, KDS could not be extended in any interesting way without upgrading its grammar.) Even with a Systemic Grammar in KDS, aggregation rules would remain, functioning as sentence design elements.

Fact significance assessments are also compatible with the KDS design. As in PROTEUS they would immediately follow acquisition of the basic propositions. They could improve the text significantly.

The contrast heuristic (and other PROTEUS heuristics) would fit well into KDS, not as an a priori sentence design device but as a basis for assigning preference. Higher scores for contrast would improve the text.

In summary, the principal techniques appear to be completely compatible, and the combination would surely produce better text than either system alone.

Exchange of Knowledge Domains

The tic-tac-toe domain would fit easily into KDS, but the KDS text-organization processes (not discussed in this paper) would have little to do. The fire crisis domain would be too complex for PROTEUS. It involves several actors at once, several parallel contingencies and no single clear organizing principle. PROTEUS lacks the necessary text-organization methods.

SHARED SHORTCOMINGS

These systems share (with many others) the primitive state of the computer-based discourse-generation art. Their processes are primarily devoted to activities that go without notice among literate people. The deeper linguistic and rhetorical phenomena usually associated with the term "discourse" are hardly touched. These systems make little attempt at coherence, and they do not respond in any way to the coherence (or lack of it) which they achieve. Presupposition, topic, focus, theme, the proper role of inference, implicature, direct and indirect speech act performance and a host of other relevant concepts all go unrepresented. Even worse, the underlying conceptual apparatus in both systems is extremely ad hoc and idiosyncratic, severely limiting the possibilities for using general knowledge of the semantics of English. Despite these deficiencies, the systems produce relatively smooth readable text. They are significant principally as collections of methods which may well survive to become parts of more competent and general-purpose systems of the future.

CONCLUSIONS

PROTEUS and KDS are distinct collections of discourse generation techniques. There is little overlap between them, and no apparent conflict involved in using the techniques in combination.
A system which combined the strengths of both would be significantly better than either.

APPENDIX: TEXT SAMPLES FROM THE TWO SYSTEMS

KDS: Fire-Alarm Text for Computer Operators

Whenever there is a fire, the alarm system is started, which sounds a bell and starts a timer. Ninety seconds after the timer starts, unless the alarm system is cancelled, the system calls Wells Fargo. When Wells Fargo is called, they, in turn, call the Fire Department.

When you hear the alarm bell or smell smoke, stop whatever you are doing, determine whether or not there is a fire, and decide whether to permit the alarm system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system, otherwise cancel it. When you permit the alarm system, call the Fire Department if possible, then evacuate. When you cancel the alarm system, if it is more than 90 seconds since the timer started, the system will have called Wells Fargo already, otherwise continue what you were doing.

PROTEUS: The move sequence 3169547

[The original shows the board position numbering and the successive positions after the moves 3, 31, 316, 3169, 316954 and 3169547.]

The game started with my taking a corner, and you took an adjacent one. I threatened you by taking the middle of the edge opposite that and adjacent to the one which I had just taken but you blocked it and threatened me. I blocked your diagonal and forked you. If you had blocked mine, you would have forked me, but you took the middle of the edge opposite the corner which I took first and the one which you had just taken and so I won by completing my diagonal.

References

[Davey 79] Davey, Anthony. Discourse Production. Edinburgh University Press, Edinburgh, 1979.
[Halliday 76] Kress, G. R. (editor). System and Function in Language. Oxford University Press, London, 1976.
[Huddleston 71] Huddleston, R. D. The sentence in written English: a syntactic study based on an analysis of scientific texts. Cambridge University Press, London, 1971.
[Hudson 71] Hudson, R. A. North Holland Linguistic Series. Volume 4: English complex sentences. North Holland, London and Amsterdam, 1971.
[Mann and Moore 80] Mann, William C., and James A. Moore. Computer as Author: Results and Prospects. Research report 79-82, USC/Information Sciences Institute, 1980.
[Mann 79] Mann, William C. and James A. Moore. Computer Generation of Multiparagraph English Text. 1979. AJCL, forthcoming.
[Moore 79] Moore, James A., and W. C. Mann. A snapshot of KDS, a knowledge delivery system. In Proceedings of the Conference, 17th Annual Meeting of the Association for Computational Linguistics, pages 51-52, August 1979.
[Sinclair 72] Sinclair, J. McH. A course in spoken English: Grammar. 1972.
[Swartout 77] Swartout, William. A Digitalis Therapy Advisor with Explanations. Technical Report, MIT Laboratory for Computer Science, February 1977.
[Swartout 81] Swartout, William R. Producing Explanations and Justifications of Expert Consulting Programs. Technical Report MIT/LCS/TR-251, Massachusetts Institute of Technology, January 1981.
[Weiner 80] Weiner, J. L. BLAH, A System Which Explains its Reasoning. Artificial Intelligence 15:19-48, November 1980.
[Winograd 72] Winograd, Terry. Understanding Natural Language. Academic Press, Edinburgh, 1972.
A GRAMMAR AND A LEXICON FOR A TEXT-PRODUCTION SYSTEM
Christian M.I.M. Matthiessen
USC/Information Sciences Institute

ABSTRACT

In a text-production system high and special demands are placed on the grammar and the lexicon. This paper will view these components in such a system (overview in section 1). First, the subcomponents dealing with semantic information and with syntactic information will be presented separately (section 2). The problems of relating these two types of information are then identified (section 3). Finally, strategies designed to meet the problems are proposed and discussed (section 4). One of the issues that will be illustrated is what happens when a systemic linguistic approach is combined with a KL-ONE like knowledge representation: a novel and hitherto unexplored combination.

1. THE PLACE OF A GRAMMAR AND A LEXICON IN PENMAN

This paper will view a grammar and a lexicon as integral parts of a text-production system (PENMAN). This perspective leads to certain requirements on the form of the grammar and that of the subparts of the lexicon and on the strategies for integrating these components with each other and with other parts of the system. In the course of the presentation of the components, the subcomponents and the integrating strategies, these requirements will be addressed. Here I will give a brief overview of the system.

PENMAN is a successor to KDS ([12], [14] and [13]) and is being created to produce multi-sentential natural English text. It has as some of its components a knowledge domain, encoded in a KL-ONE like representation, a reader model, a text planner, a lexicon, and a sentence generator (called NIGEL). The grammar used in NIGEL is a Systemic Grammar of English of the type developed by Michael Halliday -- see below for references. For present purposes the grammar, the lexicon and their environment can be represented as shown in Figure 1. The lines enclose sets; the boxes are the linguistic components. The dotted lines represent parts that have been developed independently of the present project, but which are being implemented, refined and revised, and the continuous lines represent components whose design is being developed within the project. The box labeled syntax stands for syntactic information, both of the general kind that is needed to generate structures (the grammar; the left part of the box) and of the more specific kind that is needed for the syntactic definition of lexical items (the syntactic subentry of lexical items; to the right in the box -- the term lexicogrammar can also be used to denote both ends of the box).

1 This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research or the U.S. Government. The research reported is a joint effort, and so are the ideas stemming from it which are the substance of this paper. I would like to thank in particular William Mann, who has helped me think, given me helpful ideas and suggestions and commented extensively on drafts of the paper; without him it would not be.
I am also grateful to Yasutomo Fukumochi for helpful comments on a draft and to Michael Halliday, who has made clear to me many systemic principles and insights. Naturally, I am solely responsible for errors in the presentation and content.

[Figure 1.1: System overview. The figure shows the conceptuals, with semantics as their linguistic part, and the syntax box containing the grammar and lexis; the lexicon cuts across both, with general and specific parts indicated.]

The other box (semantics) represents that part of semantics that has to do with our conceptualization of experience (distinct from the semantics of interaction -- speech acts etc. -- and the semantics of presentation -- theme structure, the distinction between given and new information etc.). It is shown as one part of what is called conceptuals -- our general conceptual organization of the world around us and our own inner world; it is the linguistic part of conceptuals. For the lexicon this means that lexical semantics is that part of conceptuals which has become lexicalized and thus enters into the structure of the vocabulary. There is also a correlation between conceptual organization and the organization of part of the grammar. The double arrow between the two boxes represents the mapping (realization or encoding) of semantics into syntax. For example, the concept SELL is mapped onto the verb sold.2

The grammar is the general part of the syntactic box, the part concerned with syntactic structures. The lexicon cuts across three levels: it has a semantic part, a syntactic part (lexis) and an orthographic part (or spelling; not present in the figure).3

2 I am using the general convention of capitalizing terms denoting semantic entries. Capitals will also be used for roles associated with concepts (like AGENT, RECIPIENT and OBJECT) and for grammatical functions (like ACTOR, BENEFICIARY and GOAL). These notions will be introduced below.

3 This means that an entry for a lexical item consists of three subentries: a semantic entry, a syntactic entry and an orthographic entry. The lexicon box is shown as containing parts of both syntax and semantics in the figure (the shaded area) to emphasize the nature of the lexical entry.
This is roughly a disbnction t:~ltween what is conceptuaiizaDle and actual conceptualizations (whether they are real or hypothetical). In the overview figure in section 1, the two are together called conceptuals. For instance, to use an example I will be using throughout this paper, there is an inteflsional concept SELL, about which no existence'D or location in time is claimed. An intenalonal concept is related to extensional concede by the relation Inclividuates: intenaionai SELL is related by individual instances of extensional SELLs by the Individuates relation. If I know that Joan sold Arthur ice-cream in the I~!rk, I have s SELL fixed in time which is part of an assertion about Joan and it Indiviluates intenaional SELL. 4 A concept has internal structure: it is a configuration of roles. The concept SELL has an internal ~ r e which is the three roles associated with it, viz. AGENT (the seller), RECIPIENT (the buyer) and OBJECT. These rolee are slot3 which are filled by other concepts and the domains over which these can very are defined as value restrictions. The AGENT of SELL is a PERSON or a FRANCHISE and sO on. tn ~,ther words, a ¢oncel~t is defined by its relation to other concepts (much aS in European structuraiism). These relations are roles a'~sociated with the concept, roles whose fillers are other concept¢ This gives rise to a large conceptual net. There is another reiation which helps define the place of a conoe=t in the conceptual net. viz. SuperCategory, which gives the conceptual net a taxonomic (or hierarchic) structure in addition to the structure defined by the role relations. The concept SELL ie defined by its I~lace in the taxonomy by having TRANSACTION as a SuperCate<jory. If we want to, 4It ~toul¢l be eml)t~ullz41~t ~tlt r.~lltng the cof~-.eot SELL 'u=y~l nothing wt'~lt=oe~t~r li~out ~ngli~tt exl~'qm~on for it:. ~e *'el.'lons for gz~ it filial ~ Ire I~urely fR~mo~i¢. o~ty way the conces=t elm be I~ocmted ~m ~ ~ =o/o' is tlw~gf~ ~g ~ of I we can define a conceot that will have SELL as a SuDerCategoq (i.e. bear the SuperCategory relation to SELL), for example SELLCB 'sell on the black market'. As a result, p)art of the taxonomy of events is TRANSACTION --- SELL .-- SELLOB. If TRANSACTION has a set of roles associated with it, this set may be inherited by SELL and by SELLOB .- this is a generaJ feature of the SuperCategory relation. In the examples involving SELL that follow, I will concentrate on this concept and not try to generalize to its supercategones. The Semantic Subentry In the overview figure (1.1), the semantics is shown as part of the concaptuais- The consequence of this is that the set of semantic entries in the lexicon is a subset of the set of concepts. The subset is groper if we assume that there are concepts which have not been lexicaiized (the assumption indicated in the figure). The a.csumption is I~erfectJy reasonable; I have already invented the concept SELLOB for which there is no word in standard English: it is not surprising if we have formed concepts for which we have to create expressions rather than pick them reedy.made from our lexicon. Furthermore, if we construct a conceptual component intended to support say a bilingual speaker, there will be a number of concepts which are lexicaiized in only one of the two languages.. A semantic entry, than, is a concept in the conceptuais- For sold, we find soil wiffi its associated roles, AGENT, RECIPIENT and OBJECT. 
The right ~ of figure 4.1 below (marked "se:'; after a figure from [1] gives a more detailed semantic ent~ for sold: = pointer identifies the relevant part in the KR, the concept that constitutes the semantic entry (here the concept SELL). The concept that constitutes the semantic entry of a lexicai item has a fairly rich structure. Roles are associated "with the concept and the modailty (neces~ury or optional), the ¢ardinaii~ of and restrictions on (value of) the fillers are given. Through the value restriction the linguistic notion of selection restriction is captured. The stone sold a carnation to the little girl is odd because the AGENT role of SELL is value restricted to PERSON or FRANCHISE and the concept associated with stone fails into neither type. The strategy of letting semantic entries be part of the knowledge representation would not have been possible in a notation designed to csgture specific propositions only, However, since KL-ONE pfoviles the distinction between intension and extension, the strategy is unl=rotolsmati¢ in the I=resant framework. So what is the relationship between intensional-extensionai and s~manti¢ entries? The working aesumption is that for a large part of the" vocaioulary, it is the concepts of the intanalonai part of the KR that may be lexicalized and thus serve as semantic entries. We have words for intenalonai obje¢=, actions and states, but not for indtviluai extensional obiects etc. with the exception of propel names. They have extensional concepts as their semantic entries. For instance, Alex denotes a particular individuated person and The War of the Roses a palrticula¢ individumed war. Both the Sul~H'Category relation and the Indiviluates relation provide ways of walking around in the KR to find expresmons for concepts. If 50 we are in the extensional part of the KR, looking at a particular individual, w~ can follow the Individuates link up to an intensional concept. There may be a word for it, in which case the concept is part of a laxical entry. If there is no word for the concept, we will have to consider the various options the grammar gives us for forming an ¢oPropriate exoressJon. The general assumption is that all the intensional vocabulary can he used for extensional concepts in the way just describe(l: exc)reasabi..,'y is inherited with the Individuates relation. Expression candidates for concepts can also be located along the SuberCate(Jory link by going from one concept to another one higher up in the taxonomy. Consider the following example: Joan sold Arthur ice.cream. The transaction took place in tl~e perk. The SuperCate~ory link enables us to go from SELL to TRANSACTION, where we find the expression transaction. Lexical Semantic Relations The structure of the vocabulary is parasitic on the conceptual structure. In other words, laxicalized concepts are related not only to one another, but also to concepts for which there is no word,encoding in English (i.e. non-laxicalized concepts). Crudely, the semantic structure of the lexicon can be described as being part of the hierarchy of intensional concepts -- the intensional concepts that happen to be lexicalized in English. -- The structure of English vocabulary is thus not the only principle that is reflected in the knowledge representation, but it is reflected. Very general concepts like OBJECT, THING and ACTION are at the top. In this hierarchy, roles are inherited. This corresponds to the semantic redundancy rules of a lexicon. 
Considering the possibility of walking around in the KR and the integration of lexicalized and non-lexicalized concepts, the KR suggests itself as the natural place to state certain text-forming principles, some of which have been described under the terms lexical cohesion ([8]) and Thematic Progression ([6]).

I will now turn to the syntactic component in figure 1-1, starting with a brief introduction to the framework (Systemic Linguistics) that does the same for that component as the notion of semantic net did for the component just discussed.

2.2. Lexicogrammar

Systemic Linguistics stems from a British tradition and has been developed by its founder, Michael Halliday (e.g. [7], [9], [10]), and other systemic linguists (see e.g. [5], [4] for a presentation of Fawcett's interesting work on developing a systemic model within a cognitive model) for over twenty years, covering many areas of linguistic concern, including studies of text, lexicogrammar, language development, and computational applications. Systemic Grammar was used in SHRDLU [15] and more recently in another important contribution, Davey's PROTEUS [3].

The systemic tradition recognizes a fundamental principle in the organization of language: the distinction between choice and the structures that express (realize) choices. Choice is taken as primary and is given special recognition in the formalization of the systemic model of language. Consequently, a description is a specification of the choices a speaker can make together with statements about how he realizes a selection he has made. This realization of a set of choices is typically linear, e.g. a string of words.

Each choice point is formalized as a system (hence the name Systemic). The options open to the speaker are two or more features that constitute alternatives which can be chosen. The preconditions for the choice are entry conditions to the system. Entry conditions are logical expressions whose elementary terms are features. All but one of the systems have non-empty entry conditions. This causes an interdependency among the systems, with the result that the grammar of English forms one network of systems, which cluster when a feature in one system is (part of) the entry condition to another system. This dependency gives the network depth: it starts (at its "root") with very general choices. Other systems of choice depend on them (i.e. have a feature from one of these systems -- or a combination of features from more than one system -- as entry conditions) so that the systems of choice become less general (more delicate, to use the systemic term) as we move along in the network.

The network of systems is where the control of the grammar resides, its non-deterministic part. Systemic grammar thus contrasts with many other formalisms in that choice is given explicit representation and is captured in a single rule type (systems), not distributed over the grammar as e.g. optional rules of different types. This property of systemic grammar makes it a very useful component in a text-production system, especially in the interface with semantics and in ensuring accessibility of alternatives. The rest of the grammar is deterministic -- the consequences of features chosen in the network of systems. These consequences are formalized as feature realization statements whose task is to build the appropriate structure.
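Before turning to a concrete grammatical example, here is a minimal sketch of how systems, entry conditions and feature realization statements might be represented. It is an illustration only, not NIGEL's actual machinery; the data layout and the realization strings are my own, and the features anticipate the MOOD example discussed in the next paragraph.

```python
# Systems as (entry condition, alternatives); realization statements keyed by
# feature. Generation collects the statements licensed by a feature selection.
SYSTEMS = [
    # entry condition (features required)        alternatives offered
    ({"independent", "indicative"},              {"declarative", "interrogative"}),
    ({"interrogative"},                          {"wh-interrogative", "yes/no-interrogative"}),
]

REALIZATIONS = {
    "declarative":          ["order SUBJECT before FINITE"],
    "yes/no-interrogative": ["order FINITE before SUBJECT"],
    "wh-interrogative":     ["insert WH", "order WH first"],
}

def generate(chosen: set) -> list:
    """Collect realization statements for a consistent selection of features."""
    statements = []
    for entry, alternatives in SYSTEMS:
        if entry <= chosen:                      # entry condition satisfied
            picked = alternatives & chosen
            assert len(picked) == 1, f"exactly one choice required from {alternatives}"
            statements += REALIZATIONS.get(picked.pop(), [])
    return statements

print(generate({"independent", "indicative", "interrogative", "yes/no-interrogative"}))
# -> ['order FINITE before SUBJECT']
```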
For example, in independent indicative sentences, English offers a choice between declarative and interrogative sentences. If interrogative is chosen, this leads to a dependent system with a choice between wh-interrogative and yes/no-interrogative. When the latter is chosen, it is realized by having the FINITE verb before the SUBJECT.

Since it is the general design of the grammar that is the focus of attention, I will not go through the algorithm for generating a sentence as it has been implemented in NIGEL. The general observation is that the results are very encouraging, although it is incomplete. The algorithm generates a wide range of English structures correctly. There have not been any serious problems in implementing a grammar written in the systemic notation.

Before turning to the lexis part of lexicogrammar, I will give an example of the top-level structure of a sentence generated by the grammar. (I have left out the details of the internal structure of the constituents.)

[The figure shows the clause "In the park | Joan | sold | Arthur | ice-cream" analysed in three layers of function symbols: an experiential layer (with functions such as LOCATION, ACTOR, PROCESS, BENEFICIARY and GOAL), an interpersonal layer (with SUBJECT and FINITE), and a textual layer (with THEME).]

The structure consists of three layers of function symbols, all of which are needed to get the result desired. The structure is not only functional (with function symbols labeling the constituents instead of category names like Noun Phrase and Verb Phrase) but it is multifunctional. Each layer of function symbols shows a particular perspective on the clause structure. Layer [1] gives the aspect of the sentence as a representation of our experience. The second layer structures the sentence as interaction between the speaker and the hearer; the fact that SUBJECT precedes FINITE signals that the speaker is giving the hearer information. Layer [3] represents a structuring of the clause as a message; the THEME is its starting point. The functions are called experiential, interpersonal and textual respectively in the systemic framework; the function symbols are said to belong to three different metafunctions. In the rest of the paper I will concentrate on the experiential metafunction, partly because it will turn out to be highly relevant to the lexicon.

The syntactic subentry

In the systemic tradition, the syntactic part of the lexicon is seen as a continuation of grammar (hence the term lexicogrammar for both of them): lexical choices are simply more detailed (delicate) than grammatical choices (cf. [9]). The vocabulary of English can be seen as one huge taxonomy, with Roget's Thesaurus as a very rough model. A taxonomic organization of the relevant part of the vocabulary of English is intended for PENMAN, but this organization is part of the conceptual organization mentioned above. There is at present no separate lexical taxonomy.

The syntactic subentry potentially consists of two parts. There is always the class specification -- the lexical features. This is a statement of the grammatical potential of the lexical item, i.e. of how it can be used grammatically. For sold the class specification is the following:

verb, class 10, class 02, benefactive

where "benefactive" says that sold can occur in a sentence with a BENEFICIARY, "class 10" that it encodes a material process (contrasting with mental, verbal and relational processes) and "class 02" that it is a transitive verb.
In addition, there is a provision for a configurational part, which is a fragment of a structure the grammar can generate, more specifically the experiential part of the grammar.5 The structure corresponds to the top layer ([1]) in the example above. In reference to this example, I can make more explicit what I mean by fragment. The general point is that (to take just one class as an example) the presence and character of functions like ACTOR, BENEFICIARY and GOAL -- direct participants in the event denoted by the verb -- depend on the type of verb, whereas the more circumstantial functions like LOCATION remain unaffected and applicable to all types of verb. Consequently, the information about the possibility of having a LOCATION constituent is not the type of information that has to be stated for specific lexical items. The information given for them concerns only a fragment of the experiential functional structure. The full syntactic entry for sold is:

PROCESS = verb, class 10, class 02, benefactive; ACTOR; GOAL; BENEFICIARY

This says that sold can occur in a fragment of a structure where it is PROCESS and there can be an ACTOR, a GOAL and a BENEFICIARY. The usefulness of the structure fragment will be demonstrated in section 4.

3. THE PROBLEM

I will now turn to the fundamental problem of making a working system out of the parts that have been discussed. The problem has two parts to it, viz. 1. the design of the system as a system with integrated parts and 2. the implementation of the system. I will only be concerned with the first aspect here.

The components of the system have been presented. What remains -- and that is the problem -- is to design the missing links; to find the strategies that will do the job of connecting the components. Finding these strategies is a design problem in the following sense. The strategies do not come as accessories with the frameworks we have used (the systemic framework and the KL-ONE inspired knowledge representation). Moreover, these two frameworks stem from two quite disparate traditions with different sets of goals, symbols and terms. I will state the problem for the grammar first and then for the lexicon.

As it has been presented, the grammar runs wild and free. It is organized around choice, to be sure, but there is nothing to relate the choices to the rest of the system, in particular to what we can take to be semantics. In other words, although the grammar may have a part that faces semantics -- the system network, which, in Halliday's words, is semantically relevant grammar -- it does not make direct contact with semantics. And, if we know what we want the system to encode in a sentence, how can we indicate what goes where, that is, what a constituent (e.g. the ACTOR) should encode?

The lexicon incorporates the problem of finding an appropriate strategy to link the components to each other, since it cuts across component boundaries. The semantic and syntactic subparts of a lexical entry have been outlined, but nothing has been said about how they should be matched up with one another. The reason why this match is not perfectly straightforward has to do with the fact that both entries may be structures (configurations) rather than single elements. In addition, there are lexical relations that have not been accounted for yet, especially synonymy and polysemy.

5 The configurational part does not stem from the systemic tradition.

4. LOOKING FOR THE SOLUTIONS

4.1. The Grammar

Choice experts and their domains.
The control of the grammar resides in the network of systems. Choice experts can be developed to handle the choices in these systems. The idea is that there is an expert for each system in the network and that this expert knows what it takes to make a meaningful choice, what the factors influencing its choice are. It has at its disposal a table which tells it how to find the relevant pieces of information, which are somewhere in the knowledge domain, the text plan or the reader model. In other words, the part of the grammar that is related to semantics is the part where the notion of choice is: the choice experts know about the semantic consequences of the various choices in the grammar and do the job of relating syntax to semantics.6

The recognition of different functional components of the grammar relates to the multi-functional character of a structure in systemic grammar I mentioned in relation to the example In the park Joan sold Arthur ice-cream in section 2.2. The organization of the sentence into PROCESS, ACTOR, BENEFICIARY, GOAL, and LOCATIVE is an organization the grammar imposes on our experience, and it is the aspect of the organization of the sentence that relates to the conceptual organization of the knowledge domain: it is in terms of this organization (and not e.g. SUBJECT, OBJECT, THEME and NEW INFORMATION) that the mapping between syntax and semantics can be stated. The functional diversity Halliday has provided for systemic grammar is useful in a text-production system; the other functions find uses which space does not permit a discussion of here.

Pointers from constituents. In order for the choice experts to be able to work, they must know where to look. Assume that we are working on in the park in our example sentence In the park Joan sold Arthur ice-cream and that an expert has to decide whether park should be definite or not. The information about the status in the mind of the reader of the concept corresponding to park in this sentence is located at this concept: the trick is to associate the concept with the constituent being built. In the example structure given earlier, in the park is both LOCATION and THEME, only the former of which is relevant to the present problem. The solution is to set a pointer to the relevant extensional concept when the function symbol LOCATION is inserted, so that LOCATION will carry the pointer and thus make the information attached to the concept accessible.

4.2. The lexicon and the lexical entry

I have already introduced the semantic subentry and the syntactic subentry. They are stated in a KL-ONE like representation and a systemic notation respectively. The question now is how to relate the two. In the knowledge representation the internal structure of a concept is a configuration of roles, and these roles lead to new concepts to which the concept is related. A syntactic structure is seen as a configuration of function symbols; syntactic categories serve these functions -- in the generation of a structure the functions lead to an entry of a part of the network. For example, the function ACTOR leads to a part of the network whose entry feature is Nominal Group, just as the role AGENT (of SELL) leads to the concept that is the filler of it.

6 In the present discussion, I have focused on the knowledge domain, since this is the most relevant to lexical semantics.
The parallel between the two representations in this area is the following:

KNOWLEDGE REPRESENTATION / SYNTACTIC REPRESENTATION
role / function
filler / exponent

(where exponent denotes the entry feature into a part of the network (e.g. Nominal Group) that the function leads to). This parallel clears the path for a strategy for relating the semantic entry and the syntactic entry. The strategy is in keeping with current ideas in linguistics.7 Consider the following crude entry for sold, given here as an illustration:

Subentries:
semantic: the concept SELL
syntactic: Functions: PROCESS (with ACTOR, GOAL, BENEFICIARY); Lexical features: verb, class 10, class 02, benefactive
orthographic: "sold"
with the pairings AGENT = ACTOR, OBJECT = GOAL, RECIPIENT = BENEFICIARY

where the previously discussed semantic and syntactic subentries are repeated and paired off against each other. This full lexical entry makes clear the usefulness of the second part of the syntactic entry -- the fragment of the experiential functional structure in which sold can be the PROCESS.

Another piece of the total picture also falls into place now. The notion of a pointer from an experiential function like BENEFICIARY in the grammatical structure to a point in the conceptual net was introduced above. We can now see how this pointer may be set for individual lexical items: it is introduced as a simple relation between a grammatical function symbol and a conceptual role in the lexical entry of e.g. SELL. Since there is an Individuates link between this intensional concept and any extensional SELL -- the extensional concept that is part of the particular proposition that is being encoded grammatically -- the pointer is inherited and will point to a role in the extensional part of the knowledge domain.

At this point, I will refer again to the figure below, whose right half I have already referred to as a full example of a semantic subentry ("se:"). "sp:" is the spelling or orthographic subentry, and the remaining part is the syntactic subentry. We have two configurations in the lexical entry: in the semantic subentry the concept plus a number of roles, and in the syntactic subentry a number of grammatical functions. The match is represented in the figure by the arrows.

7 The mechanism for mapping has much in common with the one developed for Lexical Functional Grammar (see e.g. [2]), although the labels are not the same. The entry resembles a lexical entry in the Pan-Lexicalism framework developed by Hudson in [11].

[Figure 4-1: Lexical entry for sold.]

All three roles of SELL have the modality "necessary". This does not dictate the grammatical possibilities. The grammar in NIGEL offers a choice between e.g. They sold many books to their customers and The book sold well. In the second example, the grammar only picks out a subset of the roles of SELL for expression. In other words, the grammar makes the adoption of different perspectives possible. I can now return to the observation that the functional diversity Halliday has provided for systemic grammar is useful for our purposes.
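The crude entry just given can be made concrete with a short sketch. The rendering below is mine, not PENMAN's internal format; it simply records the three subentries and the role-to-function pairing, and shows how the pointers discussed above could be set for one extensional SELL.

```python
# A schematic lexical entry for "sold": three subentries plus a mapping from
# conceptual roles to experiential grammatical functions.
SOLD = {
    "orthographic": "sold",
    "syntactic": {
        "features": ["verb", "class 10", "class 02", "benefactive"],
        "fragment": ["PROCESS", "ACTOR", "GOAL", "BENEFICIARY"],
    },
    "semantic": "SELL",                      # a concept in the conceptual net
    "mapping": {                             # role -> experiential function
        "AGENT": "ACTOR",
        "OBJECT": "GOAL",
        "RECIPIENT": "BENEFICIARY",
    },
}

def set_pointers(extensional_roles: dict, entry: dict) -> dict:
    """Attach each grammatical function to the filler of the paired role,
    so that choice experts can later look up the constituent's referent."""
    return {entry["mapping"][role]: filler
            for role, filler in extensional_roles.items()
            if role in entry["mapping"]}

# One extensional SELL: Joan sold Arthur ice-cream.
print(set_pointers({"AGENT": "JOAN", "RECIPIENT": "ARTHUR", "OBJECT": "ICE-CREAM"}, SOLD))
# -> {'ACTOR': 'JOAN', 'BENEFICIARY': 'ARTHUR', 'GOAL': 'ICE-CREAM'}
```

In such a representation, the synonymy and polysemy taken up next amount to two entries sharing a concept, or one syntactic subentry being paired with several concepts.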
In conclusion, a stretegy for accounting for synonymy and polysemy can be mentioned. The way to cagture synonymy is to allow a concept to be the semantic subentry for two distinct orthographic entries. If the items are syntactically identical as well. they will also share a syntactic subentry. Polyeemy works the other way:. there may be more than one concept for the same syntactic subentry. 5. CONCLUSION I have discus.s~l a gremmm" and a lexicon for PENMAN in two steps. F~rst I looked at them a~ independent components -- the semantic entry, the grammar and the syntactic entry -- and then, after identifying the problems of integrating them into a system, I tumed to strategies for re!sting the grammar to the conceptual representation and the syntactic entry to the semantic one within the lexicon. and the systemic notation and indicated how their design features can be Out to good use in PENMAN. For instance, the distinction between intension and exten*on in the knowledge representation makes it I~OS.~ble to let iexical semantic~ be part of the conceptuals. It was also suggested that the relations SuberC.,at~gory and Indivlduates can be to find expre~-~ions for a particular concept. The second steO attempted to connect the grammar to semantics through the notion of the choice expel, making use of a design principle of systemic grammars where the notion of choice is taken as ba~c. I pointed out the correlation between the structure of a concept and the notion of structure in the systemic framework and allowed how the two can be matched in a lexical entry and in the generation of a sentence, a slrstegy that could be adopted because of the multl.funotional nature of structure in systemic grammars. This second step has been at the same time an attempt to start exploring the potential of a combination of a KL-ONE like representation and a Sy~emic Grammar. Although many ~%oects have had to be left out of the discussion, there are s number of issues that are of linguistic interest and significance. The most basic one is perhal~ the task itself:, designing • model where a grammar and a lexicon can actually be mate to function as more than just structure generators. One issue reiatat to this that has been brought uD was that different ~ external to the grammar find resonance in different I=ari~ of the grammar and that there is a partial correlation between tim conceptual structure of the knowleclge reOresentation and the grammar and lexicon. AS was empha.~zacl in the introduction, PENMAN is at the design stage: there is a working sentence generator, but the other 8.qDect~ of what has been di$cut~tecl have not been imDlement~l and there is no commitment yet to a frozen design. Naturally, a large number of problems still await their solution, even at the level of design and, cleerly, many of them will have to wait. For example, selectivity among terms, beyond referential acle¢luacy, is not adclressecl. sl~ly ot ~ the func'UoNd sW~Uctt¢ ~ ~.k u0 dlff~ ~ ot • P.,cbrl¢~ ~ IcI0~ d~clNm~ I~tI~¢1~ fll'ldl m~ W ~ Q.Q. ~ ~ trlMIl~l~lt ¢4 ~4u¢1 tikQ ~uJy ~ ~ ~ g/~ ~ tO¢l~vO ~ in ~ IcC0urd for nocnm4UIT~ClonL 54 In general, while noting correlations between linguistic organization and conceptual organization, we do not want the relation tO be deterministic: part of being a good varbaiizar is being able to adopt different viewpoints -- verbalize the same knowledge in different ways. This is clearly an ares for future research. 
Hopefully, ideas such as grammars organized around choice and cl~oice experts will ;)rove useful tools in working out extensions. REFERENCES Brachman, Roneld, A Structural Paradigm for Representing Knowledge, Bolt, Beranek, and Newman, Inc., Technical Report, 1978. 3. 4. 5. 6. Bresnan, J., "Polyadicity: Part I of s Theory of LexicaJ Rules and Representation," in Hoekstra, van dar Hulst & Moortgat (eds.), Lexical Grammar, Dordrecht, 1980. Davey, Anthony, Discourse Production, Edinburgh Univer~ty Press, Fdinburgh, 1979. Fawcett, Robin P., Exeter Linguistic Studies. Volume 3: CognitiveLinguistics and Social Interaction, Julius Groos Vedag Heidelberg and Exeter University, t 980. Fawcett, R. P., Systemic Functiomd Grammar in a Cognitive Model of Language. University College, London. MImeo, 1973 Danes, F., ed., Papers on Functional Sentence Perspective, Academia, Publishing House of the Czechoslovak Academy of Sciences, 1974. 7. 8. 9. 10. 11. 12. 13. 14. 15, Helliday, M. A. K., "'Categories of the theory of grammar'," Word 17, 1961. Halliday M. A. K. and R. Has;m, Cohesion in English, Longman, London, 1976. English Language Sod(m, Title No. 9 Halliday, M.A.K., System and Function in Languege, Oxford University Press, London, 1976. Hudson, R. A., North Holland Linguistic Series. Volume 4: English complex sentences, North Holland, London and Arnstardam, 1971. Hudson, R. A., DDG Working Psper¢ University College, London. Mimeo, 1980 Mann, William C., and James A. Moore, Computer as Author.-Resulls and Prospects, USC/Informatlon Sciences Institute, Research report 79-82, 1980. Mann, William C. and James A. Moore, Computer GenQration of MuRiparagradh English Text, 1979. AJCL, forthcoming. Moore, James A., and W. C. Mann, "A snlo6hot of KDS, a knowledge delivery system," in Proceedings of the Conference, 17th Annual Meeting of the Association for Computational Linguistics, pp. 51-52, AuguSt 1979. Winogred, Terry, Understanding Natural Language, Academic Press, Edinburgh, 1972. 55
1981
13
Language Production: the Source ofthe Dictionary David D. McDonald University of Massachusetts at Amherst April 1980 Abstract Ultimately in any natural language production system the largest amount of human effort will go into the construction of the dictionary: the data base that associates objects and relations in the program's domain with the words and phrases that could be used to describe them. This paper describes a technique for basing the dictionary directly on the semantic abstraction network used for the domain knowledge itself, taking advantage of the inheritance and specialization machanisms of a network formalism such as r,L-ON~ The technique creates eonsidcrable economies of scale, and makes possible the automatic description of individual objects according to their position in the semantic net. Furthermore, because the process of deciding what properties to use in an object's description is now given over to a common procedure, we can write general-purpose rules to, for example, avoid redundancy or grammatically awkward constructionS. Regardless of its design, every system for natural !anguage production begins by selecting objects and relations from the speaker's internal model of the world, and proceeds by choosing an English phrase to describe each selected item, combining them according to the properties of the phrases and the constraints of the language's grammar and rhetoric. TO do this, the system must have a data base of some sort, in which the objects it will talk about are somewhow associated with the appropriate word or phrase (or with procedures that will construct them). 1 will refer to such a data base as a dictionary. Evcry production system has a dictionary in one form or another, and its compilation is probably the single most tedious job that the human designer must perform. In the past. typically every object and relation has been given its own individual "lex" property with the literal phrase to be used; no attempt was made to share criteria or sub-phrases between properties; and there was a tacit a~umtion that the phrase would have the right form and content in any of the contexts that the object will be mentioned. (For a review of this literature, see r~a .) However, dictionaries built in this way become increasingly harder to maintain as programs become larger and their discourse more sophisticated. We would like instead some way to de the extention of the dictionary direcdy to the extention of the program's knowledge base; then, as the knowledge base expands the dictionary will expand with it with only a minimum of additional cffort. This paper describes a technique for adapting a semantic abstraction hierarchy of thc sort providcd by ~d~-ONE ~:1.] to function directly as a dictionary for my production system MUMIII.I~ [,q'~. . Its goal is largely expositional in the sense that while the technique is fully spocificd and proto-types have been run, many implementation questions remain to be explored and it is thus premature to prescnt it as a polished system for others to use; instead, this paper is intended as a presentation of the issues--potcntial economicw---that the technique is addressing. 
In particular, given the intimate relationship between the choice of architecture in the network formalism used and the ability uf the dictionary to incorporate linguistically useful generalizations and utilities, this presentation may suggest additional criteria for networ k design, namely to make it easier to talk about the objects the network The basic idea of "piggybacking" the dictionary onto the speaker's regular semantic net can be illustrated very simply: Consider the KL.ONE network in figure one, a fragment taken from a conceptual taxonomy for augmented transition nets (given in [klune]). The dictionary will provide the means to describe individual concepts (filled ellipses) on the basis of their links to generic concepts lempty ellipses) and their functional roles (squar~s), as shown there for the individual concept "C205". The default English description of C205 (i.e. "the jump arc fi'om S/NP to S/DCL") is created recursiveiy from dL.~riptions of the three network relations that C205 participates in: its "supercuneept" link to the concept "jump-are". and its two role-value relations: "source-stateIC205)=S/NP" and "next- state(C205)=S/t:~Ct.". Intuitively. we want to associate each of the network objects with an English phrase: the concept "art'" with the word "art"', the "source-state" role relation with the phrase "C205 comes from S/NF" (note the embedded references), and so on. The machinery that actually brings about this ~sociation is, of course, much more elaborate, involving three different recta-level networks describing the whole of the original, "domain" network, as well as an explicit representation of the English grammar (i.e. it Ls itsclf expressed in rd,-oN~). role links ~ • ~ test ~ action value-.restriction links IL_ value links "The jump arc from S./NP to S/DCL" Figure One: the speaker's original network What does this rather expensive I computational machinery purchase? There are numrous benefits: The most obvious is the economy of scale within the dictionary that is gained by drawing directly on the economies [. What is cxpensive to represcnt in an explicit, declarative structure need not be expensive wllen translated into pn~ccdurai forth. ] do not seriously expect anyone to implement suctl a dicti()nary by interpreting the Y-.I.-ON,~, structures themselves; given tmr present hardware such a tact would be hopelessly inel]icient. Instead, a compilation pnx:css will in effective "compact" the explicit version of thc dictionary in~t~ an expeditious,, space.- expensive (i.e. heavily redundant} version that pc:rfbrms each inheritance only once and fl~eu runs as an efficient, self-contained procedure. 57 alr,,:.~dy prcsent in the network: a one-time liuguistic annotation of the nctwork's generic concepts aod relations can be passed down to describe arbitrary numbcrs of instantiating individuals by following general rules based on the geography of thc network. At thc same time. the dictionary "cmr~ " ['or a object in the nctwork may be ~pcciaiizcd and hand-tailored, if desired, in order to take advantage of special words or idiomadc phrases or it may inherit partial dct'auk reali~ation~ e.g. just ['or determiners or ad~erbia| modifiers, while specializing, its uther parts. More generally. because we ha~c now retried the procc~ of collecting the "raw material" of Lhe production process (i.e. 
scanning the nctw(,rk), we c:m imp(vse rules and constraints on it just ,xs thougi~ it were another part of the production planning process; we can develop a dictionary gnmm~ur entirely analogous to our gramm.'~r of l'nglish. This allows us to filter or mmsform the collection pnx:css under contextual cuntnd according to general nlles, and thereby, among edict things, automatically avoid rcdundancics ur violations o[' grammatical constraints such as complex-NP. In order to adapt a semantic net for use a~ a dictionary we must dctermthe three points: (1) What type of linguistic annotation to use--just what is to be associated with the nodes ufa network? (2) How annotations from individual nodes are to be accumulatcd~what dictates the pattern in which the network is scanned? (3) How the accumulation process is made sensitive to context. 'lllese will be the ft~us of the rest oft he paper. l'hc three points of the desigu arc. of course, mutually dcpendcnt, and are ['urther dependent on the requirements of the dictionary's cmploye~, the planning and [inguLstic realization componants or" the produc'3on system, in the interests of space I will not go into the details of these components in this paper, especially as this dictionary desigu appears to be ,~ful I%r more than lust my own particular production system. My assumptions are: (t) that the output ot the dictionary (the Input to my realization component) is a representation of a natural language phrase as defined by the grammar and with both words and other objects from the domain network as its terminals (the embedded domain objects correspond to the variable parts of'the phrase, i.e. the arguments to the original network relation): and (2) that the planning process (the component that decides what to say) will specify that network objects be described either as a composition era set of other network relations that it has explicitly selected, or else will leave the de~:riptiun to a default given in the dictionary. Meta-level annotation "]'he basis of the dictionary is a meta-/evel network constructed so as to shadow the domain network used by the rest of the speaker's cognitive processes. "['his "dictionary network" describes the domain network from the point of view of d1¢ accumulation procedure and the linguistic annotation. [t is itself an abstraction hierarchy, and is also expressed in xL. ON"~ (though see the earlier ['ootuot¢). Objects in the regular network are connected hy recta-links to their corresponding dictionary "entries". These entries are represcntaUons of English phra.¢x.~ (either a single phrase or word or a cluster of alternative phrases with some decision-criteria to s¢lcet among them at run dine). When we want to describe an object, we follow out its recta-link inzo the dictionary network and then realize the word or phrase that we find. Specializing Generic Phrases "['he enu'y for an objcct may itself have a hicrarcifical structure that parallels point fi)r point the I~ierarehical sU'ucture of the object's deseription in the domain. Figure two slzows the section of the dicti:mary network that annotates the supen:oncept chain front "jump-an:" to "object"; comparable dictionary networks can be built [.or hierarchies of roles or other hierarchical network structures. 
Noticc how the use of an inheritance m~hanisrn within the dictionary network (denoted by the vcrticat [inks betwccn roles) allows us on the one hand to state the determiner decision (show, bern only as a cloud) once and for all at thc level of the domain conccpt "object", while at the same time we can vo:umulate or supplant lexk:al material as we move down to more specific levels in the domain nctwork. Rgure Two: the recta-level dictionary network After all the inhent*n~c is factored in. dt¢ entry for. e.g., the generic concept "lump-ate" will de~:.ribe a noun phrase (represented by an thdiviual ¢oilcept in K.i..O~t;) ~,,hose head position, is filled lly the word "arc', classifier position by "jump", and whose determiner will be calculated (at run time) by die same roudne that calculated detemlinen ['or objects in general (e.g. it will react Io whedlcr 'Jt¢ reference is to a generic or an individual to how. many other objects have the same dcseription, to whether any spec~ contrustive effects are intended, etc. see [q'~ !). Should the planner d,'x:ide to use this entry by itself, say to produce "C205 is[ajump arc]", this dccripdon from the dictionary nctwork would be eonvercd to a proper constituent structure and integrated with the rest of the utterance under production. However. the entry will often be used in conjunction with the entries for several other domain objects, in which it is first manipulated as a deseription--constraint statement--in order to determine what 8ramroadcal consuuction(s) would realize the objects as a group. The notion of crea~ng a consolidated English phrase out of the phr~ t'or several different objects is central to the power of this dictionary. '['he designer is only expected to explicitly designate words for the generic objects in the domain network; the entries for the individual objects that the geueric objecLs de,scribe :rod cvcn the entries for a hicntrehical chain such as in figure two should typically be constructablo by default by fullowing general-purpo,Je linguistic rules and combination heud=ies. 58 t" Large entries out of small ones Figure three shows a sketch of the combination process, Here we need a dictionary entry to describe the relationship between the specific jump-arc C205 and the state it leads to, S/DCL, i.e. we want something like the sentence "(6"205) goes to (S/DCL)". where the refercnces in angle brackets would be ultimately replaced by their own English phrases. When the connecdng role relation ("next-state") can bc rendered into English by a conventional pattern, wc can use an automatic combination technique as in the figure to construct a linguistic relationship for the domain onc by using a conventional dictionary entry for the concept-role-value relations as specialized by the specific entry for thc role "next-state". The figure shows diagramaiically thc relationship between the domain network relation, its recta-level description as an object in the network fomlalism (i.e. it is an instance of a concept linked to one of its roles linked in turn to the roic value), and finally the corresponding conventional linguistic construction. 
The actoal Zl,.O~t; reprcscntation of this relation is considerably more elaborate since the links themselves are reified, however this sketch shows the rclevant level of detail as regards what kinds of knowledge arc nccded in or'tier to assemble the entry R [raducable-v~ goes to I JUMP-ARC blV:CONCEPT__ROt _V*LUE) ; ; \ CaAS'C-CLAUS J" Figure Three: Combining Entries by Network Relations procedurally. First the domain reladon is picked out and categorized: here this was done by a the conventional recta-level description of the relation in terms of the VJ,.ONE primitives it was built from, below we will see how a comparable categorization can be done on a purely linguistic basis. With the relation categorized, we can associated it with an entry in the dictionary network, in this ease an instance of a "basic-clause" (i.e. one without any adjuncts or rom-transfomaations). We now have determined a mapping from the entries for the components of the original domain relation to linguistic roles within a clause and have. in effect, created the relation's entry which we could then compile for efficiency. There is much more to be said about how the "embedded entries" can be controlled, how, for example, the planner can arrange to say either "C205 goes to S/DCL" or "There is a jump arc going to S/DCL" by dynamically specializing the description of the clause, however it would be taking us too far afield: the interested reader is referred to [thesisl. The point to be made here is just that the writer of the dictionary has an option either to write specific dictionary entries for domain relations, or to leave them to general "macro entries" that will build them out of the entries for the objects involved as just sketched. Using the macro entries of course meau that less effort v, ill be needed over all, but using specific entries permits one to rake advantage of special idioms or variable phrases that are either not productive enough or not easy enough to pick out in a standard recta-level description of the domain network to be worth writing macro entries for. A simple example would be a special entry for when one plans to describe an arc in terms of both its source and its nexi states: in this case there is a nice compaction available by using die verb "connect" in a single clause (instead of one clause for each role). Since the ~I,-O~F. formalism has no transparent means of optionally bundling two roles into one, this compound rcladon has to be given its own dictionary entry by hand. Making colnbinations linguistically Up to this point, we have been looking at associations between "organic" objects or relations in the domain network and their dictionary entries for production. It is often the case however, that the speech planner will want to talk about combinations of objects or complex relations that have been assembled just for the occasion of one conversation and have no natural counterpart within the regular domain network. In a case like this there wuuld not already be an entry in the dictionary for the new relation; however, in most eases we can still produce an integrated phrase by looking at how the components of the new relation can combine linguistically. These linguistic combinations are not so much the provence of the dictionary as of my linguistic realization component. MuMnI,E. 
~.IUSIBLE has the ability to perform what in the early days of transformational generative grammar were referred to as "gcneraliT.ed transformations": the combining of two or more phrases into a single phrase on the basis of their linguistic descriptions. We have an example of this in the original example of the default description ofC205 as "the jump arc fram S/N P to S/DC L". This phrase was produced by having the default planner construct an expression indicating which network relations to combine (or more precisely, which phrases to combine, the phrases being taken from the entries of the relations), and then pass the expression to MI.MnLE which produces the "compound" phrase on the basis of the linguistic description of the argument phrases. The expression would look roughly like this: 1 (describe C205 as (and [np Ihejumparcl [clau:~ C205 [rcdueable-vp Comes from S/NP ] } [clause C205 [rcducable'~p goes lo S/OCL I ] MUMBLE's task is the production of an object description front the raw material of a noun phrase and two clauses. To do this, it will have to match die three phrases against one of its known linguistic combination patterns, just as the individual concept, role, and value were matched by a pattern from the Itt,.ONl.: representation formalism. In this case, it characterizes the trio as combinable through the adjunction of. the two clauses to the noun phrase as qualifiers. Additionally. the rhetorical label "rcdueable-vp" in the clauses indicates that their verbs can be omitted without losing significant 1. A "phrase" in a dictionary entry does not cnnsist simply of a string of words, They are actually schemata specifying the grammatical and rl~etorical relationships that the words and argument d(unain objects participate in according to their functional n~/cs. The bracketed CXl)rcssious shown in the cxprc.~ion are fur expository purposes only and are modeled on the usual representation ft~r iJhraso structure. I-mbedded objects such as "C205" or "S/NP" will be replaced by their own English phrases incrementally as the containing phrases is realized, 59 intbrmation, triggering a stylistic transformation co shorten and simplify the phrase. At this point MUMIIU': h;LS a linguistic reprcsenmtion of its decision which is turned ovcr to the normal realization pruccss For completion. Exauszivc details of these operations may be found in ["1~ . Contextual Effects The mechanisms of the dictionary per se perform two ~ncdons: (l) the association of the "ground level" linguistic phrases with the objeets of the domain network, and (2) the proper paczeros for accumulating the linguistic dcscriptions of other parts of the domain network so as to describe complex generic relatioos or to describe individual concepts in terms of their specific rela0ons and thcir generic description (as widt C205). On top of these two levels is graRcd a third lcvcl of contextually-triggered effects; these effects are carried out by MUMI|IJ." {the component that is maintaining the linguistic context that is the source of the uiggcrs). ~ting at the point where combinations are submitted to it as just described. Tu best illustrate the contextual cffec~ wc should mm, e to a slightly more complex example, o,c that is initiated by the speaker's planning process rathcr by than a defnuiL Suppose that the speaker is talking about. the A r.~ state "SI(")CL" and wants to say in effect that it is part of the domain relation "ncxt-s~ite(C205)=SIIX~L". 
The default way to express this reladon is as a Fact about the jump arc C"205: but what we ~r¢ doing now is to use it as Fact about S/DCL which will require the production of a quite different ph~Lse. The planning process expresses this intention to MU.MIn.E with the ~[Iowing expression: (say-about C205 that (next-state C205 S/DCL)) The operator "say-about" is responsible for detcnnining, on the basis of the dictionary's description of the "neat-state" rcladon, what [-~ngiish construction to use in order to express the ~peaker's intentcd focus. When the dictionary contains several possible renlizating phrases for a relation (For example "next-.,4a~C'~5) L~ the nezI slate after soun~J, au~C'z~)" Of %e.,.-s~u~C205) ~ the target of C2o.s"). then "say-about" will have to choo~ between the reafiz~tions on the basis either of some stylistic criteria, For example whether one of the contained relations had been mentioned recently or ~me default (e.g. "sm~-~,~C'..0~"). Let us suppose for present purposes that the only phrase listed in dictionary for the next-state relation is the one from the first example, Le. Now. "say-about"s goal is a sentence that has S/DCL as its subje=. It can tell from the dictionary's annotauon and its English grammar that the phrase as it stands will not permit this since the verb "go to" does not passiviz¢; however, the phrase is amenable to a kind of deffiog transformation that would yield the text: "S/DCL L~ where C205 goe~ to'. "Say-about" arraogcs for this consu'uccion by building the structure below as its representation ofi~ decision, passing it on to .~R:),mu.: for realizatiou. Note ~at this structure :'- .,.,.,.,.,.,.,.,.,~sentially a linguistic constituent structure of the .sual sort, describing the (annotated) surtace sU-ucture of dze intended text co the depth that "say-abouC' has planned it, 60 dllu~ [sul~-ctl [prmlte~ml [rea~,~-~l [wn.trac-I Figure Four:. the output of the "say-about" operator The ~nctional labels marking the constituent positions (i.e. "subject", "verb", ccc.) control the options for the realization of the domain-network objects they initially con=in. (The objects will be subscquendy replaced by the phrases that reafizc thcm. processing from leR to righc) Thus the first instance of S/I)CI_ in the subject position, is realized without contextual effects as the name ".V/DCL": while the second instance, acting as the reladve pronoun fur the cleft, is realized as the interrogative pronoun "where": and the final instance, embedded within the "next-state" relation, is suprcsscd entirely even though the rest of the relation is expre.~cd normally. These cnutextoal variations are all entirely transparent to the dictionary mechanisms and demonstrate how we can increa~ the utility of the phrases by carefully annotating them in the dictionary and using general purpose operations chat are ~ggered by the descriptions of the phrases alone, therefore not needing to know anything about their semant~ content. This example was of contextual effects that applied aRer the domain objects had been embedded in a linguistic structure, l.inguis~c context can have its effect eadier as well by monitoring the aecumuladon p~occ~ and appiyiog its effects at that level. Considering how the phrase for the jump are C2.05 would be fonned in this same example. Since the planner's original insmaction (i.e. "(say-abm,t_ )" did not mention C205 spccifcally, the description of that ubjec~ will be IeR to the default precis discussed earlier. 
In the original example, C205 was dc~ribed in issoladon, her= it L~ part of an ongoing dJscou~e context which muse be allowed ru influence the proton. The default description employed all three of the domain-network relations that C205 is involved in. In this discourse context, however, one of those relations, "neat-smte(c2OS)=SIDCL". has already be given in the text: were we to include it in this realization of C'205. the result would be garishly redundant and quite unnatural, i.e. "3/DCL ~ where the jump arc from S/NP Io S/DCL goes to". To rule out this realization, we can filterttm original set of three relations, eliminating the redundant relation bemuse we know that it is already mentioned in the CCXL Doing this en~ils (1) having some way to recognize when a relauon is already given in the text. and (2) a predictable point in the preec~ when the filtering can be done. rha second is smaight fo~arcL the "describe-as" fimetion is the interface between the planner and the re',dization components; we simply add a cheek in t~t function to scan through the list of relation-entries to bc combined and arrange for given relations to be filtered ouc. As fi)r the definition of "given". MUMBLE maintains a multi-purpose record of the cunmnt discourse context which, like the dictionary, is a recta- level network describing the original speaker's network from yet this other point of view. Nlem-links connect relations in the speaker's network with the mics they currendy play in ~be ongoing discourse, as illustrated in figure five. l~te definition of "give n" in terms of properties defined by discou~e roles such as these in conjunction with hcuristics about how much of the earlier text i~ likcly to still be rcmcmbered. ••ureo.state . . . . Current Discourse Conte~ ~s/ocL ~,h~l," current-clausJ he / ad(cu rront- relative-clause) subject(cu f rent.sentence) Figure Four: using the discourse-context as a filter Once able to refer to a rich, linguistically annotated description of the context, the powers of the dictionary can be extended still further to incorporate contextually-triggered transformations to avoid stylistically awkward or ungrammatical linguistic combinations. This part of the dictionary design is still being elaborated, so l will say only what sort of effects are trying to be achieved. Consider what was done earlier by the "say-about' function: there the planner proposed to say Something about one object by saying a relation in which the object was involved, the text choosen for the relation being specially transformed to insure that its thematic subject was the object in question, in these situations, the planner decides to use the relatinos it does without any particular regard for their potential linguistic structure. This means that there is a certain potential for linguistic disaster. Suppose we wanted to use our earlier trio of relations about C205 as the basis of a question about S/DCI,; that is, suppose our planner is a program that is building up an augmented transition net in response to a description fed to it by its human user and that it has reached a point where it knows that there is a sub-network of the ATN that begins with the state S/DCI. but it does not yet know how that sub-network is reached. (This would be as if the network of figure one had the "unknown-state" in place of S/NP.) Such a planner would be motivated to ask its user: (what <state> is-.~Jeh-thnt next-state(C20S)=<state>) Realizing this question will mean coming up with a description of C205. 
that name being one made up by the planner rather than the user. It can of course be described in terms of its properties as already shown; however, if dais description were done without appreciating that it oecured in the middle of a question, it would be possible to produce the nonsense sentence: " where does the jump arc from lead to S/DCL?' Here the embedded reference to the "unknown-state" (part of the relation, "source-state(C205)=unknown-state") appearcd in the text as a rclative clause qualiF/ing the reference to "the jump arc". Buc because "unknown- state" was being questioncd the English grammar automatically suppressed iL This lead R) the nonsense result shown because, as linguists have noted, in English one cannot question a noun phrase out of a relative clause--that would be a violation of an "island constraint" C¢. ~.. Tlle problem is, of course, that the critical relation ended up in a relative clause rather than in a different part of the sentence where is suppression would have been normal, It was not inevitable that the nonsense form was chosen; there are equally expressive ~ersions of the same content, e.g. "where does the jump arc to S/DCI. come from?', the problem is how is a planner who knows nothing about grammatical principles and does not maintain a linguistic description of the current context to know not to choose tile nonsense form when confronted with ostensibly synomous alternatives. The answer as [ see it is that the selection should not be the planner's problem--that we can leave the job to the linguistic realization component which already maintains the necessary knowledge base. What we do is to make the violation of a grammatical constraint such ,as this one of the criteria for filtering out realizations when a dictionary entry provides several synonomous choices, [n dais case, the choice was made by a general transformation already within the realization component and the alternative would be taken from a knowledge of linguistically equivalent ways to ajoin the relations. A grammatical dictionary filter like this one for island-constraintS could also be use for the maintaince of discourse focus or for stylistic heuristics such as wheth(:r to omit a reducable verb. In general, any decision criteria that is common to all of the dictionary entries should be amenable to being abstracted out into a mechanism such as this at which point they can act transparendy to the planner and thereby gain an important modularity of linguistic and conceptual/pragmatic criteria. "['he potential problems with this technique involve questions of how much information the planner can rcasenably be expected to supply the linguistic componenL The above filter would be impossible, for example, if the macro-entry where it is applied were not able to notice that the embedded description of C205 could mention the "unknown-state" before it committed itself to ),he overall structure of the question. The sort of indexing required to do this does not seem unreasonable to me as long as the indexes are passed up with the ground dictionary entries to the macro- entries. Exactly how to do this is one of the pending questions of implementation. 61 t • • The dictionaries of other production systems in the literature have typically been either trivial. ~,nconditionai object to word mappi.gs Cf3, C'~3 , orelse been encoded in uncxtcndable procedures CZ.3. A notable exception is the decision tree technique of[goldman] and as refined by researchers at the Yale Artificial Intelligence Protect. 
The improvements of' the present technique over decision trees (which it otherwise resembles) can be found (1) in the sophistication of its representation or" the target English phrases, whereby abstract descriptions of tile rhetorical and syntactic structure of the phrases may be manipulated by general rules that need not know anything about their pragmatic content: and (2) in its ability to compile decision criteria and candidate phrases dynamically for new objects or relations in terms of r.hc criteria and phrases from their generic descriptions. l'hc dictionary described in this paper is not critically dependent on the details of" the [ingui'~tic reali~,.ation component or planning component it is used in conjunction with. It is designed, however, to make maximum use or" whatever constraints ,nay be available f'n)m the linguistic context (broadly construed) or from parallel intentional goals. Consequcndy. componcnts that do not cmploy MI.'3,IBI.E'$ tc~hniquc of represcnting the planned and already spoken parts of. thc utterance explicitly along with its linguistic structure ,nay bc unable to use it optimally. References [I] Brachman (]979) Rcseareh in Natural Language Understanding. Quarterly "['echnicai Progress Rcport No. 7. [k~It Beranek and Newman inc. [2] Davcy (1974) Discourse Production Ph.D. Dissertation. -Edinburgh University. [3] Goldman (1974) Compnter Generation of Natural I.anguage from a Deep Conceptual I'lase. memo AIM-247, Stanford Artificial Intelligence Laboratory. [41 McDonald. D.I). (1980) [.angu:tge Production as a Process of Decision-making Under Constraints. Ph.D. Di~cmttion. MIT, to appcar as a technical report from the MIT Artificial Intelligence Lab. [5] (in preparation) "1 .anguage Production in A.]. - a review", manuscript being revised ,'or publication. [6] Ross (1%8) Constraints on Vari-lMes in Syntax. Ph.D. Dissertation, Mrr. [7] Swat,out (]977) A Digitalis Therapy Advisor with F-xplanatlons Mastcr,J Dissertation, MIT. [8] Winograd 0.973) Understanding Natund language Academic Press. 62
1981
14
Analo~es in Spontaneous Discourse I Rachel Relc bman Harvard University and Bolt Beranek and Newman Inc. Abstract This paper presents an analysis of analogies based on observations of oatural conversations. People's spontaneous use of analogies provides Inslg~t into their implicit evaluation procedures for analogies. The treatment here, therefore, reveals aspects of analogical processing that is somewhat more difficult to see in an experimental context. The work involves explicit treatment of the discourse context in which analogy occurs. A major focus here is the formalization of the effects of analogy on discourse development. There is much rule-llke behavior in this process, both in underlying thematic development of the discourse and in the surface lir~ulstlc forms used in this development. Both these forms of regular behavior are discussed in terms of a hierarchical structurin6 of a discourse into distinct, but related and linked, context spaces. 1 Introduction People's use of analogies in conversation reveals a rich set of processing strategies. Consider the following example. A: B: C: I. I think if you're going to marry someone in the 2. Hindu tradition, you have to - Well, you - They 3. say you give money to the family, to the glrl, 4. but in essence, you actually buy her. 5. It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with film stars, right? They've 8. essentially bought them by their status (?money). 9. HO, but, there, the woman is selllng herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her. There are several interesting things happening in this exchange. For example, notice that the analogy is argued and discussed by the conversants, and that in the arEumentatlon C uses the close discourse deictlo "these" tO refer to the in~tlatlng subject of the a~alogy, and that she uses the far discourse delctlo "there" to refer to the linearly closer analogous utterances. In addition, notice that C bases her rejection ca a non- correspondence of relations effectlng the relation claimed constant between the two domains (women hei~ sold). She does not simply pick any arbitrary non- correspondence between the two domains. In the body of this paper, I address and develop these types of phenomena accompanying analogies in naturally ongoing discourse. The body of the paper is divided into four sections. First a theoretic framework for discourse is presented. This is followed by some theoretic work on analo~es, an integration of this work with the general theory of discourse proposed here, and an illuntratlon of how the II would llke to thank Dedre Gentner for many useful comments end discussions. integration of the different approaches explicates the issues under discussion. In the last section of the paper, I concentrate on some surface llngulstlo phenomena accompanying a oonversant's use of analogy in spontaneous discourse. 2 The Context Space Theory of Discourse A close analysis of spontaneous dialogues reveals that discourse processing is focused and enabled by a conversant's ability to locate ~ single frame of reference [19, 15, 16] for the discussion. In effective communication, listeners are able to identify such a frame of reference by partitioning discourse utterances into a hierarchical organization of distinct but related and linked context snaces. At any given point, only some of these context spaces are in the foreground of discourse. 
Foreg~ounded context spaces provide the ~eeded reference frame for subsequent discussion. An abstract process model of discourse generation/interpretation incorporatlng a hierarchical view of discourse has been designed using the formalism of an Augmented Transition Network (ATN) [29] 2 . The ~Ta~r encoding the context space theory [20, 22] views a conversation as a sequence of conversatlooal moves. Conversational moves correspond to a speaker's communioatlve goal vis-A-vis a particular preceding section of discourse. Among the types of conversational moves - speaker communicative goals - formalized in the grammar are: Challenge, Support, Future-Generallzation, and Further-Development. The correlation between a speakerPs utterances and a speaker's communicative goal in the context space grammar is somewhat s~m~lar to a theory of speech acts A la Austin, Searle, and Grloe [I, 2q, 9]. As in the speech act theory, a speaker's conversatloral move is recognized as a functional communicative act [q] with an associated set of preconditions, effects, and mode of fulfillment. However, in the context space approach, the acts recognlzed are specific to maxlm-abldlng thematic conversational development, and their preconditions and effects stem from the discourse structure (rather than from/on arbitrary states in the external world). All utterances that serve the fulfillment of a slng~le communicative goal are partitloned into a single discourse unit - called a context space. A context space characterizes the role that its various parts play In the overall discourse structure and it explicates features relevant to "well-formedness" and "maxim-abiding" discourse development. ~ine types of context spaces have been formalized in the grammar representing the different constituent types of a discourse. The spaces are characterized in much the same way as elements of a • Systemic Grammar" A la Halllday [10] via attributes represented as "slots" per Minsky [I~]. All context spaces have slots for the followlng elements: 2The rules incorporated in the grammar by themselves do not form a complete system of discourse generation/inter pretatlon. Rather, they enable specification of a set of high level Semantlc/log~Ical constraints that a surface lln~istlc from has to meet in order to fill a certain maxlm-abidlng conversational role at a given point in the discourse. 63 o a propositional representation of the set of functionally related utterances said to iie An the space; o the communicative goal served by the space; o a marker reflecting the influential status of the space at any given point in the discourse; o links to preceding context spaces in relatlon to which this context space wan developed; o specification at the relations involved. An equally important feature of a context space are its slots that hold the inferred components needed to recognize the communicative goal that the space serves in the discourse context. There are various ways to fulfill a given communicative goal, and usually, dependent on the mode of fulfillment and the goal in question, one can characterize a set of standardized implicit components that need to be inferred. For example, as noted by investigators of argumentation (e.g., [~, 23, 5, 22]), in interpreting a proposition as supporting another, we often need to infer some sot of mappings between an Interred generic principle of support, the stated proposition of support, and the claim being supported. 
We must also infer some general rule of inference that allows for conclusion a claim given the explicit statements of support and these inferred components. Reflecting this standardization of inferential elaborations, I have oategorlzed dlfferent types of context spaces based on communicative goal and method fttlftllment charaeterlzatlons (i.e., specification of specific slots needed to hold the standardized inferential elaboratlons particular tO a g~Lven goal and mode of fulfillment). Dellneatioo of context spaces, then, is functlomally based, and in the context space grammar, ImplAclt components of a move are treated an much a part of the discourse as those components verbally expressed. 3 The Analogy Conversational Move Znterpretlng/understanding an analogy obviously involves some inferenoing ca the part of a listener. An analogous context space, therefore, has some slots particular to it. The grammar's characterization of an analogous context space is derivative from its for~uLl analysis of an analogy oonversatlom-l move. 3.1 The Structure-Happing Approach Identification of those aspects of knowledge considered important in analogy seems to be of major cavern in current Investlgatlon of this cognitive task (e.g., [2, 3, 6, 7, 8, 11, 12, 13, 18, 25, 28]). GentnerJs ~ theory [6, 7, 8] seems most compatlble with the findings of the context space approach. Gentner argues that analogies aa-e based on an implicit understanding that "identical operations and relationships hol~ among non identical things. The relational structure is preserved, hut not the objects" [8, p.~]. Gentner's analysis can be used to explain B's analogy between the Hindu and Western traditions in Excerpt I. The relation ~ BUYING WOMAN. FOR $0~ COMPANION FUNCTION is held constant between the two doma/ns, and the appropriateness Of the analogy iS not affected, for instance, hy the noncorrespondlng political views and/or religions of the two societies. While Gentner cuts down on the number of correspondences that must exist between two domains for an analogy to be considered good, she still leaves open a rather wide set o£ relations that must seemingly be matched between a base and target domain. We need some. way to further characterize Just those relations that must be mapped. For example, the relation TRADING WITH CHINA is totally irrelevant to the Hindu-Western analogy in this discourse context. As noted by Lakoff & Johnson [12], metaphors simultaneously "highlight" and "hide" aspects of the two domains being mapped onto each other. The context space theory supplements both Lakoff & Johnson's analysis and the structure-mapplng approach in its ability to provide relevant relation characterization.. 3.2 The Context Space Approach In the context space theory, three elements are considered vital to analogy evaluation: o the structure mapping theory o relevant context identification o communicative goal identification The context space grammar's analysis of analogies can be characterized by the following: Explicating the connection between an utterance purportlng to make a claim analogous to another rests on recoghizlng that fc~. two propositions to be analogous, it anst be the cnse that they can bo ~h be seen an ~nstanc,s Of some more general claim, such that the predicates of all three propositions are identloal (i.e., relation identity), and the correspondent objects of the two domains involved are both subsets of some larger sot specified in this more general claim. 
Rejecting an analogy is based on specifylng some relation, RI, of one domain, that one implies (or claims) is not true in the other; or is based on specifying some non-ldentloal attrlbute-value pair ~'om whloh such a relation, RI, can be inferred. In both cases, RI oust itself stand in a 'CAUSE' relation (or soma other such relatlon 3) with one Of the relations explicitly mentioned in the creation of the analogy (i.e., one being held constant between the two domains, that we csul call RC). Furthermore, it must be the cnse that the communicative goal of the analogy hinges on RI(RC) being true (or not true) in both of the domains. 3.3 A-alogous Context Spaces Re£1ectlng this analysis of ~--!o~Les, all analogous context spaces have the followlng slot deflnltlons (among others). Abstract: This slot contains the generic proposltlon, P, of which the Inltlatlng and analogous claim are instances. Reflecting the fact that the same predication must be true of both cla.lms, 3Since aceordin~ to this analysis the prime focal point of the analogy is always the relations (i.e., "actions") being held constant, and a major aspect of an "action" is its cause (reason, intent, or effect of occurrence), a non~orrespondenoe in one of these relations will usually invalidate the point at the analogy. 64 Relations: Proposition: Mappings: the predicate in the abstract slot is fixed; other elements of the abstract are variables corresponding to the abstracted clansea of which the specific elements mentionod in the analogous and initiating clalms are members. The structure of this slot, reflecting this importance of relation identity, consists of two subslots: This slot contains a llst of the relations that are constant and true in the two domains. This slot contains the generic proposition defined in terms of the constant predicates and their variable role fillers. This slot contains a llst of lists, where each llst corresponds to a variable of the generic proposition, P, and the m-ppings of the objects of the domain specified in the initiating context space onto the objects specified in the analogous context space. 3.~ Communicative Goals Served by Analogies An analogY conversational move can carve in fulfillment of a number of different communicative goals. Major roles currently identified are: I. Means of Explanation 2. Means of Support 3- Means of Implicit Judgement (i.e., conveying an evaluative opinion on a given state-of-a/falrs by comparing it to a situation for which opinion, either positive or negative, is assumed generally shared) 4. Topic ShiSt by Contrast 5. Hemna for Future-GeneraLizatlon ~n maxlm-abldlng discourse, only elements felt to be directly analogous cr contrastlve to elements contained in the Inltiat~ng context space are discussed in the analogous space". Analogy construction entails a local shift in toplo, and, therefore, in general, a/tar discussion of the analogous space (iscluding its component parts, such as "supports-of," "challenges-of,, etc.), we have immediate resumption of the initiating context space. (When analogies are used for goals ~ & 5 noted above, if the analogy is accepted, then there need not be a return to the initiating space.) 3.5 Illuetratlon In this section, I present an analysis of an excerpt in which convereants spontaneously generate and argue about analog~les. The analysis hiEhlights the efficacy of inteKratlng the structure napping approach with r~e communicative gnal directed approach of the context space theory. 
The excerpt also illustrates the rule-llke behavior governing continued thematic development of a discourse after an analogy is given. Excerpt 2 is taken from a taped conversation between two friends, M and N, wherein M, a British citizen, is trying to explain to H, an American, the history cf the current turmoil in Ireland. The conversational moves involved in the excerpt (A & D being of the same category) are the following: A: ADalogy B: Challenge of Analogy C: Defense of A~alogy D: Alternate Aralogy E: Return to the initiating context space of the analogy; with the return belng in the form of a "Further-Development" (as signalled by the clue "sow"). H: N: M: N. M: N: M: N: M: N: I. And, of course, what's made it worse this tima 2. is the British army moving in. And, moving in, 3. in the first place, as a police force. It's 4. almost a Vietnam, in a way. 5. But, all within Northern Ireland? 6. All within Northern Ireland. Moving in as a 7. police force, belng seen by everybody as a 8. police force that was going to favor the 9. Protestants. 10. It'd rather be llke Syria being in Lebanon, 11. rlght? 12. I don't know enough about it to know, maybe. 13. There's - Where, there's a foreign police force I~. in one country. I mean, when you say it's llke 15. Vietnam, I can't take Vietnam. Vietnam is North 16. Vietnam and South Vietnam. 17. No, I meant war. You know, moving in and sayln6 18o we're a police action and actually flg~ting a war 19. when you got there. 20. Oh, well, that's Syria, that's obviously Syria, 21. rlght? Who are implicitly supporting - not 22. supportlng - 'cause actually it's very similar 23. in Lebanon, right? You have the Catholics and 2~. the Moslem. That's right, that's Lebanon. 25. I suppose, yes. 26. You have the Catholics and the Moslem, and then 27. Syria's eomlng in and implicitly supporting the 28. Moslem, because Syria itself is Moslem. 29. Now, England is Protestant? qOf course, digressions. this does not preclude explicitly noted 65 3.5.1 Analysis We ~an begin the analysis with a more formal chaFaoterlzatlon of M's analogy conversational move. The generic proposition underlying N's analogy: $Countryl 81 $Country2 $Countryl R2 NEthnioGroup2 Where, the constant relations are: R1: MOVE-IN-AS-~OLICE-FORCE H2: TAKE-SIDE-OF The objects sapped onto each other: Mappings1: England, America Mappings2: Ireland, Vietnam Mapping=S: Protestants, South Viet~amese The communicative goal served by the analogy: Negative Evaluation on England In rejection of the analogy, N claims that in the Vietnam case alone the following three relations occur: R3: FOREIGN INVASION Rq: AID AGAINST FOREIGN INVASION RS: CAUSE Where, R5 la a relation between relations, i.e., 83 CAUSE a~ 5 . The purpose of M's analogy is to hig~llght her negative assessment of England in the Ireland altuatlon (as identified by her utterance, "And, whatOs made it worse this tlme ..."). M attempts to accomplish this by mapplng the presummed acknowledged negative assessment of America in Vietnam onto England. Such a negative evaluative ~apping, however, can only occur of course if one oondenns America's involvement in Vietnam. N denies such a presummed negativity by arguing that it is possible to view America's involvement in Vietnam a~ coming to the aid of a country under foreign attack ~ (i.e., as a positive rather than a negative act). 
Thus, argues N, the "cause" relations of the acts being held constant between the two domain~ (i.e., enteran~e as a police force but being partisan) are quite different in the two cases. And, in the Vietnam case, the cause of the act obviates any common negativity associated with such "unfair police force treatment." There is no negativity of America to map onto England, and the whole purpose of the analogy has failed. Hence, according to 5Rq can be thought of as another way of loo~.ing at R1 and R2. Alternatively, it could be thought of as replacing RI and R2, since when one country invades another, we do ~ot usually co~slder third party intervention as mere "coming in as a polio= force and taklng the slde of," but rather as an entrance into an ongoing war. However, I think in one light one oen view the relations of 81 and R2 holding in either an internal or external war. 6Most criticisms of America's involvement in Vietnam rest on viewing it as an act of intervention in the internal affairs of a country agalnst the will of half oE its people. H, the analogy in thls discourse context is vacuous an~ warrants rejectlon 7. After N's rejection o~ M'e analogy, and N's offering o~" an alternative analogy , which is somewhat accepted by M ~ as predicted by the gr--~r's analysis of an analogy conversational move u~ed for purposes or evaluatlon/Justlficatlon, it is time to have ~he initiating subject of the analogy returned-to (i.e., i~ is time to return to the subject of Br~Italn's moving into Ireland) The return, on Line 28, in the for= of "Further- Development," constitutes a subordinating shaft ~rom dlsoussion of ~e event of the British a~my entering Ireland onto dlsousslon of England's underlying mot~ivatlons and reasons for engaging in this event. The form of return illustrates Lakoff & Johnson's notion of a metaphor creating new meanlngs for u~, and its ability to "induce new similarities" [12]. That is, it exemplifies a conversant's attempt to map new knowledge onto pre-existing knowledge of a domain based upon, and induced by, an analogy ,,,de to this domain. An appropriate extended paraphrase= of N's question on Line 28 is: "Okay, so we accept Syria's presence in Lebanon as a better analogy for England's presence in Ireland. Now, we know, or have Just shown that Syria's bias to the Moslems can be explained by the fact that Syria herself is Moslem. It has been stated that England, in a sinMilar sltuatl~R , is favoring Protestants. can we then carry • otlves'" over as well in the analogy? That is, can we then infer that England is Favoring the Protestants because she is Protestant?" 7In a different context, perhaps, i.e., had the analogy been cited for a different purpose, N may have accepted it. In addition, it is iaportant to recognize ~at though there are mmerous o~her non-correspondences between the Amlrioan-Vletnam and England-L-=land situations (e.g., the respective geographlc distances involved), N's randoa selection of any one of these other nonnorresponding relations (irrespective of thelr complexlty) would not have necessarily led to effeotlve communication or a reason to reject the analogy. 8N'= citing of this alternative analogy is supportive of the grammar's analysis that the purpose of an analogy is vital to Its acceptance, slope, it happens that N views Syria's intervention in Lebanon quite negatively: thus, her cho£ce of this domain where (An her view) is=re is plenty of negativity to ~p. 
9Notice, by the way, that in tsr'~a of "at~ribute identity," Amities is a =mob closer latch to England than Syria la. This example supports the theory that "attribute identity" play= a milLimal role in analogy ~appings. 10The fact that M attempts to map a "cause" relation between the two domains, further supports the theory that it is correspondence of sohesatizatlon or relations between dosmins, rather than object identity, that is a governing criteria in analogy construction and evaluation. 66 Surface Lingulstlc Phenomena The rules of reference encoded in the context space grammar do not complement traditional pronominallzatlon theories which are based on criteria of recency and resulting potential semantic amblguities. Rather, the rules are more in llne with the theory proposed by Olson who states that "words designate, signal, or specify an intended referent relative to the set of alternatives from which it must be differentiated" [17, p.26~]. The context space grammar is able to delineate this set of alternatives governlng a speaker's choice (and listener's resolution) of a referring expresslon ;I by continually updating its model of the discourse based on its knowledge of the effects associated with different types of conversational moves. Its rule of reference, relevant to current discussion, is: Only elements in a currently active and controlling context space pair are in the set of alternatives vying for pronominal and close delctic referring expressions. The context space grammar continually updates its model of the discourse so that at any given point it knows which preceding utterances are currently in the active and controlling context spaces. Discourse model updating is governed by the effects of a conversational move. Major effects of most conversational moves are: o changes to the influential statuses of preceding context spaces; changes to focus level assignments of constituents of the utterances contained in these Spaces; o establlshment of new context spaces; the creation of outstanding discourse expectations corresponding to likely subsequent conversational moves. The effects of initiating an analogy conversational move are to: put the initiating context space in a Controllln~ state (denoting its foreground role during the processing of the analogous space); o create a new Active context space to contain the forthcoming analogous utterances; create the discourse expectation that upon completion of the analogy, discussion of the initiating context space will be resummed (except in cases of communicative goals q and 5 noted above). Endin~ an analogy conversational move, makes available to the grammar the "Resume-lnitlatlng" discourse expectation, created when the analogy was first generated. The effects of choosing this discourse expactation are to: 11Lacking from thls theory, however, but hopefully to be included at a later date, is Webber's notion of evoked entities [27] (i.e., entities not previously mentioned in the discourse but which are derivative from it - especially, quantified sets). 67 o Close the analogous context space (denoting that the space no longer plays a foreground discourse role); o reinstantlate the initiating context space as Active. Excerpt 3 illustrates how the grammar's rule of reference and its updating actions for analo@les explain some seeming surprising surface linguistic forms used after an analo~ in the discourse. The excerpt is taken from an informal conversation between two friends. 
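Before the excerpt itself, the updating actions just listed can be summarized in a short sketch. This is an illustrative rendering only (the class and method names are mine, not the grammar's): context spaces carry a status of active, controlling, or closed; initiating an analogy promotes the initiating space to controlling, opens a new active space, and records the Resume-Initiating expectation; choosing that expectation later closes the analogous space and reinstates the initiating one.

# Minimal sketch (names invented here) of the discourse-model updates the
# grammar performs for analogy conversational moves.

class ContextSpace:
    def __init__(self, label):
        self.label = label
        self.status = "active"          # active | controlling | closed
        self.elements = []              # discourse entities mentioned in this space

class DiscourseModel:
    def __init__(self):
        self.spaces = []
        self.expectations = []          # outstanding discourse expectations

    def new_space(self, label):
        space = ContextSpace(label)
        self.spaces.append(space)
        return space

    def initiate_analogy(self, initiating):
        """Effects of beginning an analogy: the initiating space becomes
        Controlling, a new Active space holds the analogous utterances, and a
        Resume-Initiating expectation is recorded."""
        initiating.status = "controlling"
        analogous = self.new_space("analogous:" + initiating.label)
        self.expectations.append(("resume-initiating", initiating, analogous))
        return analogous

    def end_analogy(self):
        """Effects of choosing the Resume-Initiating expectation: close the
        analogous space and reinstate the initiating space as Active."""
        kind, initiating, analogous = self.expectations.pop()
        assert kind == "resume-initiating"
        analogous.status = "closed"
        initiating.status = "active"
        return initiating

model = DiscourseModel()
c1 = model.new_space("C1: British army in Ireland")
c2 = model.initiate_analogy(c1)      # "It's almost a Vietnam, in a way."
# ... analogous utterances go into c2 ...
model.end_analogy()                  # "Now, England is Protestant?"
print(c1.status, c2.status)          # active closed

The rule of reference then simply reads off the spaces whose status is active or controlling when collecting candidate referents for pronouns and close deictics.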
In the discussion, G is explaining to J the workings of a particle accelerator. Under current discussion is the cavity of the accelerator through which protons are sent and accelerated. Particular attention should be given to G's referring expressions on Line 8 of the excerpt. Excerot G: j. G: j. G: I. It's just a pure electrostatic field, which, 2. between two points, and the proton accelerates 3. through the electrostatic potential. ~. Okay. 5. Same physical law as if you drop a ball. It 6. accelerates through a gravitational potential. 7. Okay. 8: And the only important point here is that 9. the potential is maintained with this 10. Cockcroft-Walton unit. Lines 1 - 3: Lines 5 - 6 : Lines 8 - 10: Context Space CI, The Initiating ~pace. Context Space C2, The Analogous Space. Context Space CI, The Resumption. On Line 9, G refers to the "electrostatic potential" last mentioned on Line 3. with the unmodified, close deictlc referring expression 12 "the potential," despite the fact that lntervening~ty on Line 5 he had referenced • gravitational potential," a potential semantic contender for the unmodified noun phrase. In addition, G uses the close delctic "here" to refer to context space CI, though in terms of linear order, context space C2, the analogous context space, is the closer context space. Both these surface linguistic phenomena are explainable and predictable by the context space theory. Line 8 fulfills the discourse expectation of resummlr~ discussion of the initiating context space of the analogy. As noted, the effects of such a move are to close the analogous context space (here, C2) and to reassign the initiating space (here, CI) an active status. As noted, only elements of an active or controlling context space are viable contenders for pronominal and close deictlc references; elements of closed context spaces are not. Hence, despite criteria of recency and resulting potentials of semantic ambiguity, G's references unambiguously refer to elements of CI, the active foregrounded context space in the discourse model. As a second example of speakers ualng close deictlcs to refer to elements of the initiating context space of an analogy, and corresponding use of far deictics for elements of the analogous space, lets re-consider Excerpt 1, repeated below. 12Th e grammar considers nThe X" a close deictlc reference as it is often used as a comple~ment to "That X," a clear far deictic expression [21] A: B: C: I. I think if you're going to marry someone in the 2. Hindu tradition, you have to - Well, you -They 3. say you give money to the family, to the glrl, q. but in essence, you actually ~uy her. 5. It's the same in the Western tradition. You 6. know, you see these greasy fat millionaires going 7. around with ~ilm stars, right? They've 8. essentially bought them by their status (?money). 9. No, but, there, the women is selling herself. 10. In these societies, the woman isn't selling 11. herself, her parents are selling her. Lines ; - 5: Context Space CI, The Initiating Space. Lines 5 - 8: Context Space C3, The Analogous Space. LAnes 9 - 11: Context Space C3, The Challenge Space. On Line 9, C rejects B's analogy (as signalled by her use Of the clue words, "t~o, but") by citing a nonoorrespondence of relations between the two domains. Notice that in the rejection, C uses the far daictic • there = to refer to an element of the linearly close analogous context space, C2,t3 and that she uses the clone de~ctlc "these" to refer to an 1~lement~ of the linearly far initiating context space, CI . 
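A toy rendering of the reference rule at work on Excerpt 3 (again illustrative only; the entity strings and function name are invented): once the return on Line 8 has closed the analogous space C2 and reactivated C1, only C1's elements compete for the unmodified noun phrase.

# Sketch of the reference rule applied to Excerpt 3.

def resolve(description, spaces):
    """Return eligible referents matching `description`: only entities in
    active or controlling context spaces compete for pronominal and
    close-deictic reference."""
    return [entity for space in spaces
            if space["status"] in ("active", "controlling")
            for entity in space["entities"]
            if description in entity]

c1 = {"label": "C1", "status": "active",     # initiating space, resumed
      "entities": ["the electrostatic potential", "the proton"]}
c2 = {"label": "C2", "status": "closed",     # analogous space, closed
      "entities": ["a gravitational potential", "a ball"]}

# "the potential" on Line 9 resolves only against C1, despite the more
# recent mention of a potential in C2:
print(resolve("potential", [c1, c2]))
# -> ['the electrostatic potential']

The same restriction accounts for C's choices in Excerpt 1: the close deictic "these" picks out elements of the initiating space, which retains its controlling status, while the far deictic "there" is left for the backgrounded analogous space.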
The grnmm"r models C's move on Line 9 by processing the • Challenge- Analogy-Hap plngs" (CAM) conversational move defined in its discourse network. This move is a subcategory of the grammar' s Challenge move category. Since this type of analogy challenge entails contrasting constituents of both the initiatlng and analogs context spaces'% the grammar must decide which of the two spaces should be in a controlling status, i.e., which space should serve as the frame of reference for subsequent processing. Reflecting the higher influential status of the initiating context space, the grammar chooses it as its reference frame Is. As such, on its transition path for the CAM move, move, the gr-mnutr" 13This conversation was recorded in Switzerland, and in terms of a locative use of delctics, Western society is the closer rather than Hindu society. Thus, the choice of deict£c cannot be explained by appeal to external reference criteria. 1~Notlce, however, that C does not use the close " delctlc "here," though it is a better contrastlve term with "there" than is =these." The rule of using close delctlcs seems to be slightly constrained in that if the referent of "here = is a location, and the s~aker is not in the location being referenced, then, s/he cannot use • here." 15Zn a different type of analogy challenge, for example, one could simply deny the truth of the smalo~us utterances. 16Zn the canes of Pre-Generalizatlon and Topic- Contrast-Shlft analogies, it is only after the analogy has been accepted that the analogous space is allowed to usurp the foreground role of the initiating context space. O puts the currently active context space (i.e., the analogous context space) in a state (reflecting its new background role); c leaves the initiating space in its Controlling state ( I. e., it has been serving as the reference frame for the analogy); o creates a new Active context space in which to put the challenge about to be put forward. Performing such u~latlng actions, and using £ts rule that only elements in a controlling or active space are viable contenders for close delotlc and pronominal references, enables the grammar to correctly model, explain, and predict C's reference forms on Lines 9 11 of the excerpt. 5 Conclusion In this paper I have offered a treatment of analogies within spontaneous dlalo6ues. In order to do thls I first proposed a context space model of discourse. In the model discourse utterances are partitioned into discrete discourse units based on the communicative goal that they serve in the discussion. All communicative acts effect the precedlng discourse context and I have shown that by tracking these effects the grammar can specify a frame of reference for subsequent discussion. Then, a structure-~applng approach tO analogies was discussed. In this approach it is claimed that the focus of an analogy is on system~ of relatlonships between objects, rather than on attributes of objects. Analysis of naturally occurring analogies supported this claim. I then showed that the context space theory's communicative goal analysis of discourse enabled the theory to go beyond the struoture-mapplng approach by providing a further specification of waich klnds of relationships are most likely to be Included in description of an analogy. 
• Lastly, Z presented a number of excerpts taken from naturally ongoing discourse and showed how the context space analysis provided a cogent explanation for the types Of analogies found in dlsoouree, the types Of reJemt£ons given tO them, the rule-like thematic development of a dlsoourse after an a~alogy, and the surface llngulstlc forms used in these development. In conclusion, analyzing speakers spontaneous generation of analogies and other conversants' reaotlons to them, provides ua an usually direct form by which access individuals' implicit criteria for analogies. These exchanges reveal what conversants believe analogies are responsible for and thereby what i~ormatlon they need to convey. 68 zEvv~ENCES 1. Auatln J.L. ~ow To Do T~n~s With Words. Oxford University Press, 1962. 2. Black M. Metaphor revisited. Metaphor and Thought,1979. 3. Carbonell J. Metaphor - A key to extenelble semantic analysis. Proceedings of the 18th Annual Meeting of the ACL, 1980, pp. 17-21, 4. Cohen P. PerrauZt R. Elements of a plan-ha~ed theory of speech acts. ~ ~aienee i (1979), 177-212. 24. Searle J.B. Speech Acts. Cambridge University Press, 1969. 25. Sternberg R.J. Component processes in analogical reasoning. Psvcholo~c~l Review 8~ (1977), 353-378. 26. Toulmln S. The Uses 9£ Arlene. Cambridge University Press, 1958. 27. Webber B. ~ formal aoDraoah tO discourse ana~hora. Ph.D. Th., Harvard University, 1978. 28. Winston P.H. Learning and reasoning by analogy. ~. Cohen R. Understanding arguments. CSCSI, Canadian Society for Computational Studies of Intelligence, 1980. 6. Gentner D. The structure of analogical models in science. ~51, Bolt Beranek and Newman Inc., 1980. 7. Gentner D. Metaphor as structure - preserving mapping. Proceedings American Psychological Association, 1980. 8. Gentner D. Are scientific analogies metaphors? Problems and Perspectives, 1981. 9. GFlce, H. P. Logic and conversation. Syntax and Semantics, 1975 • 10. H~liday M. A. K. Options and functions in the english clause. 8RNO Studies in Enmllsh 8 (1969), . 11. Hobbs J. Metaphor schemata, and selective inferenolng. Start[oral Research Isstltute, 1979. 12. LakofT G. Johnson M. Metavhor§ We Live BY. The University of Chicago Press, 1980. 13. Miller G.A. Images and models: Similes and metaphors. Metaphor and Thought,1979, pp. 202-250. I~. Minsky M. A framework for representing knowledge. The Psychology of Computer Vision, 1975. 15. Neisser U. ~ ~svcholoav. Meredith Publishing Company, 1967. 16. Lindsay P. Norman D. Human ~ Fr~cessln=. Academic Press, 1972. 17. Olscn D. Language and thought: aspects of a cognitive theory of semantios. ~ Review 77, " (1970), 2~7-273. 18. Ortony A. Beyond literal similarity. Psychological Review 86 (1979), 161-180. Proceedings of the ~ of /,~ ACM ~l, 12 (1980), . 29. Woods W. A. Transition network grammars for natural language analysis. Comm. ACM // ( 1970), 591-606. 69 ¢
1981
15
I~IGATION OF PROCESSING STRATEGIES FOR THE STRUCTURAL ANALYSIS OF ARGOMF/Trs Robin Cohen Department of Computer Science University of Toronto Toronto, Canada M5S IA7 2. THE UNDERSTANDING PROCESS This paper outlines research on processing strategies being developed for a language understanding systerN, designed to interpret the structure of arguments. For the system, arguments are viewed as trees, with claims as fathers to their evidence. Then understanding becomes a problem of developing a representative argtmlent tree, by locating each proposition of the argument at its appropriate place. The processing strategies we develop for the hearer are based on expectations that the speaker will use particular coherent transmission strategies and are designed to be fairly efficient (work in linear time). We also comment on the use by the speaker of linguistic clues to indicate structure, illustrating how the hearer can interpret the clues to limit his processing search and thus improve the co~lexity of the understanding process. 2.1 PROCI.~ING S'I~AT~GIES To prOcess an argument, each proposition is analyzed in turn. It is convenient to think of the representation for an argument as a tree with claims as fathers to their evidence. The speaker thus has a particular tree structure for the argument which he tranm~its in some order. The hearer must take the incoming stream of propositions and re-construct the logical structure tree. Although the speaker has available a wide variety of possible transmission algorithms, we claim that only a small n,~ber of these will be used. We look for tranm~ission algorithms that have associated reception algorithms such that both S and H can process in a reasonable amount of time. Consider the following strategies= i. BACKC4~DUND This paper focuses on one aspect of an argument understanding system currently being designed. An overview of the initial design for the system can be found in [Cohen 88]. In general, we are examining one-sided arguments, where the speaker (S) tries to convince the hearer (H) of a particular point of view. We then concentrate on the analysis problem of determining the overall structure of the argtm~nt. Considering an argument as a series of propositions, the structure is indicated by isolating those propositions which serve as CLAIMS and those which serve as EVIDENCE for a particular claim, and by indicating how each piece of evidence sup~orta its associated claim. A proposition E is established as evidence for a proposition C if they fit appropriate slots in one of the system frames representing various logical rules of inference, such that E is a premise to C's conclusion. For example, E will be evidence for C according to modus ponens if E-->C is true.. Establishing evidence is a complex process, involving filling in missing premises and recognizing the logical connection between propositions. In any case, our research does focus on reconstructing this logical form of the argument, aside from judgments of credibility. The initial design [Cohen 8g] adopts an unsophisticated processing strategy: each proposition is analyzed, in turn, and each is tested out as possible evidence for every other proposition in the argument. The current design seeks to imprOve that basic strate< ! to a selective process where the analysis for a given proposition is performed with respect to the interpretation for the overall argument so far. 
So, only particular propositions are judged eligible to affect the interpretation of the proposition currently being analyzed. Currently, we assume an "evidence oracle" which, given two propositions, will decide (yes or no) whether one is evidence for the other. With this "accepted" authority, a representation for the argument can be built as the analysis proceeds. (The design of the oracle is another research area altogether, not discussed in this paper). a) 9RE-ORDER The most straightforward transmission for an argL~nent is to present a claim, followed by its evidence, where any particular piece of evidence may, in turn, have evidence for it, following it. A sample tree (numbers indicate order of propositions in the transmitted stream) is: 4 6/5~/ In this kind of argtmlent, every claim precedes its evidence. Thus, w~en the hearer tries to find an interpretation for a current proposition, he must only search prior propositions for a father. The reception algorithm we propose for H is as follows: to interpret the current proposition, NE~, consider the proposition immediately prior to it (call it L for last). I) Try out NEW as evidence for L . 2) If that fails, try NER as evidence for each of L's ancestors, in turn, up to the root of the tree. (NEW's father must exist somewhere on this "right border" of the tree). When the location for NEW is found, a node for it is added to the tree, at the appropriate place. b) 9OST-ORDKR Here, each claim is preceded by its evidence. This is a little more complex for the hearer because he may accept a whole stream of propositions without knowing how they relate to each other until the father for all of them is found. Exa~le: . 9,-~ The reception for H must now make use of the tree for the argument built so far and must keep track of propositions whose interpretation is not yet known, 9ending the appearance of their father. The formal reception algorithm will thus make use of a stack. Consider L to be the top of the stack. To interpret the current proposition NEW do the following- I) See 71 if NEW ~ets evidence from L (i.e. is claim for L). 2al If L is evidence, keep popping off elements of the stack that are also sons and push the resulting tree onto the stack. 2b) Otherwise, push ~ onto the stack. In short, search for sons: when one son is found, all of them can be picked up. Then the father must stack up to De evidence for same future proposition. c) HYBRID Pre-order and post-order are two consistent strategies which the hearer can recognize if he expects the argument to conform to one or the other transmission rules, throughout. But an argument essentially consists of a series of sub-arguments (i.e. a claim plus its evidence). And the Speaker may thus decide to transmit some of these sum-arguments in pre-order, and others in post-order, yielding an overall h~rid argument. Therefore, the hearer must develop a more general processing strategy, to recognize hybrid transmission. The reception algorithm now is a c~mDination of techniques from a) and b). Exam-ple: ,~... 23 ,6~ (EX 3) 45 But there are additional complications to processing in this model - for example, transitive evidence relations. In KX 3, 4 and 5 are evidence for 1 (since 4 and 5 are evidence for 6 and 6 is evidence for i), so they will De attached to I initially. Then, to process 6, H must attach it to i and pick up 4 and 5 as sons. So, the hybrid algorithm involves recovering descendants that may alreaay De linked in the tree. 
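Before the detailed hybrid description that follows, the two pure reception strategies (a) and (b) can be sketched directly. The sketch below is illustrative only: is_evidence(x, y) stands for the assumed evidence oracle (True iff x is evidence for y), the Node class and function names are invented here, and both functions presuppose a well-formed transmission of the corresponding type.

class Node:
    def __init__(self, prop, parent=None):
        self.prop, self.parent, self.children = prop, parent, []

def receive_preorder(props, is_evidence):
    """(a) Pre-order: every claim precedes its evidence. NEW is tried as
    evidence for the last node and then for its ancestors (the right border)."""
    root = Node(props[0])
    last = root
    for prop in props[1:]:
        target = last
        while target is not None and not is_evidence(prop, target.prop):
            target = target.parent                   # climb the right border
        if target is None:
            target = root                            # fallback for ill-formed input
        node = Node(prop, parent=target)
        target.children.append(node)
        last = node
    return root

def receive_postorder(props, is_evidence):
    """(b) Post-order: every claim is preceded by its evidence. Completed
    subtrees wait on a stack until their father (claim) arrives."""
    stack = []
    for prop in props:
        node = Node(prop)
        while stack and is_evidence(stack[-1].prop, prop):
            son = stack.pop()
            son.parent = node
            node.children.insert(0, son)             # keep left-to-right order
        stack.append(node)
    return stack[-1]                                 # the root claim, if well formed

In both cases the oracle is consulted a number of times proportional to the number of propositions, which is the linear-time behaviour established in the appendix.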
Here is a more detailed description of the algorithm: We maintain a dummy node at the top of the tree, for which all nodes are evidence. Consider L to De a pointer into the tree, representing the lowest possible node that can receive more evidence (initially set to dummy). For every node NEN on the input stream do the following: forever do (B0:) if NEW evidence for L then (Sl:) if no sons of L are evidence for NEW then /* just test lastson for evidence */ (BII:) attach NEW below L (Bl2:) set L to NEW exit forever loop (B2:) else (B21:) attach all sons of L which are evidence for NEW below NE~ /* attach lastson; bump ptr. to lastson */ /* back I and keep testing for evidence */ (B22:) attach NE~ below L exit forever loop (B3:) else set L to father(L) end forever loop This hyt)rid model still accounts for only sc~e of many possible argtm~ent configurations. But we claim that it is a good first approximation to a realistic and efficient processing strategy for arguments is general. It captures the argument structure a hearer may expect from a speaker. Some of the restrictions of this model include: (i) importance of the last proposition before NEW in the analysis of NEW; (2) preference for relations with propositions closer to NEW; (3) considering only the last brother in a set of evidence when NEW seeks to relate to prior propositions. Note then that we do not expect to add evidence for a brother or uncle of L - these nodes are closed off, as only the last brother of any particular level is open for further expansion. To determine the appropriateness of this algorithm as a general strategy, we are currently investigating the i~l ications of restricting expected argtnnent structures to this class and the complexity in co~.re/~ension caused Dy other transmission me,hods. Now, the reception algorithms we develop for a), b), and c) can all be shown to ~ork in linear time (the n~r of evidence relations to be ~ested will be proportional to the numDer of nodes in the tree) [see Appendix] but not in real time (can have aDritrarily long c~ains in any suD-argtmlent). Yet hearers process argt~nents well and this, we claim, is because the speaker helps out, providing special clues to the structure. 2.2 LINGUISTIC CLUES Special words and phrases are often used Dy the speaker to suggest the structure of the argument. One main use of clues is to re-direct the hearer to a particular proposition. Phrases like "Let us now return to..." followed Dy a specific indication of a prior topic are often used in this respect. In EX l, if 8 is preceded Dy a clus suggesting its link to i, then the hearer is spared the long chain of trying 8 as evidence for 7, 5 and 3. So, linear time algorithms can become real time with the aid of clues. But clues of re-direction may also occur to maintain poorly structured arguments - i.e. the speaker can re-direct the hearer to parts of the argument that were "closed off" in his processing. In certain cases, expectations are then set up to address intermediary propositions. We are developing a detailed theory of how to process subsequent to re-direction. Another use of clues is to indicate boundaries. In EX 3, if a phrase like "We now consider another set of evidence for (i)...= preceded 4, it would be easier for H to retrieve 4 and 5 as sons to 6 (without checking 3 as well). Explicit ~rases a~out relations between propositions are only one type of clue. There are, in ~ition, Special words and phrases with a function of connectir~ a proposition to some preceding statement. 
These clues aid in the processing of an arg~uent by restricting the possible interpretation of the proposition containing the clue, and hence facilitating the analysis for that proposition. As outlined in section 2.1, the analysis of a proposition involves a constrained search through the list of prior propositions. With these clues, the search is (i) guaranteed to find ~ prior proposition wtlic~ relates to the one with the clue (2) restricted even further due to the semantics of the clue as to the desired relation between the prior and current proposition (e.g. MUSt be son, etc.). We develop a taxonomy of connectives ~ised on the "logical connectors" listed in (Quirk 721, and assign an interpretation rule to each class. Notation: in the following discussion S represents the proposition with the connective clue, and P represents the prior proposition ~nich "connects" to $. 72 Smeary: CATSGORY RELATICN:P to S EXAMPLE parallel brother "Secondly" inference son "As a result" detail father "In particular" summary multiple sons "In conclusion" reformulation son A~D father "In other words" contrast Son OR brother "on the other hand" Remark: The examples in the following discussion are intended to illustrate the processing issues in argument analysis. We are examining several real life examples from various sources (e.g. rhetoric books, letters to the editor, etc.) but these introduce issues in the operation of the evidence oracle, and so are not shown here. i) Parallel: This category includes the most basic connectors like "in addition" as well as lists of clues (e.g. "First, Secondly, Thirdly,..etc."). P must be a brother to S. Since we only have an oracle which tests if A is SON of B, finding a brother must involve locating the crayon father first. EX 4: l)The city is in serious trouble rl\ 2)There are sc~e dangerous fires going 2 4 3)Three separate blazes have broken out ~ 3 4)In addition, a tornado is passing through The parallel category has additional rules for analysis in cases where lists of clues are present. Then, all propositions with clues from the same list must relate. But we note that it is not always a brother relation between these specific propositions. The relation is, in fact, that the brothers are the propositions which serve as claims in each sub-argtm~ent controlled by a list clue. EX 5: l)The city is awful 1 2)First, no one cleans the parks ~\ 3)So the parks are ugly 3 4 4)Then, the roads are ugly, too / \ 5)There's always garbage there 2 5 Here, 2 and 4 contain the clues, but 3 and 4 are brothers. 2)Inference= Here, P will be son for S. EX 6: 2)Peoplel)The firearedeStroyedhomelesshalf the city 12/3 3)As a result, the streets are crow~ed 1 Here, the interpretation for 3 only looks to be father to2. 3)Detail: Here, P will be father to S. EX 7: l)Sharks are not likeable creatures I~ 2)They are unfriendly to human beings 3)In particular, they eat people 3 Here, 3 finds 2 as its father. 4)Summary: We note that some phrases of summary are used in a reformulation sense and would be analyzed according to that category's rules. These are cases where the summarizing is essentially a repeat of a proposition stated earlier. A "summary" suggests that a set of sons are to be found. F~ 8: l)The benches are broken 4 2)The trails are choppy /[~ 3)The trees are dying 1 2 3 4) In stY, the park is a mess But sometimes, )=he "multiple" sons are not brothers of each other. 
EX 9: l)The town is in danger 4 2)Gangs have taken over the stores I 3)The police are out on strike /i\ 4)In stm~, we need protection 2 3 The interpretation rule for summary would follow the general reception algorithm to pick up all sons at the same level. 5)Reformulation: When a clue indicates that S is essentially "equivalent" to some P, P must satisfy the test for both son and father. To represent t/~is relation, we may need an extension to our current tree model (see Section 3 - Future Work). EX 10: l)We need money 2)In other words, we are broke 6)Contrast: This category covers a lot of special phrases with different uses in arguments, we have yet to decide how to optimally record contrastive propositions. For now, we'd say that a proposition which offers contrast to some evidence for a claim is (counter) evidence for that claim, and hence S is son of P. And a proposition which contrasts another directly, without evidence being presented is a (counter) claim, and hence S is a brother to 9. EX II: l)The city's a disaster 1 2)The parks are full of uprooted trees \~ 3)But at least the playgrounds are safe 2 3 Here, 3 is counter evidence for 1 EX 12: 1)The city is dangerous ~5~ 2)The parks have muggings 3)But the city is free of pollution 4 3 1 4)And there are great roads / 5)So, I think the city's great 2 Here 3 and 1 are brothers There are a lot of issues surrounding contrast, some of which we mention briefly here to illustrate. One question is how to determine which proposition is "counter" to the rest of the argument. In EX 12, the proposition with the clue was not the contrastive statement of the argument. So, it is not straightforward to expand our simplified recording of contrast statements to add a "counter" label. Another feature is the expectations set for the future when contrast appears. Sometimes, more evidence is expected, to weigh the argument in favour of one position over another. If these expectations are characterized, future processing may be facilitated. This description of connective clues is intended to illustrate some of the aids available to the hearer to restrict the interpretation of propositions, we are still working on complete descriptions for the interpretation rules. In addition, we intend each class to be distinct, but we are aware that some English phrases have more than one meaning and may thus be used in more than one of the taxonomy's categories. For these cases, the union of possible restrictions may have to be considered. 2.3 IMPLICATIONS OF THIS ANALYSIS DESIC~ Our description of various processing strategies and clue interpretations can be construed as a particular 73 theory of how to process arguments. The hearer expects the speaker to conform to certain tranmnission strategies - i.e. does not expect a random stream of propositions. But, H may be confronted with re-directions in the form of special clues, which he interprets as he finds. And he may limit his searching and testing by interpreting clues suggesting either the kind of relation to search for (evidence for, claim for) or the specific propositions to check. The theory thus proposes a particular selective interpretation process, the techniques are given a formal treatment to illustrate their complexity, and the special markers confronted in analysis are assigned a functional interpretation - to improve the ccm~)lexity of the understanding task. 
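The way clues narrow the hearer's search can be summarized in a small sketch. This is illustrative only: the clue phrases and function names are mine, the matching is deliberately naive string matching, and the category-to-relation table simply transcribes the taxonomy summarized above.

# Sketch of how connective clues restrict the search for a proposition's
# attachment point. Clue phrases and names are illustrative.

CLUE_CATEGORY = {
    "secondly": "parallel", "in addition": "parallel",
    "as a result": "inference",
    "in particular": "detail",
    "in sum": "summary", "in conclusion": "summary",
    "in other words": "reformulation",
    "on the other hand": "contrast", "but": "contrast",
}

# relation the prior proposition P must bear to the clue-marked proposition S
EXPECTED_RELATION = {
    "parallel": {"brother"},
    "inference": {"son"},
    "detail": {"father"},
    "summary": {"sons"},
    "reformulation": {"son", "father"},
    "contrast": {"son", "brother"},
}

def detect_clue(proposition_text):
    text = proposition_text.lower()
    for phrase, category in CLUE_CATEGORY.items():
        if text.startswith(phrase):
            return category
    return None

def candidate_relations(proposition_text):
    """With a clue, the hearer tests only the relations its category allows;
    without one, the general right-border search applies."""
    category = detect_clue(proposition_text)
    if category is None:
        return {"son"}        # default: NEW tried as evidence for prior claims
    return EXPECTED_RELATION[category]

print(candidate_relations("As a result, the streets are crowded"))   # {'son'}
print(candidate_relations("In particular, they eat people"))         # {'father'}

A fuller treatment would, as discussed, also let re-direction clues name the specific prior proposition to return to, overriding the right-border search altogether.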
A note here on the "psychological validity" of our model: we have tried to develop processing strategies for arguments that are consistent with our intuitions on how a hearer would analyze and that function with a realistic complexity. But, we make no claims that this is the way all humans would process. 3. ~ CONSIDERATIONS One area we have not discussed in this paper is that of establishing the evidence relation. For now, the problem is isolated into the "evidence oracle = which performs the necessary semantic processing. In the future, we will give more details on the complexities of this module and its interaction with the general processing strategy described here. There are, as well, several i~provements in processing techniques to consider. Here are some ongoing projects - i) Investigation of other possible argument structures . not included here. The most obvious case to consider is: a claim, both preceded and followed by evidence for it. This is a reasonable tran.maission to expect. We are working on extensions to the hybrid algorit~ to accept these configurations as well. One interesting issue is the necessity for linguistic clues with argument structures of this type - to make sure the hearer can pick up additional evidence and recognize where the next suJo-argument begins. 2) Expanding the existing representation model to handle other complications in arguments. In particular, there a~e several different types of multiple roles for a proposition, which ~Jst all be handled by the theory. These include: (i) Proposition is both claim and evidence. (This is already arx:x:uKxlated in our current tree design, where a node can have father and sons). (ii) Proposition is both claim and evidence for the same proposition - i.e. two "equivalent" propositions in the argument. (iii) Proposition is claim to several other propositions. (Again, currently acceptable as father can have any number of sons). (iv) Proposition (E) is evidence for more than one proposition. If all the claims form an ancestral chain - father, grandfather, great-grandfather, etc. then this is just the transitive evidence relation discussed previously and handled by the current strategy. In other cases, (for example, when the -..laims are brothers) the hearer may not recognize the multiple cole in all possible tranmuissions. For instance, a tranmuission of claiml, E, then claim/ seeus comprehensible. But if the hearer received them in the order: claiml, claim/, then E - would he recover the role of E as evidence for claiml? 3) Trying to characterize the ~,~lexity of various argument configurations. Certain combinations of pre and poet order seem less taxing to the hearer. We are examining the cases where complexity problems arise and linguistic clues become more prevalent. 4. NELATED WORK Alt~.,ugh our research area may be considered largely unexplored (examining a specific kind of conversation (the argument), concentrating on structure, and developing formal descriptions of processing), there are some relevant references to other work. In [Ho~os 8%] Hotels states that "T~e proOl~m of AI is how to control inferencing and oti~er search processes, so that the best answer will be found within the resource limitations." We share this oommittment to designing natural language understanding systams w~ich perform a selective analysis of the input. The actual restrictions on processing differ in various existing syste~ according to the language tasks and the underlying representation scheme. 
In [Grosz 77] focus spaces are used to search for referents to definite noun ~rases (and to solve other linguistic problems). These spaces of objects are arranged to form a hierarchy with an associated visibility lattice, based on the underlying structure of the task of the dialogue. O~r tree representation is also a-'~erarchical structure and the description of propositions eligible to relate to the current one may be viewed as a visibility requirement on that hierarchy. So, the restrictions to processing in both our systems can be described similarly, although the details of the design differ to accommodate our different research areas. In So.bank's work on story understar~ing (e.g. [Schank 75]) snerentyped scripts are used to limit processing. Here, a given proposition is analyzed by tryir~ to fit with expectations for content generated by slota of the script not yet filled. With arguments, we cannot predict future content, so we design expectations that future propositions will have a particular structure with respect to the text so far. These are in fact expectations for coi~erent transmission. Schan~'s expectations for coherence, on the other hand, are coincident with his expectations for content, driven by scripts. Our actual design for restricting analysis is similar in many respects to Hotels' work on coherence relations ( [HobbS 76], [Ho~s78]). In this work, the representation for the text is also a tree, but the connections between nodes are coherence relations - subordinating relations between father and son, and co-ordinating relations between brothers. In C~?~,,on to both designs is the proposal to construct restricted lists of propositions eligible to relate to a current proposition. In our case, the relations between nodes in the tree is quite different (claim, evidence), although the description for the restricted set turns out to be the same - nawely, the right border of the tree. In ~__~Npbs_ ' system, the search for an interpretation is narrowed by proceseing a "goal list" of desired relations to existing propositions. We do not have a goal list to order our search, but merely a list of eligible propositions and an ordering of these 5ased on proxi~ty to the current proposition. But we also furnish some motivation for the construction of the eligible list - naDely, from the bearer's expectations about transmiseion strategies used by the speaker. In addition, Ho~ mentions that a few special words initiate specific goals (for example, "and" suggests temporal succession, parallel or possibly contrast). In our system we also discuss the restrictions to processing furnished by clues but i) we define the corpus of clues more clearly, indicating several types 74 and their associated restrictions and 2) we make clear the relation between restrictions from clues and the general processing strategy - that analysis picks up clues first, and resorts to general techniques otherwise. Furthermore, we show that a) most classes of clues are simply a restriction on the list of eligible propositions proposed for a general processing strategy and b)certain types of clues may override the general restrictions of the eligible list (e.g. re-directing the hearer explicitly). I am gz ~teful to Ray Perrault and their suggestions for this paper. Alex Borgida for BIBLIOGRAPHY [Cohen 80] ; Cohen, R. ; "Understanding Arguments"; Proceedings of CSCSI/SCEIO Conference 1988 [Grosz 77] ; Grosz, B.: "The Representation and Use of Focus in Dialogue Understanding"; SRI Technical Note No. 
151

[Hobbs 76]; Hobbs, J.; "A Computational Approach to Discourse Analysis"; Dept. of Computer Sciences, CUNY Research Report No. 76-2

[Hobbs 78]; Hobbs, J.; "Why is Discourse Coherent?"; SRI International Technical Note No. 176

[Hobbs 80]; Hobbs, J.; "Selective Inferencing"; Proceedings of CSCSI/SCEIO Conference 1980

[Quirk 72]; Quirk, R. et al.; A Grammar of Contemporary English; Longmans Co.; London

[Schank 75]; Schank, R.; "SAM - A Story Understander"; Yale Research Report No. 43

APPENDIX

Complexity arguments:

PRE AND POST ORDER: Any node of the tree is tested as a claim a number of times equal to the number of its sons, plus one more failing test. The total number of tests for claims is therefore Sum over i of (#sons(i) + 1), where i runs over all nodes of the tree, which equals Sum over i of #sons(i), plus n. But the total number of sons is less than the total number of nodes of the tree (no multiple fathers). So the total is less than 2n, i.e., O(n).

HYBRID: We measure the complexity of processing all the nodes in the tree by showing that the number of times the algorithm (see section 2.1 for notation) runs through B1, B2, and B3 in total is O(n).

Hypothesis: No node gets attached to another more than twice.

Proof: Each NEW gets attached once initially, either at B11 or B22. Once attached, it can only be moved once - in B21, if it becomes a son of the current NEW. Once it is moved, it is no longer a son of the current L (since L doesn't change in B2) and can never be a son of L again (since L only goes down the tree in B12, so never to a previously attached node).

Conclusion: all attachments together are O(n).

Now then, B11 + B22 together are only executed O(n) times - they perform initial attachments. And B12 + B21 must thus also be O(n) - i.e., the number of times through branches B1 and B2 together is O(n). Now consider B3: here L goes up the tree. But L can only go up as often as it goes down, and the number of moves down the tree is O(n) as per B12, so B3 is O(n). (Note: the number of evidence tests performed inside the forever loop is also O(n); tests in B0 and B1 add only a constant factor per pass, and the number of tests in B21 (see comment statement) is less than twice the number of attachments made in B21.)
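To connect the appendix argument to something executable, here is an illustrative rendering (in Python; all names invented) of the hybrid reception algorithm of section 2.1, following the B0-B3 labels and the "test only the last son(s)" comments on which the counting argument relies. The evidence oracle here is a toy built from a known answer tree, and is instrumented so the number of evidence tests can be inspected.

class Node:
    def __init__(self, prop):
        self.prop, self.parent, self.children = prop, None, []

def attach(child, parent):
    if child.parent:
        child.parent.children.remove(child)
    child.parent = parent
    parent.children.append(child)

def receive_hybrid(props, is_evidence):
    dummy = Node("DUMMY")            # dummy root: everything counts as evidence for it
    L = dummy                        # lowest node that can still receive evidence
    for prop in props:
        new = Node(prop)
        while True:
            if L is dummy or is_evidence(prop, L.prop):          # B0
                sons = []
                for s in reversed(L.children):                   # test last son(s) only,
                    if is_evidence(s.prop, prop):                # per the B1/B21 comments
                        sons.append(s)
                    else:
                        break
                if not sons:                                     # B1
                    attach(new, L)                               # B11
                    L = new                                      # B12
                else:                                            # B2
                    for s in reversed(sons):                     # B21
                        attach(s, new)
                    attach(new, L)                               # B22
                break
            L = L.parent                                         # B3
    return dummy

# Toy oracle for EX 3 (answer tree: 1 is father of 2, 3, 6; 6 is father of 4, 5;
# transmission order 1 2 3 4 5 6). Evidence is transitive: x is evidence for y
# iff y is an ancestor of x in the answer tree.
parents = {2: 1, 3: 1, 6: 1, 4: 6, 5: 6}

def ancestors(x):
    found = set()
    while x in parents:
        x = parents[x]
        found.add(x)
    return found

calls = [0]
def oracle(x, y):
    calls[0] += 1
    return y in ancestors(x)

def show(node, depth=0):
    print("  " * depth + str(node.prop))
    for child in node.children:
        show(child, depth + 1)

show(receive_hybrid([1, 2, 3, 4, 5, 6], oracle))
print("evidence tests:", calls[0])   # stays proportional to the number of propositions

Running it on the EX 3 transmission reconstructs the intended tree (4 and 5 first attach under 1, then are picked up as sons of 6 via B21), and the printed count of oracle calls stays linear in the number of propositions, as the appendix predicts.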
1981
16
What's Necessary to Hide?: Modeling Action Verbs James F. Alien Com purer Science 1)epartmen t University of Rochester Rochester, NY 14627 Ahstract This paper considers what types of knowledge one must possess in order to reason about actions. Rather than concentrating on how actions are performed, as is done in the problem-solving literature, it examines the set of conditions under which an action can be said to have occurred. In other words, if one is told that action A occurred, what can be inferred about the state of the world? In particular, if the representation can define such conditions, it must have good models of time, belief, and intention. This paper discusses these issues and suggests a formalism in which general actions and events can be defined. Throughout, the action of hiding a book from someone is used as a motivating example. I. Introductio, This paper suggests a formulation of events and actions that seems powerful enough to define a wide range of event and action verbs in English. This problem is interesting for two reasons• The first is that such a model is necessary to express the meaning of many sentences. The second is to analyze the language production and comprehension processes themselves as purposeful action. This was suggested some time ago by Bruce [1975] and Schmidt [1975]. Detailed proposals have been implemented recently for some aspects of language production [Cohen, 1978] and comprehension [Alien. 1979]. As interest in these methods grows (e.g., see [Grosz, 1979; Brachman, 1979]). the inadequacy of existing action models becomes increasingly obvious. The formalism for actions used in most natural language understanding systems is based on case grammar. Each action is represented by a set of assertions about the • semantic roles the noun phrases play with respect to the verb. Such a tbrmalism is a start, but does not explain how to represent what an action actually signifies. If one is told that a certain action occurred, what does one know about how the world changed (or didn't change!). This paper attempts to answer this question by oudining a temporal logic in which the occurrence of actions can be tied to descriptions of the world over time. One possibility for such a mechanism is found in the work on problem-solving systems (e.g. [I:ikes and Nilsson, 197]; Sacerdoti, 1975]), which suggests one common formulation of action. An acuon is a function from one world state to a succeeding world state and is described by a set of prerequisites and effects, or by decomposition into more primitive actions. While this model is extremely useful for modeling physical actions by a single actor, it does not cover a large class of actions describable in I-ngiish. [:or instance, many actions seemingly describe nml-activity (e.g. standing still), or acting in some non- specified manner to preserve a state (e.g. preventing your televismn set from being stolen). Furthermore, many action descriptions appear to be a composition of simpler actions that are simultaneously executed. For instance, "Walking to the store while juggling three bails" seems to be composed of the actions of "walking to the store and "juggling three bails." It is not clear how such an action could be defined from the two simpler actions if we view actions as functions from one state to another. The approach suggested here models events simply as partial descriptions of the world over some Lime interval. Actions are then defined as a subclass of events that involve agents. 
Thus, it is simple to combine two actions into a new action, The new description simply consists of the two simpler descriptions hglding over the same interval The notions of prerequisite, result, and methods of performing actions will not arise in this study. While they are iraportant for reasoning about how to attain goals, they don't play an explicit role in defining when an action can be said to have occurred. To make this point clear, consider the simple action of turning on a light. There are few physical activities that are a necessary part of performing this action, Depending on the context, vastly different patterns or" behavior can be classified as the same action, l;or example, turning on a light usually involves Hipping a light switch, but in some circumstances it may involve tightening the light bulb (in the basement). or hitting the wail (m an old house). Although we have knowledge about how the action can be pertbrmed, this does nol define what the action is. The key defining characteristic of turning on the light seems to be that the agent is performing some activity which will cause the light, which is off when the action starts, to become on when the action ends. The importance of this observation is that we could recognize an observed pattern of activity as "turning on the light" even if we had never seen or thought about that pattern previously. The model described here is in many ways similar to that of Jackendoff [1976]. He provides a classification of event verbs that includes verbs of change (GO verbs) and verbs that assert a state remaining constant over an interval of time (STAY verbs), and defines a representation of action verbs of both typesby introducing the notion of agentive causality and permission. However, Jackendoff does not consider in detail how specific actions might be precisely defined with respect to a world model. The next two sections of this paper will introduce the temporal logic and then define the framework for defining events and actions. To be as precise as possible, I have remained within the notation of the first order predicate calculus• Once the various concepts are precisely defined, the next necessary step in this work is to define a computaUonally feasible representation and inference process, Some of this work has already been done. For example, a computational model of the temporal logic can be found in Allen [198.1]• Other areas axe currently under investigation. 7'7 /" The final section demonstrates the generality of the approach by analyzing the action of hiding a book from someone. In this study, various other important conceptual entities such as belief, intention, and causality are briefly discussed. Finally, a definition of.what it means to hide something is presented using these tools. 2. A Temporal l,ogie Before we can characterize events and actions, we need to specify a temporal logic. The logic described here is based on temporal intervals. Events that appear to refer to a point in time (i.e., finishing a race) are considered to be implicitly referring to another event's beginning or ending. Thus the only time points we will see will be the endpoints of intervals. The logic is a typed first order predicate calculus, in which the terms fall into the following three broad categories: - terms of type TIME-INTERVAL denodng time intervals; terms of type PROPERTY, denoting descriptions that can hold or not hold during a particular time; and terms corresponding to objects in the domain. There are a small number of predicates. 
One of the most important is HOLDS, which asserts that a property holds (i.e., is true) during a time interval. Thus HOLDS(p,t) is true only if property p holds during t. As a subsequent axiom will state, this is intended to mean that p holds at every subinterval of t as well. There is no need to investigate the behavior of HOLDS fully here, but in Allen [forthcoming] various functional forms are defined that can be used within the scope of a HOLDS predicate and that correspond to logical connectives and quantifiers outside the scope of the HOLDS predicate. There is a basic set of mutually exclusive relations that can hold between temporal intervals. Each of these is represented by a predicate in the logic. The most important are:

DURING(t1,t2) -- time interval t1 is fully contained within t2, although they may coincide on their endpoints;
BEFORE(t1,t2) -- time interval t1 is before interval t2, and they do not overlap in any way;
OVERLAP(t1,t2) -- interval t1 starts before t2, and they overlap;
MEETS(t1,t2) -- interval t1 is before interval t2, but there is no interval between them, i.e., t1 ends where t2 starts.

Given these predicates, there is a set of axioms defining their interrelations. For example, there are axioms dealing with the transitivity of the temporal relationships. Also, there is the axiom mentioned previously when the HOLDS predicate was introduced, namely

(A.1) HOLDS(p,t) & DURING(t1,t) => HOLDS(p,t1)

This gives us enough tools to define the notion of action in the next section.

3. Events and Actions

In order to define the role that events and actions play in the logic, the logical form of sentences asserting that an event has occurred must be discussed. Once events have been defined, actions will be defined in terms of them. One suggestion for the logical form is to define for each class of events a property such that the property HOLDS only if the event occurred. This can be discarded immediately, as axiom (A.1) is inappropriate for events: if an event occurred over some time interval T, it does not follow that the event also occurred over all subintervals of T. So we introduce a new type of object in the logic, namely events, and a new predicate OCCUR. By representing events as objects in the logic, we have avoided the difficulties described in Davidson [1967]. Simply giving the logical form of an event is only a small part of the analysis. We must also define for each event the set of conditions that constitute its occurrence. As mentioned in the introduction, there seems to be no restriction on what kind of conditions can be used to define an event, except that they must partially describe the world over some time interval. For example, the event "the ball moving from x to y" could be modeled by a predicate MOVE with four arguments: the object, the source, the goal location, and the move event itself. Thus MOVE(Ball,x,y,m) asserts that m is an event consisting of the ball moving from x to y. We assert that this event occurred over time t by adding the assertion OCCUR(m,t). With these details out of the way, we can now define necessary and sufficient conditions for the event's occurrence. For this simple class of move events, we need an axiom such as:

(forall object,source,goal,t,e) MOVE(object,source,goal,e) & OCCUR(e,t) <=>
    (exists t1,t2) OVERLAP(t1,t) & OVERLAP(t,t2) & BEFORE(t1,t2)
        & HOLDS(at(object,source),t1)
        & HOLDS(at(object,goal),t2)

A simple class of events consists of those that occur only if some property remains constant over a particular interval (cf. Jackendoff's STAY verbs). For example, we may assert in English:

"The ball was in the room during T."
"The ball remained in the room during T."
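Before turning to this second class of events, it may help to see the occurrence conditions for MOVE rendered concretely. The following sketch is purely illustrative and is not part of the paper's formalism: intervals are simulated with numeric endpoints, the world is a small table of properties and the spans over which they hold, and all function names are invented here.

# Illustrative sketch only: HOLDS, the interval relations, and the MOVE
# occurrence conditions simulated over numeric endpoints.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

def during(t1, t2):   # t1 fully contained in t2 (endpoints may coincide)
    return t2.start <= t1.start and t1.end <= t2.end

def before(t1, t2):   # t1 strictly precedes t2, no overlap
    return t1.end < t2.start

def overlap(t1, t2):  # t1 starts before t2 and they overlap
    return t1.start < t2.start < t1.end

def meets(t1, t2):    # t1 ends exactly where t2 starts
    return t1.end == t2.start

def holds(prop, t, world):
    """HOLDS(p,t): p is true throughout t, hence over every subinterval
    (axiom A.1 comes for free with this simulation)."""
    return any(prop == p and during(t, span) for p, span in world)

def move_occurs(obj, source, goal, t, world):
    """Occurrence conditions for MOVE(obj,source,goal,e) over t: some t1
    overlapping the start of t has the object at the source, and some later
    t2 overlapping the end of t has it at the goal."""
    spans = [span for _, span in world]
    return any(overlap(t1, t) and overlap(t, t2) and before(t1, t2)
               and holds(("at", obj, source), t1, world)
               and holds(("at", obj, goal), t2, world)
               for t1 in spans for t2 in spans)

# A toy world: the ball is at x over [0, 4] and at y over [6, 10].
world = [(("at", "ball", "x"), Interval(0, 4)),
         (("at", "ball", "y"), Interval(6, 10))]

print(move_occurs("ball", "x", "y", Interval(3, 8), world))   # True
print(move_occurs("ball", "x", "y", Interval(0, 2), world))   # False

Note that only the conditions of occurrence are checked; nothing is said about how the move was brought about, which is exactly the separation the formalism is after.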
t l ) & HOLDS(at(object, goal), t 2 ) A simple class of events consists of those that occur only if some property remains constant over a particular interval (c£ Jackendoffs STAY verbs). For example, we may assert in l'nglish "The ball was in the room during T.'" "The ball remained in the room during T." 78 t" While these appear to be logically equivalent, they may have very different consequences in a conversation. This formalism supports this difference. The former sentence asserts a proposition, and hence is of the form H O L D S(in( BalI, R oom), T) while the latter sentence describes an event, and hence is of the form REMAIN-IN(Bail, Room, e) & OCCURS(e T). We may capture the logical equivalence of the two with the axiom: O'orall b.r,e,O REMAIN-IN(b,r,e) & OCCUR(nO (=) HOL1)S(in(b.r),O, The problem remains as to how the differences between these logically equivalent formulas arise in context. One possible difference is that the second may lead the reader to believe that it easily might not have been the case. Actions are events that involve an agent in one of two ways. The agent may cause the event or may allow the event (cf. [Jackendoff, 1976]). Corresponding to these two types of agency, there are two predicates, ACAUSE and ALLOW, that take an agent, an event, and an action as arguments. Thus the assertion corresponding to "John moved 13 from S to G" is MO VE(B, G,S, el) & ACA USE(Joh~ el.a1) & OCCUR(al.t) The axiomadzation for ACAUSE and ALLOW is tricky, but Jackendoff provides a reasonable starting set. In this paper, I shall only consider agency by causation further. The most important axiom about causality is (A.2) (forall a,e, act.O ACAUSE(a,e.acO & OCCUR(act, t) => OCCUR(cO For our purposes, one of the most important facts about the ACAUSE relation is that it suggests the possibility of intentionality on the part of the agent. This will be discussed in the next section. Note that in this formalism composition of events and actions is trivial. For example, we can define an action composition function together which produces an action or event that consists of two actions or events occuring simultaneously as follows: (A.3) (forall a,b.t) OCCURS(together(o,b).t) (=) OCCURS(c~O & OCCURS(b.t) 4. What's Necessary to Hide? The remainder of this paper applies the above formalism to the analysis of the action of hiding a book from someone. Along the way, we shall need to introduce some new representational tools for the notions of belief, intention, and causality, The definition of hiding a book should be independent of any method by which the action was performed, for, depending on the context, the actor could hide a book in many different ways. For instance, the actor could - put the book behind a desk, - stand between the book and the other agent while they are in the same room, or - call a friend Y and get her or him to do one of the above. Furthermore, the actor might hide ).he book by simply not doing something s/he intended to do. I:or example, assume Sam is planning to go to lunch with Carole after picking Carole up at Carole's office, if, on the way out of Sam's office, Sam decides not to take his coat because he doesn't want Carole to see it, then Sam has hidden the coat from Carole. Of course, it is crucial here that Sam believed that he normally would have taken the coat. Sam couldn't have hidden his coat by forgetting to bring it. This example brings up a few key points that may not be noticed from the first three examples. First' Sam must have intended to hide the coat. 
Without this intention (i.e., in the forgetting case), no such action occurs. Second, Sam must have believed that it was likely that Carole would see the coat in the future course of events. Finally, Sam must have acted in such a way that he then believed that Carole would not see the coat in the future course of events. Of course, in this case, the action Sam performed was "not bringing the coat," which would normally not be considered an action unless it was intentionally not done. I claim that these three conditions provide a reasonably accurate definition of what it means to hide something. They certainly cover the four examples presented above. As stated previously, however, the definition is rather unsatisfactory, as many extremely difficult concepts, such as belief and intention, were thrown about casually. There is much recent work on models of belief (e.g., [Cohen, 1978; Moore, 1979; Perils, 1981" Haas, 1981]). l have little to add to these efforts, so the reader may assume his or her favorite model. I will assume that belief is a modal operator and is described by a set of axioms along the [iu~ of Hintikka [I962]. The one important thing to notice, though, is that there are two relevant time indices to each belief; namely, the time over which the belief is held, and the time over which the proposition that is believed holds. For example. I might believe ~oda.v that it rained last weekend. This point wiil be crucial in modeling the action of hiding. To introduce some notation, let "A believes (during To) that p holds (during Tp)" be expressed as H O LDS(believes(A. holde(p. Tp)), Tb). 79 The notion of intention is much less understood than the notion of belief. However, let us approximate the statement "A intends (during Ti) that action a happen (during Ta)" by and "A believes (during Ti)that a happen (during Ta)" "A wants (during Ti) that a happen (during Ta)" This is obviously not a philosophically adequate definiuon (e.g., see [Searle, 1980]), but seems sufficient for our present purposes. The notion of wanting indicates that the actor finds the action desirable given the alternatives. This notion appears impossible to axiomatize as wants do not appear to be rational (e.g. Hare []97]]). However, by adding the belief that the action will occur into the notion of intention, we ensure that intentions must be at least as consistent as beliefs. Actions may be performed intentionally or unintentionally. For example, consider the action of breaking a window. Inferring intentionality from observed action is a crucial ability needed in order to communicate and cooperate with other agents. While it is difficult to express a logical connection between action and intention, one can identify pragmatic or plausible inferences that can be used in a computational model (see [Allen, 1979]). With these tools, we can attempt a more precise definition of hiding. The time intervals that will be required are: Th--the time of the hiding event; Ts--the time that Y is expected to see the book; Tbl--the time when X believes Y will see the book during "l's, which must be BEFORE "l'h; Tb3--the time when X believes Y will not see the book during Ts, which must be BEI"ORE or DURING Th and AI"I'I'~R Tbl. We will now define the predicate H I D I.'(agent, observer, object, a~t) which asserts that act is an action of hiding. 
Since it describes an action, we have the simple axiom capturing agency:

(forall agent, observer, object, act) HIDE(agent, observer, object, act) => (exists e) ACAUSE(agent, e, act).

Let us also introduce an event predicate SEE(agent, object, e), which asserts that e is an event consisting of the agent seeing the object. Now we can define HIDE as follows:

(forall ag, obs, o, a, Th) HIDE(ag, obs, o, a) & OCCUR(a, Th) =>
  (exists Ts, Tb1, Tb3, e)
    1) HOLDS(intends(ag, occur(a, Th)), Th)
    2) HOLDS(believes(ag, occur(e, Ts)), Tb1)
    3) HOLDS(believes(ag, ~occur(e, Ts)), Tb3)
  where
    4) SEE(obs, o, e)
  and the intervals Th, Ts, Tb1, Tb3 are related as discussed above.

Condition (4) defines e as a seeing event, and might also need to be within ag's beliefs. This definition is lacking part of our analysis; namely, there is no mention that the agent's beliefs changed because of something s/he did. We can assert that the agent believes (between Tb1 and Tb3) that he or she will do an action (between Tb1 and Th) as follows:

(exists a1, e1, Ta1, Tb2)
    5) ACAUSE(ag, e1, a1)
    6) HOLDS(believes(ag, occur(a1, Ta1)), Tb2)
  where Tb1 < Tb2 < Tb3 and Tb1 < Ta1 < Th.

But this has not captured the notion that belief (6) caused the change in belief from (2) to (3). Since (6) and (3) are true, asserting a logical implication from (6) to (3) would have no force. It is essential that the belief (6) be a key element in the reasoning that leads to belief (3). To capture this we must introduce a notion of causality. This notion differs from ACAUSE in many ways (e.g., see [Taylor, 1966]), but for us the major difference is that, unlike ACAUSE, it suggests no relation to intentionality. While ACAUSE relates an agent to an event, CAUSE relates events to events. The events in question here would be coming to the belief (6), which CAUSES coming to the belief (3).

One can see that much of what it means to hide is captured by the above. In particular, the following can be extracted directly from the definition:
- if you hide something, you intended to hide it, and thus can be held responsible for the action's consequences;
- one cannot hide something if it were not possible that it could be seen, or if it were certain that it would be seen anyway;
- one cannot hide something simply by changing one's mind about whether it will be seen.

In addition, there are many other possibilities related to the temporal order of events. For instance, you can't hide something by performing an action after the hiding is supposed to be done.

Conclusion

I have introduced a representation for events and actions that is based on an interval-based temporal logic. This model is sufficiently powerful to describe events and actions that involve change, as well as those that involve maintaining a state. In addition, the model readily allows the composition and modification of events and actions. In order to demonstrate the power of the model, the action of hiding was examined in detail. This forced the introduction of the notions of belief, intention, and causality. While this paper does not suggest any breakthroughs in representing these three concepts, it does suggest how they should interact with the notions of time, event, and action. At present, this action model is being extended so that reasoning about performing actions can be modeled. This work is along the lines described in [Goldman, 1970].
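As a concrete illustration of the kind of reasoning the model is meant to support, the following is a minimal, hypothetical sketch (not from the original paper) of how the HIDE conditions (1)-(4) and their interval ordering could be checked against a small base of HOLDS and SEE assertions. The tuple encoding of propositions, the interval representation, and the helper names are all assumptions made for this example.

```python
# Hypothetical sketch: checking the HIDE conditions over a tiny fact base.
# Propositions are nested tuples; intervals are (start, end) pairs with a
# deliberately simplified 'before'/'during-or-before' test.

def before(i, j):
    """Interval i ends no later than interval j begins."""
    return i[1] <= j[0]

def during_or_before(i, j):
    """Simplification of 'BEFORE or DURING': i ends no later than j ends."""
    return i[1] <= j[1]

def holds(facts, prop, t):
    return ("HOLDS", prop, t) in facts

def hide_conditions(facts, ag, obs, o, a, Th, Ts, Tb1, Tb3, e):
    """Conditions (1)-(4) of the HIDE definition, plus the interval ordering."""
    return (
        holds(facts, ("intends", ag, ("occur", a, Th)), Th)             # (1)
        and holds(facts, ("believes", ag, ("occur", e, Ts)), Tb1)       # (2)
        and holds(facts, ("believes", ag, ("not-occur", e, Ts)), Tb3)   # (3)
        and ("SEE", obs, o, e) in facts                                 # (4)
        and before(Tb1, Th)                                             # Tb1 BEFORE Th
        and during_or_before(Tb3, Th)                                   # Tb3 BEFORE or DURING Th
        and before(Tb1, Tb3)                                            # Tb3 AFTER Tb1
    )
```

For the coat example, the fact base would record Sam's earlier belief that Carole would see the coat (condition 2) and his later belief, brought about by his own decision not to bring it, that she would not (condition 3).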
Acknowledgements

The author wishes to thank Jerry Feldman, Alan Frisch, Margery Lucas, and Dan Russell for many enlightening comments on previous versions of this paper. This research was supported in part by the National Science Foundation under Grant No. IST-80-12418, and in part by the Office of Naval Research under Grant No. N00014-80-C-0197.

References

Allen, J.F., "A General View of Action and Time," TR, Dept. Computer Science, U. Rochester, forthcoming.
Allen, J.F., "A Plan-Based Approach to Speech Act Recognition," Ph.D. thesis, Dept. Computer Science, U. Toronto, 1979.
Allen, J.F., "Maintaining Knowledge about Temporal Intervals," TR 86, Dept. Computer Science, U. Rochester, January 1981.
Brachman, R.J., "Taxonomy, Descriptions, and Individuals in Natural Language Understanding," in Proc., 17th Annual Meeting of the Assoc'n. for Computational Linguistics, 33-37, UCSD, La Jolla, CA, August 1979.
Bruce, B., "Belief Systems and Language Understanding," Report 2973, Bolt, Beranek & Newman, Inc., 1975.
Cohen, P.R., "On Knowing What to Say: Planning Speech Acts," TR 118, Dept. Computer Science, U. Toronto, 1978.
Davidson, D., "The Logical Form of Action Sentences," in N. Rescher (Ed.), The Logic of Decision and Action. Pittsburgh, PA: U. Pittsburgh Press, 1967.
Fikes, R.E. and N.J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving," Artificial Intelligence 2, 189-205, 1971.
Goldman, A. A Theory of Human Action. New Jersey: Princeton U. Press, 1970.
Grosz, B.J., "Utterance and Objective: Issues in Natural Language Communication," in Proc., 6th IJCAI, 1067-1076, Tokyo, August 1979.
Haas, A., "Sententialism and the Logic of Belief and Action," Ph.D. thesis, Dept. Computer Science, U. Rochester, expected 1981.
Hare, R.M., "Wanting: Some Pitfalls," in Binkley, Bronaugh, and Marras (Eds.), Agent, Action, and Reason. Toronto: U. Toronto Press, 1971.
Hintikka, J. Knowledge and Belief. Ithaca, NY: Cornell U. Press, 1962.
Jackendoff, R., "Toward an Explanatory Semantic Representation," Linguistic Inquiry 7, 1, 89-150, Winter 1976.
Moore, R.C., "Reasoning about Knowledge and Action," Ph.D. thesis, MIT, February 1979.
Perlis, D., "Language, Computation, and Reality," Ph.D. thesis, Dept. Computer Science, U. Rochester, 1981.
Sacerdoti, E.D. A Structure for Plans and Behavior. New York: Elsevier North-Holland, Inc., 1977.
Schank, R. and R. Abelson. Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Lawrence Erlbaum Associates, 1977.
Schmidt, C.F., "Understanding Human Action," in Proc., Theoretical Issues in Natural Language Processing, Cambridge, MA, 1975.
Searle, J.R., "The Intentionality of Intention and Action," Cognitive Science 4, 1, 1980.
Taylor, R. Action and Purpose. New Jersey: Prentice Hall, 1966.
Wilensky, R., "Understanding Goal-Based Stories," Ph.D. thesis, Yale U., 1978.
A Rule-based Conversation Participant Robert E. Frederking Computer Science Department, Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract The problem of modeling human understanding and generation of a coherent dialog is investigated by simulating a conversation participant. The rule-based system currently under development attempts to capture the intuitive concept of "topic" using data structures consisting of declarative representations of the subjects under discussion linked to the utterances and rules that generated them. Scripts, goal trees, and a semantic network are brought to bear by general, domain-independent conversational rules to understand and generate coherent topic transitions and specific output utterances. 1. Rules, topics, and utterances Numerous systems have been proposed to model human use of language in conversation (speech acts[l], MICS[3], Grosz [5]). They have attacked the problem from several different directions. Often an attempt has been made to develop some intersentential analog of syntax, despite the severe problems that grammar-oriented parsers have experienced. The program described in this paper avoids the use of such a grammar, using instead a model of the conversation's topics to provide the necessary connections between utterances. It is similar to the ELI parsing system, developed by Riesbeck and Schank [7], in that it uses relatively small, independent segments of code (or "rules") to decide how to respond to each utterance, given the context of the utterances that have already occurred. The program currently operates in the role of a graduate student discussing qualifier exams, although the rules and control structures are independent of the domain, and do not assume any a priori topic of discussion. The main goals of this project are: • To develop a small number of general rules that manipulate internal models of topics in order to produce a coherent conversation. • To develop a 'representation for these models of topics which will enable the rules to generate responses, control the flow of conversation, and maintain a history of the system's actions during the current conversation. This research was sponsored in part by the Defense Advanced Research Projects Agency (DOO), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory Under Contract F33615-78-C- 1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government. • To integrate information from a semantic network, scripts, dynamic goal trees, and the current conversation in order to allow intelligent action by the rules. The rule-based approach was chosen because it appears to work in a better and more natural way than syntactic pattern matching in the domain of single utterances, even though a grammatical structure can be clearly demonstrated there. If it is awkward to use a grammar for single-sentence analysis, why expect it to work in the larger domain of human discourse,, where there is no obviously demonstrable "syntactic" structure? in place of grammar productions, rules are used which can initiate and close topics, and form utterances based on the input, current topics, and long-term knowledge. This set of rules does not include any domain- specific inferences; instead, these are placed into the semantic network when the situations in which they apply are discussed. 
It is important to realize that a "topic" in the sense used in this paper is not the same thing as the concept of "focus" used in the anaphora and coreference disambiguation literature. There, the idea is to decide which part of a sentence is being focused on (the "topic" of the sentence), so that the system can determine which phrase will be referred to by any future anaphoric references (such as pronouns). In this paper, a topic is a concept, possibly encompassing more than the sentence itself, which is "brought to mind" when a person hears an utterance (the "topic" of a conversation). It is used to decide which utterances can be generated in response to the input utterance, something that the focus of a sentence (by itself) cannot in general do. The topics need to be stored (as opposed to possibly generating them when needed) simply because a topic raised by an input utterance might not be addressed until a more interesting topic has been discussed.

The data structure used to represent a topic is simply an object whose value is a Conceptual Dependency (or CD) [8] description of the topic, with pointers to rules, utterances, and other topics which are causally or temporally related to it, plus an indication of what conversational goal of the program this topic is intended to fulfill. The types of relations represented include: the rule (and any utterances involved) that resulted in the generation of the topic, any utterances generated from the topic, the topics generated before and after this one (if any), and the rule (and utterances) that resulted in the closing of this topic (if it has been closed). Utterances have a similar representation: a CD expression with pointers to the rules, topics, and other utterances to which they are related. This interconnected set of CD expressions is referred to as the topic-utterance graph, a small example of which (without CDs) is illustrated in Figure 1-1. The various pointers allow the program to remember what it has or has not done, and why. Some are used by rules that have already been implemented, while others are provided for rules not yet built (the current rules are described in sections 2.2 and 3).

[Figure 1-1: A topic-utterance graph (utterance nodes U1-U3, topic nodes T1-T4, and a rule node R3).]

2. The computational model

The system under implementation is, as the title says, a rule-based conversation participant. Since language was originally only spoken, and used primarily as an immediate communication device, it is not unreasonable to assume that the mental machinery we wish to model is designed primarily for use in an interactive fashion, such as in dialogue. Thus, it is more natural to model one interacting participant than to try to model an external observer's understanding of the whole interaction.

2.1. Control

One of the nice properties of rule-based systems is that they tend to have simple control structures. In the conversation participant, the rule application routine is simply an initialization followed by a loop in which a CD expression is input, rules are tried until one produces a reply-wait signal, and the output CD is printed. A special token is output to indicate that the conversation is over, causing an exit from the loop. One can view this part of the model as an input/output interface, connecting the data structures that the rules access with the outside world. Control decisions outside of the rules themselves are handled by the agenda structure and the interest-rating routine.
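A rough, hypothetical sketch of this control regime -- the top-level loop together with the bucket-by-bucket agenda scan described in the next paragraphs -- might look as follows. The names (read_cd, write_cd, interest_rating) and the rule/result fields are assumptions made for illustration; this is not the author's MacLisp code.

```python
# Hypothetical sketch of the top-level loop and agenda scan.
# A rule has a test and an action; an action may return candidate utterances
# (implying a reply-wait), output something immediately, or only have side
# effects such as raising a topic.

END_OF_CONVERSATION = "*DONE*"   # special token output by the stop rules

def converse(agenda, state, read_cd, write_cd, interest_rating):
    write_cd("What's new?")                       # initialization
    while True:
        utterance = read_cd()                     # input CD expression
        response = run_agenda(agenda, state, utterance, write_cd, interest_rating)
        if response == END_OF_CONVERSATION:
            break
        write_cd(response)                        # output CD, then wait for a reply

def run_agenda(agenda, state, utterance, write_cd, interest_rating):
    for bucket in agenda:                         # highest-priority bucket first
        candidates = []
        for rule in bucket:
            if rule.test(state, utterance):
                result = rule.action(state, utterance)
                if result is None:                # side effects only (e.g., a new topic)
                    continue
                if result.reply_wait:
                    candidates.extend(result.utterances)
                else:
                    write_cd(result.utterances[0])   # say it now, keep scanning
        if candidates:                            # first bucket wanting a reply wins
            return max(candidates, key=lambda u: interest_rating(state, u))
    return "I don't understand"                   # safety net (RuleK plays this role)
```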
An agenda is essentially a list of lists, with each of the sublists referred to as a "bucket". Each bucket holds the names of one or more rules. The actual firing of rules is not as simple as indicated in the above paragraph, in that all of the rules in a bucket are tested, and allowed to fire if their test clauses are true. After all the rules in a bucket have been tested, if any of them have produced a reply-wait signal, the "best" utterance is chosen for output by the interest-rating routine, and the main loop described above continues. If none have indicated a need to wait, the next bucket is then tried. Thus, the rules in the first bucket are always tried and have highest priority. Priority decreases on a bucket-by-bucket basis down to the last bucket. In a normal agenda, the act of firing is the same as what I am calling the reply-wait signal, but in this system there is an additional twist. It is necessary to have a way to produce two sentences in a row, not necessarily tightly related to each other (such as an interjection followed by a question). Rather than trying to guarantee that all such sets of rules are in single buckets, the rules have been given the ability to fire, produce an utterance, cause it to be output immediately, and not have the agenda stopped, simply by indicating that a reply-wait is not needed. It is also possible for a rule to fire without producing either an utterance or a reply-wait, as is the case for rules that simply create topics, or to produce a list of utterances, which the interest-rater must then look through.

The interest-rating routine determines which of the utterances produced by the rules in a bucket (and not immediately output) is the best, and so should be output. This is done by comparing the proposed utterance to our model of the goals of the speaker, the listener, and the person being discussed. Currently only the goals of the person being discussed are examined, but this will be extended to include the goals of the other two. The comparison involves looking through our model of his goal tree, giving an utterance a higher ranking for matching a more important goal. This is adjusted by a small amount to favor utterances which imply reaching a goal and to disfavor those which imply failing to reach it. Goal trees are stored in long-term memory (see next section).

2.2. Memories

There are three main kinds of memory in this model: working memory, long-term memory, and rule memory. The data structures representing working memory include several global variables plus the topic-utterance graph. The topic-utterance graph has the general form of two doubly-linked lists, one consisting of all utterances input and output (in chronological order) and the other containing the topics (in the order they were generated), with various pointers indicating the relationships between individual topics and utterances. These were detailed in section 1.

Long-term memory is represented as a semantic network [2]. Input utterances which are accepted as true, as well as their immediate inferences, are stored here. The typical semantic network concept has been extended somewhat to include two types of information not usually found there: goal trees and scripts. Goal trees [6, 3] are stored under individual tokens or classes (on the property GOALS) by name. They consist of several CD concepts linked together by SUBGOAL/SUPERGOAL links, with the top SUPERGOAL being the most important goal, and with importance decreasing with distance below the top of the goal tree. Goal trees represent the program's model of a person or organization's goals. Unlike an earlier conversation program [3], in this system they can be changed during the course of a conversation as the program gathers new information about the entities it already knows something about. For example, if the program knows that graduate students want to pass a particular test, and that Frank is a graduate student, and it hears that Frank passed the test, it will create an individual goal tree for Frank, and remove the goal of passing that test. This is done by the routine which stores CDs in the semantic network, whenever a goal is mentioned as the second clause of an inference rule that is being stored. If the rule is stored as true, the first clause of the implication is made a subgoal of the mentioned goal in the actor's goal tree. If the rule is negated, any subgoal matching the first clause is removed from the goal tree.
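The interest-rating computation described above can be sketched as a simple walk down a goal tree. The sketch below is hypothetical; the point scheme (64 for the top goal, minus 4 per level below it, plus or minus 1 for positive or negative implications) is taken from the worked example in section 3, while the data representation and the matches/implication helpers are assumptions.

```python
# Hypothetical sketch of interest-rating against a goal tree.
# A goal tree is given as a list of goals ordered from most to least
# important (top SUPERGOAL first); matches() and implication() stand in
# for the CD pattern matching the real system performs.

def interest_rating(goal_tree, utterance, matches, implication):
    """Score an utterance by the most important goal it touches."""
    for depth, goal in enumerate(goal_tree):
        if matches(utterance, goal):
            score = 64 - 4 * depth                    # 64 for the top goal, -4 per level
            score += implication(utterance, goal)     # +1 reaches goal, -1 fails it, 0 neutral
            return score
    return 0                                          # touches none of this person's goals

def best_utterance(goal_tree, candidates, matches, implication):
    return max(candidates,
               key=lambda u: interest_rating(goal_tree, u, matches, implication))
```

Under this scheme, an utterance that neutrally matches the third goal on the tree scores 64 - 4*2 = 56, and one that negatively matches the top goal scores 63, which reproduces the rankings reported in section 3.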
Goal trees represent the program's model of a person or organization's goals. Unlike an earlier conversation program [3], in this system they can be changed during the course of a conversation as the program gathers new information about the entities it already knows something about. For example, if the program knows that graduate students want to pass a particular test, and that Frank is a graduate student, and it hears that Frank passed the test, it will create an individual goal tree for Frank, and remove the - goal of passing that test. This is clone by the routine which stores CDs in the semantic network, whenever a goal is mentioned as the second clause of an inference rule that is being stored. If the rule is stored as true, the first clause of the implication is made a subgoal of the mentioned goal in the actor's goal tree. If the rule is negated, any subgoal matching the first clause is removed from the goal tree. 84 / r As for scripts [9], these are the model's episodic memory and are stored as tokens in the semantic network, under the class SCRIPT. Each one represents a detailed knowledge of some sequence of events (and states), and can contain instances of other scripts as events. The individual events are represented in CD, and are generally descriptions of steps in a commonly occuring routine, such as going to a restaurant or taking a train trip. In the current context, the main script deals with the various aspects of a graduate student taking a qualifier. There are parameters to a script, called "roles" • in this case, the student, the writers of the exam, the graders, etc. Each role has some required preconditions. For example, any writer must be a professor at this university. There are also postconditions, such as the fact that if the student passes the qual he/she has fulfilled that requirement for the Ph.D. and will be pleased. This post-condition is an example of a domain-dependent inference rule, which is stored in the semantic network when a situation from the domain is discussed. Finally, we have the rule memory. This is just the group of data objects whose names appear in the agenda. Unlike the other data objects, however, rules contain Lisp code, stored in two parts: the TEST and the ACTION. The TEST code is executed whenever the rule is being tried, and determines whether it fires or not. It is thus an indication of when this rule is applicable. (The conditions under which a rule is tried were given in the section on Control, section 2.1). The ACTION code is executed when the rule fires, and returns either a list of utterances (with an implied reply-wait), an utterance with an indication that no reply wait is necessary, or NIL, the standard Lisp symbol for "nothing". The rules can have side effects, such as creating a possible topic and then returning NIL. Although rules are connected into the topic-utterance graph, they are not really considered part of it, since they are a permanent part of the system, and contain Lisp code rather than CO expressions. 3. An example explained A sample of what the present version of the system can do will now be examined. It is written in MacLisp, with utterances input and output in CO. This assumes the existence of programs to map English to CO and CD to English, both of which have been previously done to a degree. The agenda currently contains six rules. The two in the highest priority bucket stop the conversation if the other person says "goodbye" or leaves (Rule3-3 and Rule3-4). 
They are there to test the control of the system, and will have to be made more sophisticated (i.e., they should try to keep up the conversation if important active topics remain). The three rules in the next bucket are the heart of the system at its current level of development. The first two raise topics to request missing information. The first (Rule1) asks about missing pre-conditions for a script instance, such as when someone who is not known to be a student takes a qualifier. The second (Rule2) asks about incompletely specified post- conditions, such as.the actual project that someone must do if they get a remedial. At this university, a remedial is a conditional pass, where the student must complete a project in the same area as the qual in order to complete this degree recluirement; there are four quals in the curriculum. The third rule in this bucket (Rule4) generates questions from topics that are open requests for information, and is illustrated in Figure 3-1. RULE4 TEST: (FOR-EACH TOPICS (AND (EQUAL 'REQINFO (GET X 'CPURPOSE)) (NULL (GET X 'CLOSEDBY)))) ACTION: (MAPCAN '(LAMBDA (X) (PROG (TMP) (RETURN (COND ((SETQ TMP (QUESTIONIZE (GET- HYPO ( EVAL X)))) (MAPCAN '(LAMBDA (Y) (COND (Y (LIST (UTTER Y (LIST X)))))) TMP)))))) TEST-RESULT). Test: Are there any topics which are requests for information which have not been answered? Action: Retrieve the hypothetical part, form all "necessary" questions, and offer them as utterances. Figure 3-1 : Rule4 The last bucket in the agenda simply has a rule which says "1 don't understand" in response to things that none of the previous rules generated a response to (RuleK). This serves as a safety net for the control structure, so it does not have to worry about what to do if no response is generated. Now let us look at how the program handles an actual conversation fragment. The program always begins by asking "What's new?", to which (this time) it gets the reply, "Frank got a remedial on his hardware qual." The CO form for this is shown in Figure 3-2 (the program currently assumes that the person it is talking to is a student it knows named John). The CD version is an instance of the qual script, with Frank, hardware, and a remedial being the taker, area, and result, respectively. U0002 ((< = > ($QUAL &AREA (=HARDWARE*) &TAKER ('FRANK') &RESULT ('REMEDIAL')))) (ISA ('UTTERANCE*) PERSON "JOHN" PRED UTrS) Figure 3-2." First input utterance When the rules examine this, five topics are raised, one due to the pre-condition that he has not passed the qual before (by Rule1), and four due to various partially specified post- conditions (by Rule2): • If Frank was confident, he will be unhappy. • If he was not confident, he will be content. • He has to do a project. We don't know what. • If he has completed his project, he might be able to graduate. The system only asks about things it does not know. In this case, it knows that Frank is a student, so it does not ask aJoout 85 that. As an example, the topic that asks whether he is content is illustrated in Figure 3-3. 
T0005 ((CON ((< = > ($QUAL &AREA ('HARDWARE') &TAKER ('FRANK') &RESULT ('REMEDIAL')))) LEADTO ((CON ((ACTOR ('FRANK') IS ('CONFIDENCE" VAL (> 0))) MOP ('NEG" "HYPO')) LEADTO ((ACTOR ('FRANK') IS ('HAPPINESS" VAL (0))))) MOP ('HYPO')))) (INITIATED (U0013) SUCC T0009 CPURPOSE REQINFO INITIATEDBY (RULE2 U0002) ISA ('TOPIC') PRED T0004) Figure 3-3: A sample topic in detail Along with raising these topics, the rules store the utterance and script post-inferences in the semantic network, under all the nodes mentioned in them. The following have been stored under Frank by this point: • Frank got a remedial on his hardware qual. • If he was confident, he'll be unhappy. • If he was not confident, he'll be content. • Passing the hardware clual will not contribute to his graduating. • He has a hardware project to do. • Finishing his hardware project will contribute to his graduating. While these were being stored, Frank's goal tree was altered. This occurred because two of the post-inferences are themselves inference rules that affect whether he will graduate, and graduating is already assumed to be a goal of any student. Thus when the first is stored, a new goal tree is created for Frank (since his interests were represented before by the Student goal tree), and the goal of passing the hardware clual is removed. When 'the second is stored, the goal of finishing the project is added below that of graduating on Frank's tree. These goal trees are illustrated in Figures 3-4 and 3-5. ((ACTOR ('STUDENT*) IS (*HAPPINESS" VAL (5)))) ~ Subgoal ((< = > ($GRAD &ACTOR ('STUDENT') &SCHOOL ("CMU°)))) ~ Subgoal ((< = > ($QUAL &TAKER ('STUDENT') &AREA ('HARDWARE') &RESULT ('PASSED=)))) Figure 3.4: A student's goal tree ((ACTOR ('FRANK') IS ('HAPPINESS" VAL (5)))) ~ Subgoal ((< = > ($GRAD &ACTOR (~'FRANK') &SCHOOL ('CMU')))) ~ Subgoal ((< = > ($PROJECT &STUDENT ('FRANK') &AREA ('HARDWARE') &RESULT ('COMPLETED'))) MOP ('HYPO') TIME (> "NOW')) Figure 3-5: Frank's new goal tree At this point, six utterances are generated by Rule4. They are given in Figure 3-6. Three are generated from the first topic, one iS generated from each of the next three topics, and none is generated from the last topic. The interest rating routine now compares these utterances to Frank's goals, and picks the most interesting one. Because of the new goal tree, the last three utterances match none of Frank's goals, and receive zero ratings. The first one matches his third goal in a neutral way, and receives a rating of 56 (an utterance receives 64 points for the top goal, minus 4 for each level below top, plus or minus one for positive/negative implications. These numbers are, of course, arbitrary, as long as ratings from different goals do not overlap). The second one matches his top goal in a neutral way, and receives 64. Finally, the third one matches his top goal in a negative way, and receives 63. Therefore, the second cluestion gets uttered, and ends uP with the links shown in Figure 3-7. The other generated utterances are discarded, possibly to be regenerated later, if their topics are still open. ((< = > ($PROJECT &STUDENT ('FRANK •) &AREA ('HARDWARE') &BODY ('?•)))) What project does he have to do? ((ACTOR ('FRANK') IS ('HAPPINESS" VAL (0))) MOO ('?')) Is he content?. ((ACTOR ('FRANK') IS ('HAPPINESS • VAL (-3))) MOD ('?')) IS he unhappy?. ((< = > ($QUAL &TAKER ('FRANK') &AREA ('HARDWARE'))) MOD ('?" "NEG')) Hadn't he taken it before? 
((< = > ($QUAL &TAKER ('FRANK') &AREA (" HARDWARE ") &RESULT ( • CANCELLED'))) MOO ('?')) Had it been cancelled on him before? ((< = > ($QUAL &TAKER ('FRANK') &AREA ('HARDWARE') &RESULT ('FAILED'))) MOD ('?°)) Had he failed it before? Figu re 3.6: The six possible utterances generated 4. Other work, future work Two other approaches used in modelling conversation are task-oriented and speech acts based systems. Both of these methodologies have their merits, but neither attacks all the same aspects of the problem that this system does. Task- 86 U0013 ((ACTOR ('FRANK') IS (*HAPPINESS* VAL (0))) MOP (*?°)) (PRED UO002 ISA (*UTTERANCE*) PERSON "ME* INTEREST.REASON (GO006) INTEREST 64 INITIATEDBY (RULE4 TO005)) Figu re 3-7: System's response to first utterance oriented systems [5] operate in the context of some fixed task which both speakers are trying to accomplish. Because of this, they can infer the topics that are likely to be discussed from the semantic structure of the task. For example, a task. oriented system talking about qualifiers would use the knowledge of how to be a student in order tO talk about those things relevant to passing qualifiers (simulating a very studious student). It would not usually ask a question like "Is Frank content?.", because that does not matter from a practical point of view. Speech acts based systems (such as [1]) try to reason about the plans that the actors in the conversation are trying to execute, viewing each utterance as an operator on the environment. Consequently, they are concerned mostly about what people mean when they use indirect speech acts (such as using "It's cold in here" to say "Close the window") and are not as concerned about trying to say interesting things as this system is. Another way to took at the two kinds of systems is that speech acts systems reason about the actors' plans and assume fixed goals, whereas this system reasons primarily about their goals. As for related work, ELI (the language analyzer mentioned in section 1) and this system (when fully developed) could theoretically be merged into a single conversation system, with some rules working on mapping English into CD, and others using the CD to decide what responses to generate. In fact, there are situations in which one needs to make use of both kinds of information (such as when a phrase signals a topic shift: "On the other hand..."). One of the possible directions for future work is the incorporation and integration of a rule-based parser into the system, along with some form of rule-based English generation. Another related system, MICS [3], had research goals and a set of knowledge sources somewhat .similar to this system's, but it differed primarily in that it could not alter its goal trees during a conversation, nor did it have explicit data structures for representing topics (the selection of topics was built into the interpreter). The main results of this research so far have been the topic- utterance graph and dynamic goal trees. Although some way of holding the intersentential information was obviously needed, no precise form was postulated initially. The current structure was invented after working with an earlier set of rules to discover the most useful form the topics could take. Similarly, the idea that a changing view of someone else's goals should be used to control the course of the conversation arose during work on producing the interest- rating routine. The current system is, of course, by no means a complete model of human discourse. 
More rules need to be developed, and the current ones need to be refined. In addition to implementing more rules and incorporating a parser, possible areas for future work include replacing the interest-rater with a second agenda (containing interest-determining rules), changing scripts and testing whether the rules are truly independent of the subject matter, trying to make the system work with several scripts at once (as SAM [4] does), and improving the semantic network to handle the well-known problems which may arise.

References

[1] Allen, J. F. and Perrault, C. R. Analyzing Intention in Utterances. Artificial Intelligence 15(3):143-178, December, 1980.
[2] Brachman, R. J. On the Epistemological Status of Semantic Networks. In Findler, N. V. (editor), Associative Networks: Representation and Use of Knowledge by Computers, chapter 1 in particular. Academic Press, New York, 1979.
[3] Carbonell, J. G. Subjective Understanding: Computer Models of Belief Systems. PhD thesis, Yale University, January, 1979. Computer Science Research Report #150.
[4] Cullingford, R. E. Script Application: Computer Understanding of Newspaper Stories. PhD thesis, Yale University, January, 1978. Computer Science Research Report #116.
[5] Grosz, B. J. The Representation and Use of Focus in Dialogue Understanding. Technical Report 151, Stanford Research Institute, July, 1977.
[6] Newell, A. and Simon, H. A. Human Problem Solving. Prentice Hall, Englewood Cliffs, N. J., 1972, chapter 8.
[7] Riesbeck, C. and Schank, R. C. Comprehension by Computer: Expectation Based Analysis of Sentences in Context. Technical Report 78, Department of Computer Science, Yale University, 1976.
[8] Schank, R. C. Conceptual Information Processing. North-Holland, 1975, chapter 3.
[9] Schank, R. C. and Abelson, R. Scripts, Plans, Goals and Understanding. Erlbaum, 1977, chapter 3.
SEARCH AND INFERENCE STRATEGIES IN PRONOUN RESOLUTION : AN E~ERIMENTAL STUDY Kate Ehrlich Department of Psychology UnlversiCy of Massachusetts Amherst, ~ 01003 The qusstlun of how people resolve pronouns has the various factors combine. been of interest to language theorists for a long time because so much of what goes on when people find referents for pronouns seems to lie at the heart of comprehension. However, despite the relevance of pro- nouns for comprehension and language cheorT, the processes chat contribute to pronoun resolution have proved notoriously difficult Co pin down. Part of the difficulty arises from the wide range of fac=ors that can affect which antecedent noun phrase in a tex~ is usderstood to be co-referentlal with a particular pronoun. These factors can range from simple number/gender agreement through selectional rescrlc~ions co quite complex "knowledge chat has been acquired from the CaxC (see Webber, (1978) for a neatly illustrated description of many of these factors). Research in psychology, artificial intelligence a~d linguistics has gone a long way toward identifying some of these factors and their role in pronoun resolu~ion. For instance, in psychology, research carried ouC by Caramazza =-d his colleagues (Caramazza et el, 1977) as well as research chat I have dune (Ehrllch, 1980), has demuns~rated that number/sender agreement really c=- fumcciun to constrain the choice of referent in a way Chat signiflcantly facilltaCes processing. Within an AI framework, there has been some very interesting work carried out by Sidner (1977) m~d Grosz (1977) thac seeks to identify the current topic of a Cex1: and co show Chat knowledge of the topic can considerably sillily pronoun reso- lutlon. It is important that people are able co select appropriate referents for pronouns and co have some basis for that decision. The research discussed so far has mentioned some of the factors Chac contribute co chose decisiuns. However, part of ~he problem of really understanding how people resolve pronouns is knowing how Certainly it is important a~d useful to polnc to a particular factor as concri- butlng to a reference decision, but in many texts more than one of these factors will be available to a reader or listener. One problem for the theorist is then to explaln which factor predominates in the decision as well as to describe the scheduling of evaluaclon pro- cedures. If it could be shown that there was a stricc ordering in which tests were applied, say, number/gender agreement followed by selectionai restrictions followed by inference procedures, pronoun resoluclon may be simp- ler to explain. At our present level of knowledge it is dlfficulc to discern ordering principles chat have any degree of generality. For Instance, for every example where the topic seems to determine choice, a sinLilar example c~- often be found where the more recent ante- cedent is preferred over the one that forms part of the topic. Moreover, even this claim begs the quesclon of how the coplc can be identified unambiguously. A different approach is possible. The process of assigning a referent Co a pronoun c~m be viewed as utilizing two kinds of strategies. One strategy is con- cerned with selecting the best referent from amongst the candidates available. The ocher strategy is concerned with searching through memory for the candidates. These two types of strategy, which will be referred to msem¢-lically as inference and search strategies, have different kinds of characteristics. 
A search strategy dictates the order in which candldaces are evaluated, but has no machinery for carrying out the evaluation. The inference strategy helps to set up the represen- taclon of the information in the cexC agains c which can- dldacas can be evaluated, but has ~o way of finding the c~aldidates. ~n the rest of this paper, she way these straCegles ~ighc interact will be explored and the results of two studies will be reported that bear on 89 the issues. One possible search strategy is ~o examine can- didates serially beginning with the one menKioned most recently and working back through the text. This strategy makes some sense because, as Hobbs (1978) has pointed out, most pronouns co-refer with antecedents Chat were menr.laned within the last few senuences. Thus, a serial search s~rategy provides a principled way of rescric~Lng how a text is searched. Moreover, there is some evidence fro~ psychological research ~hat it takes longer to resolve pronouns when the antecedent wlch which the pronotn~ co-refers is far rather than near the pronoun (e.g. Clark & $engul, 1979; SprlnEston, 1975). Although such distance effects have been used to argue for differences in memory reErieval, wlCh the nearer antecedents bein 8 easier to retrieve Ch~ the further ones, none of the reported data rule out a serial search strategy. AS argued earlier, a search s~rar~Ey alone cannot aecoun~ for pronoun resoluLian because it lacks any machinery for evaluation. There are, however, many kinds of informa~io~ tha~ people ~ bring to bear when evaluating c~dida~es and some of these were discussed earlier. A c~on method is to decide between alder- native candidates on ~he basis of information gained through inferences. Inference is a rather u~iqui~ous and often ill-deflned no~ion, and, although it is beyond the scope of this paper to clarify the concept, it is worth no~ing ~hat Chore are (at leas~) ~wo kinds of inference chat play a role in anaphora generally. One kind which T will call 'lexlcal' inferences are. drawn to establish Chat t~o different linguls~ic expressions refer ~o ~he same entity. For insnance, in the follow- ing pair of sentences from Garrod and Sanford (1977): (I) A bus came roaring round the corner The vehicle nearly flattened a pedes~rlan a 'lexlcal' inference esuabllshes that ~he particular vehicle mentluned in ~he second sentence is in fact a bus. Tnferences can also be drawn to support the selection of one referent over another. In a sentence such as : (2) John sold a car to Fred because he needed it a series of inferences based in part an out knowledge of selling a~d needing, supports ~he selection of Fred rather ~h=m John as referent for the pronoun "he". In the experiments to be reported, it was 'lexical' inferences ra~her ~han the oCher kind that were mani- pulated. Subjects in ~he experiment were asked to read texts such as the a~e given below: (3) Fred was outside all day John was inside all day a) He had a sleep inside after lunch b) He had a sleep in his room after lunch and then immedla~ely after, answer a question such as '~dho had a sleep after lunchY" Chat was designed to elicit the referent of the pranou~ in ~he las~ sentence. Two factors were independently varied. The antecedent could be near or far from the pronoun, ~he lacier affected by switching the order of the first £wo sen- ~ences. The second factor was whether a 'bridg~Ing' inference had to be drw~n ~o es~chllsh co-reference bed, sen part of the predlca~e of the lasc sentence and ~he target sentence. 
The ~o versions, (a) no inference and (b) inference, are shown as alternative ~hird sen- canoes in example (3) -hove. The principal measures were ~he Lime to answer ~he question and ~he accuracy of ~he respunse. The experi-~ent addresses ~wo critical issues. One is whether ~he 'lewical' inference is drEdn as part of the evaluaLion procedure, or, whether it is drawn in- dependently of Cha~ process. The o~her issue concerns the search sura~eEy itself: do subjects examine can- dlda~es serially, and, if so, do they s~ill use oCher criteria to reject the first canal/dace and choose the second? Two dlstincc models of processing can be con- s~rucced from a conslderarion of Chess issues. In the case where inferences are triggered by the need ~o 9O evaluate a candidate, any effect due to extra processing should be unaffected by whether the antecedent ks near or far from the pronoun. In either case the inference will be drawn in response to r/Re need to decide on the acceptability of the candidate. In the second model, the inference is triggered by the anaphoric expression, e.g. "in his room" An the third sentence, and the need to relate chat expression to the location "inside" men- tioned in a previous sentence. The inference is ex- pected to take a certain amotmt of time to be drawn (cf. Kintsch, 1974). According to the second model, one would expect that in cases where the antecedent is near the pronoun, there will be some effect due to inference because the process may not be completed in time to answer the question. When the antecedent is far from the pronoun, however, the inference process will be completed and hence no effect of inference should still be detected. The two models assume rationality on the part of the subjects; that is, they assume that subjects will accurately select the further antecedent where appropriate even though recency would predict selecr.lon of the first candidate that is evaluated. If this assumption ks valid, subjects should select the far antecedent where appropriate mere often than the (erroneous) near candidate. The results of the experiment, shown An Table 1, support the second model; ' lexlcal' inferences are drawn only once and in response to an anaphoric expres- sion. The data also provide evidence of a serial search strategy by showing that there are more errors and longer latencles associated with far rather than near antecedents. The data further show that even when the correct choice is far from the pronoun, subjects will choose it in preference to ~he nearer condidate, thus demonstrating that a serial search strategy alone can- not predict the choice of referent. The inferences that subjects had to draw in this experiment concerned simple lexlcal relations. The increase in latency due to having drawn such an infer- ence supports the resul~s of earlier studies, par- tlcularly those of Garrod and Sanford (1977). Whac the present study fails to do, however, is to determine whether that inference ks drawn spontaneously, while reading. Previous research (e.g., ~intsch, 1974, Garrod ald Sanford, 1977) has shown ~hat inferences are more likely to be drawn while reading ~han at a response stage. It was thus of some interest to know when ~he lexical inferences in ~he present study were drawn. This issue was examined by modifying the previous ex- periment to include both an additional measure of read- ing time and a 1.5s delay between presentation and test. 
The latter modification is important since if subjects are drawing inferences while reading, ~he process may not be completed by the time the question is asked i~mnedlately after presentation. The introduction of a delay also allows for a further test of the two pro- ceasing modeled outlined earlier. If indeed 'lexlcal' inferences are drawn to establish co-reference between anaphoric expressions rather than to determine pro- nominal reference, as the previous experiment indicated, then there should be an effect of inference on reading ~ime but not at response when there is a delay, because by response ~he inference should have been dr~m. The data were consistent with this hypothesis. However, what also emerged from the second study was that only some of ~he passages seemed to elicit inferences at reading; the number of passages was increased in the second experiment ro corn%tar possible repetition effects. In fact, for half the passages subjects res- ponded by saying there was no answer. An example of such a passage is given below: (4) Jill had a newspaper in the living-room Ann had a book in the living-room She read some chemistry An the evening It was also the case for these passages that the in- ferences did not seem to be drawn while reading but rather in response to the question. There is some doubt here about cause and effect, nevertheless, the 91 observation raises some in~eresclng questions con- cerning wha~ triggers an inference to be drawn. One answer, supplied by Garrod & Sanford in ~heir experi- ment.s, is thac a relation baleen e~cpressioas muse someh~ be perceived before an inference is drawn to de~e~-mlne ~e nature of ~he relation. I~n o~her words, people do not draw inferences randomly to relate lln- 8uisuic expressions. Thus, whereas Garrod & $anford found ~ha~ subjects would infer co-reference between "bus" and "vehicle" in exa~le (i), they failed to make that connection, qui~ rightly, in a slnuLlar passage shown below: (5) A bus came roaring round the corner It nearly smashed some vehicles What kinds of strategies do readers adop~ when they search ~heir memory to find plausible referents for pronouns? Resul~s of che experiments reported here point ~o a strategy in which an~ities are examined serially from ~he pronoun. The purpose of a serial search strategy is to provide a principled we7 in which readers can ex"rn'Ine ~ho~e entities they have stored in mmory, for ~heir appropriateness as ~he referent of a particular prono ~-~. The strategy is ~hus unnecessary when there is only one emr/~y in memory by vlr~ue of sim~le criteria such as humor and gender agreement wi~h ~he pronoun. What cons~.Itutes 'simple' criteria is, of course, an open question; che answer, however, will materially affect ~he applicability of ~he search s~rategy. The ~t important part of reference resolution is, however, deciding on the referent. A serial search strategy has no machinery for evaluating candidates, i~ can only direct ~he order in which candidates are examined. The process of selecting a plausible referent depends on ~he inferences a reader has drawn while ~he ~ext is read. Thus, when subjects found i~ hard ~o selec~ a referent at all ~hey also failed to draw m~my inferences while ~hey read ~he ~ext. Moreover, because ~he inferences for ~hese passa8es did seem to be drawn in response to a question ellci~Ing ~he referent, ~he i,~llcarAon is that inferences for che clearer material are generally drawn spontaneously and before a specific need for ~he informar.lon arises. 
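The two-component picture that emerges from this discussion -- a serial search back from the pronoun, feeding an evaluation step that depends on inferences already drawn during reading -- can be summarized in procedural form. The sketch below is a hypothetical illustration of that division of labor, not a model implemented in the studies; the agreement filter and the plausibility test are stand-ins for the "simple criteria" and inference-based evaluation discussed above.

```python
# Hypothetical sketch of the two-strategy account of pronoun resolution:
# a serial search back through antecedents, plus an inference-based
# evaluation of each candidate.

def resolve_pronoun(pronoun, antecedents, agrees, plausible):
    """antecedents: entities in order of mention (most recent last).
    agrees: simple number/gender agreement with the pronoun.
    plausible: evaluation against inferences drawn while reading the text."""
    candidates = [a for a in antecedents if agrees(pronoun, a)]
    if len(candidates) == 1:           # simple criteria settle it; no search needed
        return candidates[0]
    for a in reversed(candidates):     # serial search, most recent first
        if plausible(pronoun, a):      # evaluation may reject the nearer candidate
            return a
    return None                        # no referent selected (slower, more error-prone)
```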
One can conjecture from these data that the selection of plausible referents is dependent on how well a reader has understood the preceding text. If inferences are not drawn until a specific need arises, such as finding a referent, then it may be too late to select a referent easily or accurately. Thus, reference can also be viewed in terms of what a text makes available for anaphoric reference (cf. Webber, 1978). The picture of pronoun resolution that emerges from the studies reported here is one in which effects of distance between the pronoun and its antecedent may play some role, not as a predictor of pronominal reference as has often been thought, but as part of a search strategy. There certainly are cases where nearer antecedents seem to be preferred over ones further back in the text; however, it is more profitable to look to concepts such as foregrounding (cf. Chafe, 1974) rather than simple recency for explanations of the preference. It is also of some interest to have shown that inferences may contribute to pronoun resolution but are drawn for other reasons.

REFERENCES

Caramazza, A., Grober, E., Garvey, C. and Yates, J. (1977). Comprehension of anaphoric pronouns. Journal of Verbal Learning and Verbal Behavior, 16, 601-9.
Chafe, W.L. (1974). Language and consciousness. Language, 50, 111-133.
Clark, H.H., and Sengul, C.J. (1979). In search of referents for nouns and pronouns. Memory and Cognition, 7, 35-41.
Ehrlich, K. (1980). Comprehension of pronouns. Quarterly Journal of Experimental Psychology, 32, 247-.
Garrod, S. and Sanford, A.J. (1977). Interpreting anaphoric relations: the integration of semantic information while reading. Journal of Verbal Learning and Verbal Behavior, 16, 77-90.
Grosz, B.J. (1977). The representation and use of focus in a system for understanding dialogs. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Cambridge: MIT.
Hobbs, J.R. (1978). Resolving pronoun references. Lingua, 44, 311-338.
Kintsch, W. (1974). The representation of meaning in memory. Potomac, Md: Erlbaum.
Sidner, C. (1977). Levels of complexity in discourse for anaphora disambiguation and speech act interpretation. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Cambridge: MIT.
Springston, F.J. (1975). Some cognitive aspects of presupposed coreferential anaphora. Unpublished doctoral dissertation, Stanford University.
Webber, B.L. (1978). A formal approach to discourse anaphora. BBN report no. 3761. Cambridge, Mass: Bolt, Beranek and Newman, Inc.

TABLE 1
Percent correct responses (P.C.) and mean response times (R.T.)

                      Inference condition
Distance      No inference          Inference
              R.T.      P.C.        R.T.      P.C.
Near          1.32      95%         1.42      87%
Far           1.56      72%         1.56      70%
COM PUTATIONAL ('Obl PLEXITY AND LEXICAL FUNCTIONAL GRAMMAR Robert C. Berwick MIT Artificial Intelligence Laboratory, Cambridge, MA 1. INTRODUCTION An important goal of ntodent linguistic theory is to characterize as narrowly as possible the class of natural !anguaooes. An adequate linguistic theory should be broad enough to cover observed variation iu human languages, and yet narrow enough to account for what might be dubbed "cognitive demands" -- among these, perhaps, the demands of lcarnability and pars,ability. If cognitive demands are to carry any real theoretical weight, then presumably a language may be a (theoretically) pos~ible human language, and yet be "inaccessible" because it is not leanmble or pa~able. Formal results along these lines have already been obtained for certain kinds of'rransformational Generative Grammars: for example, Peters and Ritchie [I] showed that Aspeel~-style unrest~ted transtbrmational grammars can generate any recursively cnumerablc set: while Rounds (2] [31 extended this work by demonstrating that modestly r~tricted transformational grammar~ (TGs) can generate languages whose recognition time is provhbly expm~cntial. (In Rounds" proof, transformatiocs are subject to a "terminal length non-decreasing" condition, as suggested by Peters and Myhill.) Thus, in the worst case TGs generate languages whose recognition is widely recognized to be computatiofrally intrdctable. Whether this "worst case" complexiw analysis has any real import for actual linguistic study has been the subject of ~me debate (for discussion, see Chomsky [4l; Berwiek and Weinbcrg [5]). Without resolving that cuntroversy here howeser, one thin-g- can be said: to make TGs cmciendy parsable one might provide con~train~ For instance, these additional s'~'ictutes could be roughly of the sort advocated in Marcus' work on patsinB [6] -- constraints specifying that TG-based languages must haw parsers that meet certain "lecality conditions". The Marcus' constraints apparently amount to an extension of Knuth's l.,R(k) locality condition [7] to a (restricted) version of a two-stack deterministic push-down automaton. (The need tbr LR(k)-like restrictions in order to ensure efficient processability was also recognized by Rounds [21.) Recently, a new theory of grammar has been advanced with the explictiy stated aim of meeting the dual demands of tearnability and pa~ability - the Lexical Functional Grammars (LFGs) of Bresnan [!~ I. The theory of l.exical Functional Grammars is claimed to have all the dc~riptive merits of transformational grammar, but none of its compotational unruliness, In t.FG, there are no transformations (as classically described); the work tbrmerly ascribed to transformations such as "passive" is shouldered by information stored in Ibxical entries associated with lexical items. The climmation of transformational power naturally gives rise to the hope that a lexically-based system would be computationally simpler than a transformational one. An interesting question then is to determine, as has already been done for the case of certain brands of transformational grammar, just what the "worg case" conlputational complexity for the recognition of LFG languages is. 
If the recognition time complexity for languages generated by the basic LFG theory can be as complex as that for languages generated by a modestly restricted transformational system, then presumably LFG will also have to add additional constraints, beyond those provided in its basic theory, in order to ensure efficient parsability.

The main result of this paper is to show that certain Lexical Functional Grammars can generate languages whose recognition time is very likely computationally intractable, at least according to our current understanding of what is or is not rapidly solvable. Briefly, the demonstration proceeds by showing how a problem that is widely conjectured to be computationally difficult -- namely, whether there exists an assignment of 1's and 0's (or "T"s and "F"s) to the literals of a Boolean formula in conjunctive normal form that makes the formula evaluate to "1" (or "true") -- can be re-expressed as the problem of recognizing whether a particular string is or is not a member of the language generated by a certain lexical functional grammar. This "reduction" shows that in the worst case the recognition of LFG languages can be just as hard as the original Boolean satisfiability problem. Since it is widely conjectured that there cannot be a polynomial-time algorithm for satisfiability (the problem is NP-complete), there cannot be a polynomial-time recognition algorithm for LFG's in general either. Note that this result sharpens that in Kaplan and Bresnan [8]: there it is shown only that LFG's (weakly) generate some subset of the class of context-sensitive languages (including some strictly context-sensitive languages) and therefore, in the worst case, exponential time is known to be sufficient (though not necessary) to recognize any LFG language. The result in [8] thus does not address the question of how much time, in the worst case, is necessary to recognize LFG languages. The result of this paper indicates that in the worst case more than polynomial time will probably be necessary. (The reason for the hedge "probably" will become apparent below; it hinges upon the central unsolved conjecture of current complexity theory.) In short then, this result places the LFG languages more precisely in the complexity hierarchy.

It also turns out to be instructive to inquire into just why a lexically-based approach can turn out to be computationally difficult, and how computational tractability may be guaranteed. Advocates of lexically-based theories may have thought (and some have explicitly stated) that the banishment of transformations is a computationally wise move because transformations are computationally "expensive." Eliminate the transformations, so this casual argument goes, and one has eliminated all computational problems. Intriguingly though, when one examines the proof to be given below, the computational work done by transformations in older theories re-emerges in the lexical grammar as the problem of choosing between alternative categorizations for lexical items -- deciding, in a manner of speaking, whether a particular terminal item is a Noun or a Verb (as with the word kiss in English). This power of choice, coupled with an ability to express co-occurrence constraints over arbitrary distances across terminal tokens in a string (as in Subject-Verb number agreement), seems to be all that is required to make the recognition of LFG languages intractable. The work done by transformations has been exchanged for work done by lexical schemas, but the overall computational burden remains roughly the same.

This leaves the question posed in the opening paragraph: just what sorts of constraints on natural languages are required in order to ensure efficient parsability? An informal argument can be made that Marcus' work [6] provides a good first attack on just this kind of characterization. Marcus' claim was that languages easily parsed (not "garden-pathed") by people could be precisely modeled by the languages easily parsed by a certain type of restricted, deterministic, two-stack parsing machine. But this machine can be shown to be a (weak) non-canonical extension of the LR(k) grammars, as proposed by Knuth [7]. Finally, this paper will discuss the relevance of this technical result for more down-to-earth computational linguistics. As it turns out, even though general LFG's may well be computationally intractable, it is easy to imagine a variety of additional constraints for LFG theory that provide a way to sidestep around the reduction argument. All of these additional restrictions amount to making the LFG theory more restricted, in such a way that the reduction argument cannot be made to work. For example, one effective restriction is to stipulate that there can only be a finite stock of features with which to label lexical items. In any case, the moral of the story is an unsurprising one: specificity and constraints can absolve a theory of computational intractability. What may be more surprising is that the requisite locality constraints seem to be useful for a variety of theories of grammar, from transformational grammar to lexical functional grammar.

2. A REVIEW OF REDUCTION ARGUMENTS

The demonstration of the computational complexity of LFGs relies upon the standard complexity-theoretic technique of reduction. Because this method may be unfamiliar to many readers, a short review is presented immediately below; this is followed by a sketch of the reduction proper. The idea behind the reduction technique is to take a difficult problem, in this case the problem of determining the satisfiability of Boolean formulas in conjunctive normal form (CNF), and show that the known problem can be quickly transformed into the problem whose complexity remains to be determined, in this case the problem of deciding whether a given string is in the language generated by a given Lexical Functional Grammar. Before the reduction proper is reviewed, some definitional groundwork must be presented. A Boolean formula in conjunctive normal form is a conjunction of disjunctions. A formula is satisfiable just in case there exists some assignment of T's and F's (or 1's and 0's) to the literals Xi of the formula that forces the evaluation of the entire formula to be T; otherwise, the formula is said to be unsatisfiable. For example,

(X2 ∨ X3 ∨ X7) ∧ (X1 ∨ ¬X2 ∨ X4) ∧ (X3 ∨ X1 ∨ X7)

is satisfiable, since the assignment of X2=T (hence ¬X2=F), X3=F (hence ¬X3=T), X7=F (¬X7=T), X1=T (¬X1=F), and X4=F makes the whole formula evaluate to "T". The reduction in the proof below uses a somewhat more restricted format where every term is comprised of the disjunction of exactly three literals, so-called 3-CNF (or "3-SAT"). This restriction entails no loss of generality (see Hopcroft and Ullman [9], Chapter 12), since this restricted format is also NP-complete. How does a reduction show that the LFG recognition problem must be at least as hard (computationally speaking) as the original problem of Boolean satisfiability?
but the overall computational burden remains mugidy the same. This leaves the question posed in the opening paragraph: jug what sorts of constraints on natural languages are required in order to ensure efficient parsabil)tg? An infoqrln~ argume.nt can be made that Marcus' work [6} provides a good first attack on just this kind of characteriza~n. M~x:us' claim was that languages easily parsed {not "garden-pathed") by o¢oole could be precisely modeled by the languages easily pm'sed by a certain type of restricted, deterministic, two-stack parsing machine. But this machine can be spawn to be a (weak) non-canonical extension of the I,R(k) grammars, as proposed by Knuth [51. Finally, this paper will discuss the relevance of this technical result for more down-to-earth computational linguistics. As it turns out, even though 2eneral LFG's may well be computationally intractable, it is easy to imagine a variety of additional constraints for I..FG theory that provide a way to sidestep arovr,d the reduction argument. All of these additional r~trictions amount to making the LFG theory more restricted, in such a way that the reduction argument cannot be made to work. For example, one effective restriction is to stipulate that there can only be a finite stock of features with which to label Icxical items. In any case, the moral of the story is an unsurprising one: specificity and constraints can absolve a theory of computational intr~tability. What may be more surprising is that the requisite locality constraints seem to be useful for a variety of theories of grammar, from transformational grmnmar to lexieal functional gr,'unmar. 7 2. A REVIEW Ok" 131:DU,,eTI'ION ARGUMENTS The demonstration of the computational complexity of I.FGs rcii~ upon the standard complexity-theoretic technique of reduction. Becauso this method may be unf.',,ndiar to many readers, a short review is presented immediately below: this is followed by a sketch of the reduction proper. The idea behind the reduction technique is to take a difficult problem, in this case. the problem of determining the satisfiability of Boolean .rormu/as in conjunctive normal form (CNF), and show that the known problem can be quickly transfumled into the problem whns¢ complexity remains to be determined, in this case. the problem of deciding whether a given string is in the language generated by a given Lexical Functional Grammar. Before the reduction proper is reviewed, some definitional groundwork must be presented, A I]ooleanformula in cenjunctDe normal form is a conjunction of disjunctions. A formula is satisfiable just in case there exkts some assignment of T's and ['~s (or t's and 0's) to the Iiterals of the formula X i that fumes the evahmtion of the enure formula to be 1"; oLherwise~ the formula is said to be unsmisfiable. For cxmnpl¢ (X2VX3 VXT)A(XIV~2VX4)A(X3VXIVX 7 ) is satisfiable, since the assignment of Xz=T (hence ~'2= F'), X3= F (hence X3='l'). XT=F (.~./=T). XI=T (XI=F), and X4=F makes the whole formula cvalute to "T". The reductioo in the proof below uses a somewhat more restuictcd format where every term is comprised of the disjunction of exacdy three [itcrats, so-called 3-CNF(or "3-SAT"). "l'his restriction entails no loss of" gcncralit!,, (see Hopcmft and Ullman, [9]. Chapter 12), since this restricted furmat is also NP-complete. How does a reduction show that the LFG recognition problem must be at least .',s hard (computatiomdly speaking) as the original problem of Boolean satisfiability? 
The answer to that question is that any fast decision procedure for LFG recognition could be used as a correspondingly fast procedure for 3-CNF, as follows:

(1) Given an instance of a 3-CNF problem (the question of whether there exists a satisfying assignment for a given formula in 3-CNF), apply the transformational algorithm provided by the reduction; this algorithm is itself assumed to execute quickly, in polynomial time or less. The algorithm outputs a corresponding LFG decision problem, namely: (i) a lexical functional grammar and (ii) a string to be tested for membership in the language generated by the LFG. The LFG recognition problem represents or mimics the decision problem for 3-CNF in the sense that the "yes" and "no" answers to both the satisfiability problem and the membership problem must coincide (if there is a satisfying assignment, then the corresponding LFG decision problem should give a "yes" answer, etc.).

(2) Solve the LFG decision problem -- the string-LFG pair output by Step 1: if the string is in the LFG language, the original formula was satisfiable; if not, unsatisfiable.

(Note that the grammar and string so constructed depend upon just what formula is under analysis; that is, for each different CNF formula, the procedure presented above outputs a different LFG grammar and string combination. In the LFG case it is important to remember that "grammar" really means "grammar plus lexicon" -- as one might expect in a lexically-based theory. S. Peters has observed that a slightly different reduction allows one to keep most of the grammar fixed across all possible input formulas, constructing only different-sized lexicons for each different CNF formula; for details, see below.)

To see how a reduction can tell us something about the "worst case" time or space complexity required to recognize whether a string is or is not in an LFG language, suppose for example that the decision procedure for determining whether a string is in an LFG language takes polynomial time (that is, takes time n^k on a deterministic Turing machine, for some integer k, where n is the length of the input string). Then, since the composition of two polynomial algorithms can be readily shown to take only polynomial time (see [9], Chapter 12), the entire process sketched above, from input of the CNF formula to the decision about its satisfiability, would take only polynomial time. However, CNF (or 3-CNF) has no known polynomial time algorithm, and indeed, it is considered exceedingly unlikely that one could exist. Therefore, it is just as unlikely that LFG recognition could be done (in general) in polynomial time.

The theory of computational complexity has a much more compact term for problems like CNF: CNF is NP-complete. This label is easily deciphered: (1) CNF is in the class NP, that is, the class of languages that can be recognized by a non-deterministic Turing machine in polynomial time. (Hence the abbreviation "NP", for "non-deterministic polynomial". To see that CNF is in the class NP, note that one can simply guess all possible combinations of truth assignments to literals, and check each guess in polynomial time.) (2) CNF is complete, that is, all other languages in the class NP can be quickly reduced to some CNF formula. (Roughly, one shows that Boolean formulas can be used to "simulate" any valid computation of a non-deterministic Turing machine.)

Since the class of problems solvable in polynomial time on a deterministic Turing machine (conventionally notated
P) is trivially contained in the class so solved by a non-deterministic Turing machine, the class P must be a subset of the class NP. A well-known, well-studied, and still open question is whether the class P is a proper subset of the class NP, that is, whether there are problems solvable in non-deterministic polynomial time that cannot be solved in deterministic polynomial time. Because all of the several thousand NP-complete problems now catalogued have so far proved recalcitrant to deterministic polynomial time solution, it is widely held that P must indeed be a proper subset of NP, and therefore that the best possible algorithms for solving NP-complete problems must take more than polynomial time. (In general, the algorithms now known for such problems involve exponential combinatorial search, in one fashion or another; these are essentially methods that do no better than to brutally simulate -- deterministically, of course -- a non-deterministic machine that "guesses" possible answers.)

To repeat the force of the reduction argument then: if all LFG recognition problems were solvable in polynomial time, then the ability to quickly reduce CNF formulas to LFG recognition problems would imply that all NP-complete problems were solvable in polynomial time, and that the class P equals the class NP. This possibility seems extremely remote. Hence, our assumption that there is a fast (general) procedure for recognizing whether a string is or is not in the language generated by an arbitrary LFG grammar must be false. In the terminology of complexity theory, LFG recognition must be NP-hard -- "as hard as" any other NP problem, including the NP-complete problems. This means only that LFG recognition is at least as hard as other NP-complete problems -- it could still be more difficult (lie in some class that contains the class NP). If one could also show that the languages generated by LFG's are in the class NP, then LFG recognition would be shown to be NP-complete. This paper stops short of proving this last claim, but simply conjectures that LFG recognition is in the class NP.

3. A SKETCH OF THE REDUCTION

To carry out this demonstration in detail one must explicitly describe the transformation procedure that takes as input a formula in CNF and outputs a corresponding LFG decision problem -- a string to be tested for membership in an LFG language and the LFG itself. One must also show that this can be done quickly, in a number of steps proportional to (at most) the length of the original formula raised to some polynomial power. Let us dispose of the last point first. The string to be tested for membership in the LFG language will simply be the original formula, sans parentheses and logical symbols; the LFG recognition problem is to find a well-formed derivation of this string with respect to the grammar to be provided. Since the actual grammar and string one has to write down to "simulate" the CNF problem turn out to be no worse than linearly larger than the original formula, an upper bound of, say, time n-cubed (where n is the length of the original formula) is more than sufficient to construct a corresponding LFG; thus the reduction procedure itself can be done in polynomial time, as required. This paper will therefore have nothing further to say about the time bound on the transformation procedure.

Some caveats are in order before embarking on a proof sketch of this reduction. First of all, the relevant details of the LFG theory will have to be covered on the fly; see [8] for more discussion.
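(The caveats continue in the next paragraph. First, though, the two-step shape of the argument just reviewed can be put in a few lines of Python. Everything here is a stand-in: `build_lfg_from` and `lfg_recognize` are hypothetical names for the grammar-plus-lexicon construction of this section and for a general LFG recognizer, respectively, and no recognizer is being supplied -- the point is only that a fast recognizer would yield a fast satisfiability test.)

```python
def formula_to_string(clauses):
    """The membership string: the formula with parentheses and connectives
    stripped, one terminal per literal occurrence.  (Whether negated
    occurrences carry their own terminal symbol is an assumption of this
    sketch; the grammar construction is spelled out below.)"""
    return [("~X%d" if lit < 0 else "X%d") % abs(lit)
            for clause in clauses for lit in clause]

def build_lfg_from(clauses):
    """Stand-in for the grammar-plus-lexicon construction of this section."""
    raise NotImplementedError

def lfg_recognize(grammar, string):
    """Stand-in for a general LFG recognizer; none is supplied here."""
    raise NotImplementedError

def satisfiable_via_lfg(clauses):
    """3-CNF satisfiability, *if* a fast general LFG recognizer existed."""
    grammar = build_lfg_from(clauses)        # polynomial-time construction
    string = formula_to_string(clauses)      # linear-time construction
    return lfg_recognize(grammar, string)    # membership <=> satisfiability
```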
A further caveat: the grammar that is output by the reduction procedure will not look very much like a grammar for a natural language, although the grammatical devices that will be employed are in every way those that are an essential part of the LFG theory (namely, feature agreement, the lexical analog of Subject or Object "control", lexical ambiguity, and a garden-variety context-free grammar). In other words, although it is most unlikely that any natural language would encode the satisfiability problem (and hence be intractable) in just the manner outlined below, on the other hand no "exotic" LFG machinery is used in the reduction. Indeed, some of the more powerful LFG notational formalisms -- long-distance binding, existential and negative feature operators -- have not been exploited. (An earlier proof made use of an existential operator in the feature machinery of LFG, but the reduction presented here does not.)

To make good this demonstration one must set out just what the satisfiability problem is and what the decision problem for membership in an LFG language is. Recall that a formula in conjunctive normal form is satisfiable just in case every conjunctive term evaluates to true, that is, at least one literal in each term is true. The satisfiability problem is to find an assignment of T's and F's to the literals at the bottom (note that the complement of a literal is also permitted) such that the root node at the top gets the value "T" (for true). How can we get a lexical functional grammar to represent this problem? What we want is for satisfying assignments to correspond to well-formed sentences of some corresponding LFG grammar, and non-satisfying assignments to correspond to sentences that are not well-formed according to the LFG grammar.

Figure 1. A Reduction Must Preserve Solutions to the Original Problem. (A satisfiable formula w maps to a sentence w' that IS in the LFG language L(G); a non-satisfiable formula maps to a sentence w'' that is NOT in L(G).)

Since one wants the satisfying/non-satisfying assignments of any particular formula to map over into well-formed/ill-formed sentences, one must obviously exploit the LFG machinery for capturing well-formedness conditions for sentences. First of all, an LFG contains a base context-free grammar. A minimal condition for a sentence (considered as a string) to be in the language generated by a lexical-functional grammar is that it can be generated by this base grammar; such a sentence is then said to have a well-formed constituent structure. For example, if the base rules included S → NP VP and VP → V NP, then (glossing over details of Noun Phrase rules) the sentence John kissed the baby would be well-formed but John the baby would not. Note that this assumes, as usual, the existence of a lexicon that provides a categorization for each terminal item, e.g., that baby is of the category N, kissed is a V, etc. Importantly then, this well-formedness condition requires us to provide at least one legitimate parse tree for the candidate sentence that shows how it may be derived from the underlying LFG base context-free grammar. (There could be more than one legitimate tree if the underlying grammar is ambiguous.) Note further that the choice of categorization for a lexical item may be crucial: if baby were assumed to be of category V, then both sentences above would be ill-formed.

A second major component of the LFG theory is the provision for adding a set of so-called functional equations to the base context-free rules.
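(Before turning to those equations, the constituent-structure condition just described can be made concrete. The toy grammar, lexicon, and CKY-style recognizer below are illustrative assumptions -- ordinary context-free recognition, not LFG-specific machinery -- chosen only to show how the choice of lexical categorization decides whether a parse tree exists.)

```python
# Toy base grammar and lexicon for the "John kissed the baby" example.
RULES = {            # binary rules: (B, C) -> A  means  A -> B C
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
    ("D", "N"): "NP",
}
LEXICON = {          # a word may have several categorizations (cf. "baby")
    "John": {"NP"},
    "the": {"D"},
    "baby": {"N", "V"},
    "kissed": {"V"},
}

def has_constituent_structure(words):
    """CKY recognition: is there at least one parse tree rooted in S?"""
    n = len(words)
    # chart[i][j] = set of categories spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b in chart[i][k]:
                    for c in chart[k][j]:
                        if (b, c) in RULES:
                            chart[i][j].add(RULES[(b, c)])
    return "S" in chart[0][n]

print(has_constituent_structure("John kissed the baby".split()))  # True
print(has_constituent_structure("John the baby".split()))         # False
```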
The functional equations just mentioned are used to account for the co-occurrence restrictions that are so much a part of natural languages (e.g., Subject-Verb agreement). Roughly, one is allowed to associate features with lexical entries and with the non-terminals of specified context-free rules; these features have values. The equation machinery is used to pass features in certain ways around the parse tree, and conflicting values for the same feature are cause for rejecting a candidate analysis. To take the Subject-Verb agreement example, consider the sentence the baby is kissing John. The lexical entry for baby (considered as a Noun) might have the Number feature, with the value singular. The lexical entry for is might assert that the Number feature of the Subject above it in the parse tree must have the value singular; meanwhile, the feature values for the Subject are automatically found by another rule (associated with the Noun Phrase portion of S → NP VP) that grabs whatever features it finds below the NP node and copies them up to the S node. Thus the S node gets the Subject feature, with whatever value has been passed up from baby below -- namely, the value singular; this accords with the dictates of the verb is, and all is well. Similarly, in the sentence the boys in the band is kissing John, boys passes up the Number value plural, and this clashes with the verb's constraint; as a result this sentence is judged ill-formed.

Figure 2. Co-occurrence Restrictions are Enforced by Feature Checking in an LFG. (The Subject's Number feature, copied up from the boys in the band, is plural, while the verb is demands singular: the two values clash and the analysis is rejected.)

It is important to note that the feature compatibility check requires (1) a particular constituent structure tree (a parse tree); and (2) an assignment of terminal items (words) to lexical categories -- e.g., in the first Subject-Verb agreement example above, baby was assigned to be of the category N, a Noun. The tree is obviously required because the feature checking machinery propagates values according to the links specified by the derivation tree; the assignment of terminal items to categories is crucial because in most cases the values of features are derived from those listed in the lexical entry for an item (as the value of the Number feature was derived from the lexical entry for the Noun form of baby). One and the same terminal item can have two distinct lexical entries, corresponding to distinct lexical categorizations; for example, baby can be both a Noun and a Verb. If we had picked baby to be a Verb, and hence had adopted whatever features are associated with the Verb entry for baby to be propagated up the tree, then the string that was previously well-formed, the baby is kissing John, would now be considered deviant. If a string is ill-formed under all possible derivation trees and assignments of features from possible lexical categorizations, then that string is not in the language generated by the LFG. The possibility of multiple derivation trees and lexical categorizations (and hence multiple feature bundles) for one and the same terminal item plays a crucial role in the reduction proof: it is intended to capture the satisfiability problem of deciding whether to give a literal X_i a value of "T" or "F".

Finally, LFG also provides a way to express the familiar patterning of grammatical relations (e.g., "Subject" and "Object") found in natural language. For example, transitive verbs must have objects.
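(The effect of that feature checking can be mimicked in a few lines; the dictionary-based merge below is only an illustration of the clash behaviour described above, not the formal machinery of [8].)

```python
class Clash(Exception):
    """Raised when two values for the same feature disagree."""

def merge(*bundles):
    """Union a sequence of feature bundles, rejecting conflicting values."""
    merged = {}
    for bundle in bundles:
        for feature, value in bundle.items():
            if feature in merged and merged[feature] != value:
                raise Clash(f"{feature}: {merged[feature]} vs {value}")
            merged[feature] = value
    return merged

# "the baby is kissing John": Subject features copied up from the NP
subject = {"Number": "singular"}
verb_demand = {"Number": "singular"}      # contributed by the entry for "is"
print(merge(subject, verb_demand))        # {'Number': 'singular'} -- well-formed

# "the boys in the band is kissing John": plural Subject vs. singular demand
try:
    merge({"Number": "plural"}, {"Number": "singular"})
except Clash as e:
    print("ill-formed:", e)               # feature clash -> analysis rejected
```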
The fact that transitive verbs must have objects (expressed in an Aspects-style transformational grammar by subcategorization restrictions) is captured in LFG by specifying a so-called PRED (for predicate) feature with a Verb: the PRED can describe which grammatical relations, like "Subject" and "Object", must be filled in after feature passing has taken place in order for the analysis to be well-formed. For instance, a transitive verb like kiss might have the pattern kiss<(Subject)(Object)>, and thus demand that the Subject and Object (now considered to be "features") have some value in the final analysis. The values for Subject and Object might of course be provided from some other branch of the parse tree, via the feature propagation machinery; for example, the Object feature could be filled in from the Noun Phrase part of the VP expansion.

Figure 3. Predicate Templates Can Demand That a Subject or Object be Filled In. (In Sue kissed John, the S node's features include Subject: Sue and PRED: 'kiss<(Subject)(Object)>', with the Object supplied by the NP under the VP.)

But if the Object were not filled in, then the analysis is declared functionally incomplete, and is ruled out. This device is used to cast out sentences such as the baby kissed. So much for the LFG machinery that is required for the reduction proof. (There are additional capabilities in the LFG theory, such as long-distance binding, but these will not be called upon in the demonstration below.)

What then does the LFG representation of the satisfiability problem look like? Basically, there are three parts to the satisfiability problem that must be mimicked by the LFG: (1) the assignment of values to literals, e.g., X2="T", X4="F"; (2) the co-ordination of value assignments across intervening literals in the formula: the literal X2 can appear in several different terms, but one is not allowed to assign it the value "T" in one term and the value "F" in another (and the same goes for the complement of a literal: if X2 has the value "T", ¬X2 cannot have the value "T"); and (3) satisfiability must correspond to LFG well-formedness, i.e., each term has the truth value "T" just in case at least one literal in the term is assigned "T", and all terms must evaluate to "T". Let us now go over how these components may be reproduced in an LFG, one by one.

(1) Assignments: The input string to be tested for membership in the LFG language will simply be the original formula, sans parentheses and logical symbols; the terminal items are thus just a string of Xi's. Recall that the job of checking the string for well-formedness involves finding a derivation tree for the string, solving the ancillary co-occurrence equations (by feature propagation), and checking for functional completeness. Now, the context-free grammar constructed by the transformation procedure will be set up so as to generate a virtual copy of the associated formula, down to the point where literals Xi are assigned their values of "T" or "F". If the original CNF formula had n terms, this part of the grammar would look like:

S → T1 T2 ... Tn   (one Ti for each term)
Ti → Yi Yj Yk      (one triple of Y's per term)

Several comments are in order here. (1) The context-free base that is built depends upon the original CNF formula that is input, since the number of terms, n, varies from formula to formula.
In Stanley Peters' improved version of the reduction proof, the context-free base is fixed for all formulas, with rules of the form:

S → S' S
S' → T T T | T T F | T F T | F T T | ...   (the remaining expansions that have at least one T in each triple)

The Peters grammar works by recursing until the right number of terms is generated (any sentences that are too long or too short cannot be matched to the input formula). Thus, the number of terms in the original CNF formula need not be explicitly encoded into the base grammar. (2) The subscripts i, j, and k depend on the actual subscripts in the original formula. (3) The Yi are not terminal items, but are non-terminals. (4) This grammar will have to be slightly modified in order for the reduction to work, as will become apparent shortly.

Note that so far there are no rules to extend the parse tree down to the level of terminal items, the Xi. The next step does this and at the same time adds the power to choose between "T" and "F" assignments to literals. One includes in the context-free base grammar two productions deriving each terminal item Xi, namely, XiT → Xi and XiF → Xi, corresponding to an assignment of "T" or "F" to the formula literal Xi. (It is important not to get confused here between the literals of the formula -- these are terminal elements in the lexical functional grammar -- and the literals of the grammar -- the non-terminal symbols.) One must also add, obviously, the rules Yi → XiT | XiF, for each i, and corresponding rules for the negations of the variables (¬XiT → ¬Xi, and so on). Note that these are not "exotic" LFG rules: exactly the same sort of rule is required in the baby case, i.e., N → baby or V → baby, corresponding to whether baby is a Noun or a Verb. Now, the lexical entries for the "XiT" categorization of Xi will look very different from the "XiF" categorization of Xi, just as one might expect the N and V forms for baby to be different. Here is what the entries for the two categorizations of Xi look like:

Xi:  XiT   (↑ Truth-assignment) = T
           (↑ Assign Xi) = T
Xi:  XiF   (↑ Assign Xi) = F

The feature assignments for the negation of the literal Xi are simply the duals of the entries above (since the sense of "T" and "F" is reversed):

¬Xi:  ¬XiT   (↑ Truth-assignment) = T
             (↑ Assign Xi) = F
¬Xi:  ¬XiF   (↑ Assign Xi) = T

The role of the additional Truth-assignment feature will be explained below.

Figure 4. Sample Lexical Entries to Reproduce the Assignment of T's and F's to a Literal Xi.

The upward-directed arrows in the entries reflect the LFG feature propagation machinery. In the case of the XiT entry, for instance, they say to "make the Truth-assignment feature of the node above XiT have the value T, and make the Xi portion of the Assign feature of the node above have the value T." This feature propagation device is what reproduces the assignment of T's and F's to the CNF literals. If we have a triple of such elements and at least one of them is expanded out to XiT, then the feature propagation machinery of LFG will merge the common feature names into one large structure for the node above, reflecting the assignments made; moreover, the term will get a filled-in Truth-assignment value just in case at least one of the expansions selected an XiT path.

Figure 5. The LFG Feature Propagation Machinery is Used to Percolate Feature Assignments from the Lexicon. (For a terminal string Xi Xj Xk in which Xi takes the XiT path, the feature structure above the term contains Truth-assignment: T and an Assign bundle recording the values of Xi, Xj, and Xk.)

(The features are passed transparently through the intervening Yi nodes via the LFG "copy" device
(↑ = ↓); this simply means that all the features of the node below the node to which the "copy" up-and-down arrows are attached are to be the same as those of the node above the up-and-down arrows.) It is plain that this mechanism mimics the assignment of values to literals required by the satisfiability problem.

(2) Co-ordination of assignments: One must also guarantee that the Xi value assigned at one place in the tree is not contradicted by an Xi or ¬Xi elsewhere. To ensure this, we use the LFG co-occurrence agreement machinery: the Assign feature bundle is passed up from each term Ti to the highest node in the parse tree (one simply adds the (↑ = ↓) notation to each Ti rule in order to indicate this). The Assign feature at this node will thus contain the union of all the Assign feature bundles passed up by all terms. If any Xi values conflict, then the resulting structure is judged ill-formed. Thus, only compatible Xi assignments are well-formed.

Figure 6. The Feature Compatibility Machinery of LFG can Force Assignments to be Co-ordinated Across Terms. (If one term contributes (↑ Assign Xi) = T and another contributes (↑ Assign Xi) = F, the values clash at the topmost node.)

(3) Preservation of satisfying assignments: Finally, one has to reproduce the conjunctive character of the 3-CNF problem -- that is, a sentence is satisfiable (well-formed) iff each term has at least one literal assigned the value "T". Part of the disjunctive character of the problem has already been encoded in the feature propagation machinery presented so far: if at least one Xi in a term Tj expands to the lexical entry XiT, then the Truth-assignment feature gets the value T. This is just as desired. If one, two, or three of the literals Xi in a term select XiT, then Tj's Truth-assignment feature is T, and the analysis is well-formed. But how do we rule out the case where all three Xi's in a term select the "F" path, XiF? And how do we ensure that all terms have at least one T below them? Both of these problems can be solved by resorting to the LFG functional completeness constraint. The trick will be to add a Pred feature to a "dummy" node attached to each term; the sole purpose of this feature will be to refer to the feature Truth-assignment, just as the predicate template for the transitive verb kiss mentions the feature Object. Since an analysis is not well-formed if the "grammatical relations" a Pred mentions are not filled in from somewhere, this will have the effect of forcing the Truth-assignment feature to get filled in for every term. Since the "F" lexical entry does not have a Truth-assignment value, if all the Xi in a term triple select the XiF path (all the literals are "F"), then no Truth-assignment feature is ever picked up from the lexical entries, and that term never gets a Truth-assignment feature. This violates what the predicate template demands, and so the whole analysis is thrown out. (The ill-formedness is exactly analogous to the case where a transitive verb never gets an Object.) Since this condition is applied to each term, we have now guaranteed that each term must have at least one literal below it that selects the "T" path -- just as desired. To actually add the new predicate template, one simply adds a new (but dummy) branch to each term Ti, with the appropriate predicate constraint attached to it:
Figure 7. Predicates Can be Used to Force at Least One T Per Term. (Each term Ti gets an extra Dummy2 daughter whose lexical entry carries the predicate template 'dummy2<(↑ Truth-assignment)>'; only an XiT daughter supplies (↑ Truth-assignment) = T, so a term whose literals all take the F path is functionally incomplete.)

There is a final subtle point here: one must prevent the Pred and Truth-assignment features for each term from being passed up to the head S node. The reason is that if these features were passed up, then since the LFG machinery automatically merges the values of any features with the same name at the topmost node of the parse tree, the LFG machinery would form the union of the feature values for Pred and Truth-assignment over all terms in the analysis tree. The result would be that if any term had at least one "T" (hence satisfying the Truth-assignment predicate template in at least one term), then the Pred and Truth-assignment would get filled in at the topmost node as well. The string below would be well-formed if at least one term were "T", and this would amount to a disjunction of disjunctions (an "OR" of "OR"s), not quite what is sought. To eliminate this possibility, one must add a final trick: each term Ti is given separate Predicate, Truth-assignment, and Assign features, but only the Assign feature is propagated to the highest node in the parse tree as such. In contrast, the Predicate and Truth-assignment features for each term are kept "protected" from merger by storing them under separate feature headings labelled T1...Tn. The means by which just the Assign feature bundle is lifted out is the LFG analogue of the natural language phenomenon of Subject or Object "control", whereby just the features of the Subject or Object of a lower clause are lifted out of the lower clause to become the Subject or Object of a matrix sentence; the remaining features stay unmergeable because they stay protected behind the individually labelled terms. To actually "implement" this in an LFG one can add two new branches to each Term expansion in the base context-free grammar, as well as two "control" equation specifications that do the actual work of lifting the features from a lower clause to the matrix sentence.

Natural language case (from [8], pp. 43-45): The girl persuaded the baby to go. Part of the lexical entry for persuaded:

V    (↑ VCOMP Subject) = (↑ Object)

The notation (↑ VCOMP Subject) = (↑ Object) -- dubbed a "control equation" -- means that the features of the Object above the V(erb) node are to be the same as the features of the Subject of the verb complement (VCOMP). Hence the top-most node of the parse tree eventually has a feature bundle something like:

Subject:          {bundle of features for the NP subject "the girl"}
Predicate:        'persuade<(↑ Subject)(↑ Object)(↑ VComp)>'
Object:           {bundle of features for the NP object "the baby"}    <- copied
Verb Complement (VCOMP):
                  Subject:   {bundle of features for the NP subject "the baby"}
                  Predicate: 'go<(↑ Subject)>'

Note how the Object features have been copied from the Subject features of the Verb Complement, via the notation described above, but the Predicate features of the Verb Complement were left behind.
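(Before giving the satisfiability analogue of this control machinery, it may help to see, in one place, the behaviour the whole construction is meant to enforce. The brute-force sketch below mimics that behaviour directly -- choose a T or F lexical entry per literal occurrence, merge Assign values with a clash check, and demand a Truth-assignment in every term. It is an illustration of the intended effect only, not of the LFG formalism; the signed-integer clause encoding and all function names are assumptions of this sketch.)

```python
from itertools import product

def term_well_formed(choices, clause):
    """One term: per-literal feature contributions, in the spirit of Figures 4-7.

    choices[i] is True if the i-th literal occurrence takes its "T" lexical
    entry (XiT / ~XiT), False for the "F" entry.  Returns the Assign
    contributions, or None if the term clashes internally or picks up no
    Truth-assignment (the dummy Pred's functional-completeness demand).
    """
    assign = {}
    got_truth = False
    for took_T, lit in zip(choices, clause):
        var = abs(lit)
        # XiT says (Assign Xi)=T, ~XiT says (Assign Xi)=F; the F entries are duals
        value = (lit > 0) == took_T
        if var in assign and assign[var] != value:
            return None                       # clash inside the term itself
        assign[var] = value
        got_truth = got_truth or took_T
    return assign if got_truth else None      # no T below -> functionally incomplete

def lfg_style_well_formed(clauses):
    """Does some choice of lexical categorizations yield a well-formed analysis?"""
    for choice in product([True, False], repeat=3 * len(clauses)):
        global_assign = {}
        ok = True
        for t, clause in enumerate(clauses):
            local = term_well_formed(choice[3 * t: 3 * t + 3], clause)
            if local is None:
                ok = False
                break
            for var, val in local.items():    # Assign bundles merge at the S node
                if var in global_assign and global_assign[var] != val:
                    ok = False                # co-occurrence clash across terms
                    break
                global_assign[var] = val
            if not ok:
                break
        if ok:
            return True                       # formula satisfiable <-> string in L(G)
    return False

print(lfg_style_well_formed([(2, 3, 7), (1, -2, 4), (3, 1, 7)]))   # True
print(lfg_style_well_formed([(1, 1, 1), (-1, -1, -1)]))            # False
```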
The satisfiability analogue of the persuaded control machinery is almost identical. In the phrase structure tree, each term Ti is given an Ai daughter and a TiCOMP daughter alongside the Dummy branch, and one attaches a "control equation" to the Ai node that forces the Assign feature bundle from the TiCOMP side to be lifted up and merged into the Assign feature bundle of the Ti node (and then, in turn, merged at the topmost node of the tree by the usual full-copy up-and-down arrows):

(↑ TiCOMP Assign) = (↑ Assign)

Note how this is just like the copying of the Subject features of a Verb Complement into the Object position of a matrix clause.

4. RELEVANCE OF COMPLEXITY RESULTS AND CONCLUSIONS

The demonstration of the previous section shows that LFG's have enough power to "simulate" a probably computationally intractable problem. But what are we to make of this result? On the positive side, a complexity result such as this one places the LFG theory more precisely in the hierarchy of complexity classes. If we conjecture, as seems reasonable, that LFG language recognition is actually in the class NP (that is, LFG recognition can be done by a non-deterministic Turing machine in polynomial time), then LFG language recognition is NP-complete. (This conjecture seems reasonable because a non-deterministic Turing machine should be able to "guess" all feature propagation solutions using its non-deterministic power -- including any "long-distance" binding solutions, an LFG device not discussed here. Since checking candidate solutions is quite rapid -- it can be done in n^2 time or less, as described in [8] -- recognition should be possible in polynomial time on such a non-deterministic machine.)

Comparing this result to other known language classes, note that context-sensitive language recognition is in the class polynomial space ("PSPACE"), since (non-deterministic) linear bounded automata generate exactly the class of context-sensitive languages. (Non-deterministic and deterministic polynomial space classes collapse together, because of Savitch's well-known result [9] that any function computable in non-deterministic space N can be computed in deterministic space N^2.) Furthermore, the class NP is clearly a subset of PSPACE (since if a function uses space N, it must use at least time N), and it is suspected, but not known for certain, that NP is a proper subset of PSPACE. (This is a form of the P=NP question once again.) Our conclusion is that it is likely that LFG's generate a proper subset of the context-sensitive languages. (In [8] it is shown that this includes some strictly context-sensitive languages.) It is interesting that several other "natural" extensions of the context-free languages -- notably, the class of languages generated by the so-called "indexed grammars" -- also generate a subset of the context-sensitive languages, including those strictly context-sensitive languages shown to be generable by LFG's in [8], but are provably NP-complete (see [2] for proofs). Indeed, a cursory look at the power of the indexed grammars at least suggests that they might subsume the machinery of the LFG theory; this would be a good conjecture to check.

On the other side of the coin, how might one restrict LFG theory further so as to avoid possible intractability? Several escape hatches immediately come to mind; these will simply be listed here. Note that all of these "fixes" have the effect of adding additional constraints to further restrict the LFG theory.

1. Rule out "worst case" languages as linguistically irrelevant. The probable computational intractability arises because co-occurrence restrictions (compatible assignment of Xi's) can be forced across arbitrary distances in the terminal string, in conjunction with lexical ambiguity for each terminal item. If some device can be found in natural languages that filters out or removes such ambiguity locally (so that the choice of whether an item is "T" or "F" never depends on other items arbitrarily far away in the terminal string), or if natural languages never employ such kinds of co-occurrence restrictions, then the reduction is theoretically relevant, but linguistically irrelevant. Note that such a finding would be a positive discovery, since one would be able to further restrict the LFG theory in its attempt to characterize all and only the natural languages. This discovery would be on a par with, for example, Peters and Ritchie's observation that although the context-sensitive phrase structure rules formally advanced in linguistic theory have the power to generate non-context-free languages, that power has apparently never been used in immediate constituent analysis [11].

2. Add "locality principles" for recognition (or parsing). One could simply stipulate that LFG languages meet some condition known to ensure efficient recognizability, e.g., Knuth's [7] LR(k) restriction, suitably extended to the case of context-sensitive languages. (See [10] for more discussion.)

3. Restrict the lexicon. The reduction depends crucially upon having an infinite stock of lexical items and an infinite number of features with which to label them -- several for each literal Xi. This is necessary because as CNF formulas grow larger and larger, the number of literals can grow arbitrarily large. If, for whatever reason, the stock of lexical items or feature labels is finite, then the reduction method must fail after a certain point. This restriction seems ad hoc in the case of lexical items, but perhaps less so in the case of features. (Speculating, perhaps features require "grounding" in terms of other language/cognitive sub-systems -- e.g., a feature might be required to be one of a finite number of primitive "basis" elements of a hypothetical conceptual or sensori-motor cognitive system.)

ACKNOWLEDGEMENTS

I would like to thank Ron Kaplan, Ray Perrault, Christos Papadimitriou, and particularly Stanley Peters for various discussions about the contents of this paper. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Office of Naval Research under Office of Naval Research contract N00014-80-C-0508.

REFERENCES

[1] Peters, S. and Ritchie, R. "On the generative power of transformational grammars." Information Sciences 6, 1973, pp. 49-83.
[2] Rounds, W. "Complexity of recognition in intermediate-level languages." Proceedings of the 14th Ann. Symp. on Switching Theory and Automata, 1973.
[3] Rounds, W. "A grammatical characterization of exponential-time languages." Proceedings of the 16th Ann. Symp. on Switching Theory and Automata, 1975, pp. 135-143.
[4] Chomsky, N. Rules and Representations. New York: Columbia University Press, 1980.
[5] Berwick, R. and Weinberg, A. The Role of Grammars in Models of Language Use. Unpublished MIT report, forthcoming, 1981.
[6] Marcus, M. A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press, 1980.
[7] Knuth, D. "On the translation of languages from left to right." Information and Control 8, 1965, pp. 607-639.
[8] Kaplan, R. and Bresnan, J. Lexical-Functional Grammar: A Formal System for Grammatical Representation. Cambridge, MA: MIT Cognitive Science Occasional Paper #13, 1981. (Also forthcoming in Bresnan, ed., The Mental Representation of Grammatical Relations, Cambridge, MA: MIT Press, 1981.)
[9] Hopcroft, J. and Ullman, J. Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley, 1979.
[10] Berwick, R. Locality Principles and the Acquisition of Syntactic Knowledge. MIT PhD dissertation, forthcoming, 1981.
[11] Peters, S. and Ritchie, R. "Context-sensitive immediate constituent analysis: context-free languages revisited." Mathematical Systems Theory 6:4, 1973, pp. 324-333.
PERSPECTIVES ON PARSING ISSUES

Jane J. Robinson, Chair
Artificial Intelligence Center
SRI International

Nowhere is the tension between the two areas of our field -- computation and linguistics -- more apparent than in the issues that arise in connection with parsing natural language input. This panel addresses those issues from both computational and linguistic perspectives. Each panelist has submitted a position paper on some of the questions that appear below. The questions are loosely grouped in three sections. The first concentrates on the computational aspect, the second on the linguistic aspect, and the third on their interactions.

A preliminary definition: For purposes of providing common ground or possibly a common point of departure at the outset, I will define parsing as the assigning of labelled syntactic structure to an input by applying a grammar that defines syntactically well-formed sentences and phrases. Note that the question of whether the grammar does other things as well is left open. In this sense, parsing is distinguished from interpretation, which may take many forms, such as assigning representations in an unambiguous formal language and integrating those representations into a data base or into a hearer's belief system.

The questions:

1. The Computational Perspective: What useful purposes, if any, are served by distinguishing parsing from interpretation? Is computational efficiency increased? Is system building made easier? Or is an insistence on parsing a hindrance? (Can we compute an interpretation better without assigning labelled syntactic structures?) Computational linguists, using available computational equipment that is almost exclusively serial in design, have devised parsing algorithms that involve serial search. Yet it is obvious that many parts of the parsing process could be done in parallel. How might notions of parallel processing, VLSI, and the like change our views on parsing? What might motivate our trying to make parsing procedures simulate human behavior, e.g., by intermixing syntactic with semantic and pragmatic processing? And for that matter, how do we know what human processing is like? Do our intuitions agree and are they to be trusted?

2. The Linguistic Perspective: Have our tools (computers and formal grammars) warped our views of what human languages and human language processing may be like? What legitimate inferences about human linguistic competence and performance can we draw from our experiences with mechanical parsing of formal grammars? Our most efficient parsing algorithms are for context-free (and even regular) grammars. Does this suggest that the core of grammars for natural languages is context-free or even regular?

3. The Interactions: Why do we usually have one grammar and procedure for sentence recognition and another grammar and procedure for sentence generation? Do we need a different pair for each direction? What is the nature of the relationship between a grammar and a procedure for applying it? Are we influenced in the way we devise computational grammars by the algorithms we expect to apply to them? Can a grammar be psychologically valid (validated) independently of the parsing algorithm that works with it? Can a parsing algorithm be psychologically valid (validated) independently of the grammar?

The discussion to follow: The position papers will serve to focus the discussion.
That discussion may take the form of a debate about the best methods for language processing, but it can also be viewed as a gathering of diverse experiences with processing natural language.
SOME ISSUES IN PARSING AND NATURAL LANGUAGE UNDERSTANDING

Robert J. Bobrow
Bolt Beranek and Newman Inc.

Bonnie L. Webber
Department of Computer & Information Science
University of Pennsylvania

Language is a system for encoding and transmitting ideas. A theory that seeks to explain linguistic phenomena in terms of this fact is a functional theory. One that does not misses the point. [10]

PREAMBLE

Our response to the questions posed to this panel is influenced by a number of beliefs (or biases!) which we have developed in the course of building and analyzing the operation of several natural language understanding (NLU) systems. [1, 2, 3, 12] While the emphasis of the panel is on parsing, we feel that the recovery of the syntactic structure of a natural language utterance must be viewed as part of a larger process of recovering the meaning, intentions and goals underlying its generation. Hence it is inappropriate to consider designing or evaluating natural language parsers or grammars without taking into account the architecture of the whole NLU system of which they're a part.(1) This is the premise from which our beliefs arise, beliefs which concern two things:

o the distribution of various types of knowledge, in particular syntactic knowledge, among the modules of an NLU system

o the information and control flow among those modules.

As to the first belief, in the NLU systems we have worked on, most syntactic information is localized in a "syntactic module", although that module does not produce a unified data structure representing the syntactic description of an utterance. Thus, if "parsing" is taken as requiring the production of such a unified structure, then we do not believe in its necessity. However we do believe in the existence of a module which provides syntactic information to those other parts of the system whose decisions ride on it. As to the second belief, we feel that syntax, semantics and pragmatics effectively constitute parallel but interacting processors, and that information such as local syntactic relations is determined by joint decisions among them. Our experience shows that with minimal loss of efficiency, one can design these processors to interface cleanly with one another, so as to allow independent design, implementation and modification. We spell out these beliefs in slightly more detail below, and at greater length in [4].

(1) We are not claiming that the only factors shaping a parser or a grammar, beyond syntactic considerations, are things like meaning, intention, etc. There are clearly mechanical and memory factors, as well as laziness -- a speaker's penchant for trying to get away with the minimal level of effort needed to accomplish the task!

The Computational Perspective

The first set of questions to this panel concerns the computational perspective, and the useful purposes served by distinguishing parsing from interpretation. We believe that syntactic knowledge plays an important role in NLU. In particular, we believe that there is a significant type of utterance description that can be determined on purely syntactic grounds(2), albeit not necessarily uniquely. This description can be used to guide semantic and discourse-level structure recovery processes such as interpretation, anaphoric resolution, focus tracking, given/new distinctions, ellipsis resolution, etc. in a manner that is independent of the lexical and conceptual content of the utterance.

(2) That is, solely on the basis of syntactic categories/features and ordering information.
There are several advantages to factoring out such knowledge from the remainder of the NLU system and providing a "syntactic module" whose interactions with the rest of the system provide information on the syntactic structure of an utterance. The first advantage is to simplify system building, as we know from experience [1, 2, 3, 4, 5, 12]. Once the pattern of communication between processors is settled, it is easier to attach a new semantics to the hooks already provided in the grammar than to build a new semantic processor. In addition, because each module has only to consider a portion of the constraints implicit in the data (e.g. syntactic constraints, semantic constraints and discourse context), each module can be designed to optimize its own processing and provide an efficient system.

The panel has also been charged with considering parallel processing as a challenge to its views on parsing. This touches on our beliefs about the interaction among the modules that comprise the NLU system. To respond to this issue, we first want to distinguish between two types of parallelism: one, in which many instances of the same thing are done at once (as in an array of parallel adders), and another, in which the many things done simultaneously can be different. Supporting this latter type of parallelism doesn't change our view of parsing, but rather underlies it. We believe that the interconnected processes involved in NLU must support a basic operating principle that Norman and Bobrow [14] have called "The Principle of Continually Available Output" (CAO). This states that the interacting processes must begin to provide output over a wide range of resource allocations, even before their analyses are complete, and even before all input data is available. We take this position for two reasons: one, it facilitates computational efficiency, and two, it seems to be closer to human parsing processes (a point which we will get to in answering the next question).

The requirement that syntactic analysis, semantic interpretation and discourse processing must be able to operate in (pseudo-)parallel, obeying the CAO principle, has sparked our interest in the design of pairs of processes which can pass forward and backward useful information/advice/questions as soon as possible. The added potential for interaction of such processors can increase the capability and efficiency of the overall NLU process. Thus, for example, if the syntactic module makes its intermediate decisions available to semantics and/or pragmatics, then those processors can evaluate those decisions, guide syntax's future behavior and, in addition, develop in parallel their own analyses. Having sent on its latest assertion/advice/question, whether syntax then decides to continue on with something else or wait for a response will depend on the particular kind of message sent. Thus, the parsers and grammars that concern us are ones able to work with other appropriately designed components to support CAO. While the equipment we are using to implement and test our ideas is serial, we take very seriously the notion of parallelism.

Finally, under the heading of "Computational Perspective", we are asked about what might motivate our trying to make parsing procedures simulate what we suspect human parsing processes to be like.
One motivation for us is the belief that natural language is so tuned to the part extraordinary, part banal cognitive capabilities of human beings that only by simulating human parsing processes can we cover all and only the language phenomena that we are called upon to process. A particular (extraordinary) aspect of human cognitive (and hence, parsing) behavior that we want to explore and eventually simulate is people's ability to respond even under degraded data or resource limitations. There are examples of listeners initiating reasonable responses to an utterance even before the utterance is complete, and in some cases even before a complete syntactic unit has been heard. Simultaneous translation is one notable example [8], and another is provided by the performance of subjects in a verbally guided assembly task reported by P. Cohen [6]. Such an ability to produce output before all input data is available (or before enough processing resources have been made available to produce the best possible response) is what led Norman and Bobrow to formulate their CAO Principle. Our interest is in architectures for NLU systems which support CAO and in search strategies through such architectures for an optimal interpretation.

The Linguistic Perspective

We have been asked to comment on legitimate inferences about human linguistic competence and performance that we can draw from our experiences with mechanical parsing of formal grammars. Our response is that whatever parsing is for natural languages, it is still only part of a larger process. Just because we know what parsing is in formal language systems, we do not necessarily know what role it plays in the context of total communication. Simply put, formal notions of parsing underconstrain the goals of the syntactic component of an NLU system. Efficiency measures, based on the resources required for generation of one or all complete parses for a sentence, without semantic or pragmatic interaction, do not necessarily specify desirable properties of a natural language syntactic analysis component.

As for whether the efficiency of parsing algorithms for CF or regular grammars suggests that the core of NL grammars is CF or regular, we want to distinguish that part of perception (and hence, syntactic analysis) which groups the stimulus into recognizable units from that part which fills in gaps in information (inferentially) on the basis of such groups. Results in CF grammar theory say that grouping is not best done purely bottom-up, that there are advantages to using predictive mechanisms as well [9, 7]. This suggests two things for parsing natural language:

1. There is a level of evidence and a process for using it that is working to suggest groups.

2. There is another filtering, inferencing mechanism that makes predictions and diagnoses on the basis of those groups.

It is possible that the grouping mechanism may make use of strategies applicable to CF parsing, such as well-formed substring tables or charts, without requiring that the overall language specification be CF. In our current RUS/PSI-KLONE system, grouping is a function of the syntactic module: its output consists of suggested groupings. These suggestions may be abstract, specific or disjunctive. For example, an abstract description might be "this is the head of an NP, everything to its left is a pre-modifier". Here there is no comment about exactly how these pre-modifiers group.
A disjunctive description would consist of an explicit enumeration of all the possibilities at some point (e.g., "this is either a time prepositional phrase (PP) or an agentive PP or a locative PP, etc."). Disjunctive descriptions allow us to prune possibilities via case analysis. In short, we believe in using as much evidence from formal systems as seems understandable and reasonable, to constrain what the system should be doing.

The Interactions

Finally, we have been asked about the nature of the relationship between a grammar and a procedure for applying it. On the systems-building side, our feeling is that while one should be able to take a grammar and convert it to a recognition or generation procedure [10], it is likely that such procedures will embody a whole set of principles that are control-structure related, and not part of the grammar. For example, a grammar need not specify in what order to look for things or in what order decisions should be made. Thus, one may not be able to reconstruct the grammar uniquely from a procedure for applying it. On the other hand, on the human parsing side, we definitely feel that natural language is strongly tuned to both people's means of production and their means of recognition, and that principles like McDonald's Indelibility Principle [13] or Marcus' Determinism Hypothesis [11] shape what are (and are not) seen as sentences of the language.

REFERENCES

1. Bobrow, R. J. The RUS System. BBN Report 3878, Bolt Beranek and Newman Inc., 1978.
2. Bobrow, R. J. & Webber, B. L. PSI-KLONE: Parsing and Semantic Interpretation in the BBN Natural Language Understanding System. Proceedings of the CSCSI/SCEIO Annual Conference, 1980.
3. Bobrow, R. J. & Webber, B. L. Knowledge Representation for Syntactic/Semantic Processing. Proceedings of the First Annual National Conference on Artificial Intelligence, American Association for Artificial Intelligence, 1980.
4. Bobrow, R. J. & Webber, B. L. Parsing and Semantic Interpretation as an Incremental Recognition Process. Proceedings of a Symposium on Modelling Human Parsing Strategies, Center for Cognitive Science, University of Texas, Austin TX, 1981.
5. Bobrow, R. J. & Webber, B. L. Systems Considerations for Search by Cooperating Processes: Providing Continually Available Output. Proceedings of the Sixth IJCAI, International Joint Conference on Artificial Intelligence, 1981.
6. Cohen, P. Personal communication; videotape of experimental task.
7. Earley, J. An efficient context-free parsing algorithm. Communications of the ACM 13 (February 1970), 94-102.
8. Goldman-Eisler, F. Psychological Mechanisms of Speech Production as Studied through the Analysis of Simultaneous Translation. In B. Butterworth, Ed., Language Production, Academic Press, 1980.
9. Graham, S., Harrison, M. and Ruzzo, W. An Improved Context-Free Recognizer. ACM Transactions on Programming Languages and Systems (July 1980).
10. Kay, M. An Algorithm for Compiling Parsing Tables from a Grammar. Proceedings of a Symposium on Modelling Human Parsing Strategies, Center for Cognitive Science, University of Texas, Austin TX, 1981.
11. Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, 1980.
12. Mark, W. S. & Barton, G. E. The RUS Grammar Parsing System. General Motors Research Laboratories, 1980.
13. McDonald, D. ???. Ph.D. Thesis, Massachusetts Institute of Technology, 1980.
14. Norman, D. & Bobrow, D. On Data-limited and Resource-limited Processes. CSL 74-2, Xerox PARC, May 1974.
PARSING

Ralph Grishman
Dept. of Computer Science
New York University
New York, N.Y.

One reason for the wide variety of views on many subjects in computational linguistics (such as parsing) is the diversity of objectives which lead people to do research in this area. Some researchers are motivated primarily by potential applications -- the development of natural language interfaces for computer systems. Others are primarily concerned with the psychological processes which underlie human language, and view the computer as a tool for modeling and thus improving our understanding of these processes. Since, as is often observed, man is our best example of a natural language processor, these two groups do have a strong commonality of research interest. Nonetheless, their divergence of objective must lead to differences in the way they regard the component processes of natural language understanding. (If -- when human processing is better understood -- it is recognized that the simulation of human processes is not the most effective way of constructing a natural language interface, there may even be a deliberate divergence in the processes themselves.) My work, and this position paper, reflect an applications orientation; those with different research objectives will come to quite different conclusions.

WHY PARSE?

One of the tasks of computer science in general, and of artificial intelligence in particular, is that of coping in a systematic fashion with systems of high complexity. Natural language interfaces certainly fit that characterization. A natural language interface must analyze input sequences, communicate with some underlying system (data base, robot, etc.), and generate responses. In the transition from the natural language input to the language of the underlying system there is in principle no need to make explicit reference to any intermediate structures; we could write our interface as a (huge) set of rules which map directly from input sequences into our target language. We know full well, however, that such a system would be nearly impossible to write, and certainly impossible to understand or modify. By introducing intermediate structures, we are able to divide the task into more manageable components.

Specific intermediate structures are of value insofar as they facilitate the expression of relationships which must be captured in the system -- relationships which would be more cumbersome to express using other representations. For example, the representations at the level of logical form (such as predicate calculus) are chosen to facilitate the computation of logical inferences. In the same way, a representation of constituent structure (a parse tree), if properly chosen, will facilitate the statement of many linguistic constraints and relationships. Grammatical constraints will enable the system to identify the pertinent syntactic category for many multiply classified words. Some constraints on anaphora (such as the notion of command) and on quantifier structure are also best stated in terms of surface structure. Equally important, many sentence relationships which must be captured at some point in the analysis (such as the relation between active and passive sentences or between reduced and expanded conjoinings) are most easily stated as transformations between constituent structures. By using syntactic transformations to regularize the constituent structure, we can substantially simplify the specification of the subsequent stages of analysis.

SPECIFICATION VS. PROCEDURE

The arguments just given for parse trees (and other intermediate structures) are arguments for how best to specify the transformations which a natural language input must undergo. They are not arguments for a particular language analysis procedure. A direct implementation of the simplest specifications does not necessarily yield the most efficient procedure; as our systems become more sophisticated, the distance from specification to implementation structure may increase. We should therefore favor formalisms which (because of their simple structure) can be automatically adapted to a variety of procedures. Among these variations are:

PARALLEL PROCESSING. Phrase structure grammars and augmented phrase structure grammars lend themselves naturally to parallel parsing procedures -- either top-down (following alternative expansions in parallel), bottom-up (trying alternative reductions in parallel), or a combination of the two. In particular, some of the parsing algorithms developed as part of the speech recognition research of the past decade are readily adaptable to parallel processing. To minimize parallelism, however, the grammatical constraints must be organized to minimize or at least postpone the interactions among the analyses of the various parts of a sentence.

ANALYSIS AND GENERATION. In the same way that sentence analysis involves a translation to a "deep structure," an increasing number of systems now include a generation component to translate from deep structure to sentences. If the mapping from sentence to deep structure is direct (without reference to a parse tree), the generation component may require a separate design effort. On the other hand, if the mapping is specified in terms of incremental transformations of the constituent structure, producing an inverse mapping may be relatively straightforward (and the greater the non-procedural content of the transformations, the easier it should be to reverse them).

AVOIDING THE PARSE TREE. To emphasize the distinction between specification and procedure, let me mention a possibility for an "optimizing" analyzer of the future: one whose specifications are given in terms of transformations of the constituent structure followed by interpretation of the regularized ("deep") structure, but whose implementation avoids actually constructing a parse tree. Instead, the transformations would be applied to the deep structure interpretation rules, producing a (much larger) set of rules for interpreting the input sequences directly. Some small experiments have been done in this direction (K. Konolige, "Capturing Linguistic Generalizations with Grammar Metarules," Proc. 18th Ann'l Meeting ACL, 1979). By avoiding explicit construction of a parse tree, we could accelerate the analysis procedure while retaining the descriptive advantages of independent, incremental transformations of constituent structure. While development of any such automatic grammar restructuring procedure would certainly be a difficult task, it does indicate the possibilities which open up when specification and implementation are separated.
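(A toy illustration of that rule-composition idea: the mapping below pre-applies a passive-to-active regularization to an interpretation rule for the active pattern, yielding a rule that interprets the passive input pattern directly, with no intermediate tree. The pattern notation and rules are invented for this sketch and are not Konolige's actual metarule formalism.)

```python
# Interpretation rules keyed by surface pattern: pattern -> predicate builder.
ACTIVE_RULES = {
    ("NP", "V", "NP"): lambda subj, verb, obj: (verb, subj, obj),
}

def passive_metarule(pattern, interpret):
    """Compose a passive->active regularization with an active-pattern rule,
    producing a rule that interprets the passive pattern directly."""
    if pattern == ("NP", "V", "NP"):
        passive_pattern = ("NP", "V-passive", "by", "NP")
        def interpret_passive(surface_subj, verb, _by, agent):
            # The surface subject is the deep object; the by-phrase is the deep subject.
            return interpret(agent, verb, surface_subj)
        return passive_pattern, interpret_passive
    return None

# Expand the rule set once, offline; no parse tree is built at run time.
compiled = dict(ACTIVE_RULES)
for pat, rule in ACTIVE_RULES.items():
    extra = passive_metarule(pat, rule)
    if extra:
        compiled[extra[0]] = extra[1]

print(compiled[("NP", "V", "NP")]("John", "kiss", "the baby"))
print(compiled[("NP", "V-passive", "by", "NP")]("the baby", "kiss", "by", "John"))
# both print ('kiss', 'John', 'the baby')
```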
A View of Parsing

Ronald M. Kaplan
Xerox Palo Alto Research Center

The questions before this panel presuppose a distinction between parsing and interpretation. There are two other simple and obvious distinctions that I think are necessary for a reasonable discussion of the issues. First, we must clearly distinguish between the static specification of a process and its dynamic execution. Second, we must clearly distinguish two purposes that a natural language processing system might serve: one legitimate goal of a system is to perform some practical task efficiently and well, while a second goal is to assist in developing a scientific understanding of the cognitive operations that underlie human language processing. I will refer to parsers primarily oriented towards the former goal as Practical Parsers (PP) and refer to the others as Performance Model Parsers (PMP). With these distinctions in mind, let me now turn to the questions at hand.

1. The Computational Perspective.

From a computational point of view, there are obvious reasons for distinguishing parsing from interpretation. Parsing is the process whereby linearly ordered sequences of character strings annotated with information found in a stored lexicon are transduced into labelled hierarchical structures. Interpretation maps such structures either into structures with different formal properties, such as logical formulas, or into sequences of actions to be performed on a logical model or database. On the face of it, unless we ignore the obvious formal differences between string-to-structure and structure-to-structure mappings, parsing is thus formally and conceptually distinct from interpretation. The specifications of the two processes necessarily mention different kinds of operations that are sensitive to different features of the input and express quite different generalizations about the correspondences between form and meaning. As far as I can see, these are simply factual assertions about which there can be little or no debate.

Beyond this level, however, there are a number of controversial issues. Even though parsing and interpretation operations are recognizably distinct, they can be combined in a variety of ways to construct a natural language understanding system. For example, the static specification of a system could freely intermix parsing and interpretation operations, so that there is no part of the program text that is clearly identifiable as the parser or interpreter, and perhaps no part that can even be thought of as more parser-like or interpreter-like than any other. Although the microscopic operations fall into two classes, there is no notion in such a system of separate parsing and interpretation components at a macroscopic level. Macroscopically, it might be argued, a system specified in this way does not embody a parsing/interpretation distinction. On the other hand, we can imagine a system whose static specification is carefully divided into two parts, one that only specifies parsing operations and expresses parsing generalizations and one that involves only interpretation specifications. And there are clearly untold numbers of system configurations that fall somewhere between these extremes.

I take it to be uncontroversial that, other things being equal, a homogenized system is less preferable on both practical and scientific grounds to one that naturally decomposes. Practically,
such a system is easier to build and maintain, since the parts can be designed, developed, and understood to a certain extent in isolation, perhaps even by people working independently. Scientifically, a decomposable system is much more likely to provide insight into the process of natural language comprehension, whether by machines or people. The reasons for this can be found in Simon's classic essay on the Architecture of Complexity, and in other places as well.

The debate arises from the contention that there are important "other things" that cannot be made equal, given a completely decomposed static specification. In particular, it is suggested that parsing and interpretation operations must be partially or totally interleaved during the execution of a comprehension process. For practical systems, arguments are advanced that a "habitable" system, one that human clients feel comfortable using, must be able to interpret inputs before enough information is available for a complete syntactic structure or when the syntactic information that is available does not lead to a consistent parse. It is also argued that interpretation must be performed in the middle of parsing in the interests of reasonable efficiency: the interpreter can reject sub-constituents that are semantically or pragmatically unacceptable and thereby permit early truncation of long paths of syntactic computation. From the performance model perspective, it is suggested that humans seem able to make syntactic, semantic, and pragmatic decisions in parallel, and the ability to simulate this capability is thus a condition of adequacy for any psycholinguistic model. All these arguments favor a system where the operations of parsing and interpretation are interleaved during dynamic execution, and perhaps even executed on parallel hardware (or wetware, from the PMP perspective). If parsing and interpretation are run-time indistinguishable, it is claimed, then parsing and interpretation must be part and parcel of the same monolithic process.

Of course, whether or not there is dynamic fusion of parsing and interpretation is an empirical question which might be answered differently for practical systems than for performance models, and might even be answered differently for different practical implementations. Depending on the relative computational efficiency of parsing versus interpretation operations, dynamic interleaving might increase or decrease overall system effectiveness. For example, in our work on the LUNAR system (Woods, Kaplan, & Nash-Webber, 1972), we found it more efficient to defer semantic processing until after a complete, well-formed parse had been discovered. The consistency checks embedded in the grammar could rule out syntactically unacceptable structures much more quickly than our particular interpretation component was able to do. More recently, Martin, Church, and Ramesh (1981) have claimed that overall efficiency is greatest if all syntactic analyses are computed in breadth-first fashion before any semantic operations are executed. These results might be taken to indicate that the particular semantic components were poorly conceived and implemented, with little bearing on systems where interpretation is done "properly" (or parsing is done improperly). But they do make the point that a practical decision on the dynamic fusion of parsing and interpretation cannot be made a priori, without a detailed study of the many other factors that can influence a system's computational resource demands.
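The distinction between a statically decomposed specification and a dynamically fused execution can be illustrated with a small sketch. The grammar format, the `sem_ok` filter, and the naive reducer below are hypothetical stand-ins, not the LUNAR design or any particular system; the point is only that the same separately stated grammar and semantic check can be scheduled either interleaved (pruning sub-constituents as they are proposed) or deferred (checking only complete analyses).

```python
# Statically separate components: a grammar and a semantic acceptability check.
GRAMMAR = [("NP", ["Det", "N"]), ("VP", ["V", "NP"]), ("S", ["NP", "VP"])]

def sem_ok(category, children):
    """Stand-in for the interpreter's check on a proposed constituent."""
    return True        # a real interpreter would reject anomalous constituents here

def parse(tagged_words, interleave=False):
    """Naive bottom-up reducer over (category, content) pairs; the scheduler,
    not the grammar, decides when semantics is consulted."""
    items, changed = list(tagged_words), True
    while changed:
        changed = False
        for lhs, rhs in GRAMMAR:
            for i in range(len(items) - len(rhs) + 1):
                if [cat for cat, _ in items[i:i + len(rhs)]] == rhs:
                    children = items[i:i + len(rhs)]
                    if interleave and not sem_ok(lhs, children):
                        continue                     # prune this syntactic path early
                    items[i:i + len(rhs)] = [(lhs, children)]
                    changed = True
    # deferred mode: semantic checks would instead be applied here, to the
    # complete analyses collected in `items`
    return items

print(parse([("Det", "the"), ("N", "dog"), ("V", "chased"), ("Det", "the"), ("N", "cat")],
            interleave=True))
```

Whether interleaving pays off then becomes exactly the empirical cost question discussed above, while the static description stays decomposed either way.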
Whatever conclusion we arrive at from practical considerations, there is no reason to believe that it will carry over to performance modelling. The human language faculty is an evolutionary compromise between the requirements that language be easy to learn, easy to produce, and easy to comprehend. Because of this, our cognitive mechanisms for comprehension may exhibit acceptable but not optimal efficiency, and we would therefore expect a successful PMP to operate with psychologically appropriate inefficiencies. Thus, for performance modelling, the question can be answered only by finding cases where the various hypotheses make crucially distinct predictions concerning human capabilities, errors, or profiles of cognitive load, and then testing these predictions in a careful series of psycholinguistic experiments. It is often debated, usually by non-linguists, whether the meta-linguistic intuitions that form the empirical foundation for much of current linguistic theory are reliable indicators of the native speaker's underlying competence. When it comes to questions about internal processing as opposed to structural relations, the psychological literature has demonstrated many times that intuitions are deserving of even much less trust. Thus, though we may have strong beliefs to the effect that parsing and interpretation are psychologically inseparable, our theoretical commitments should rather be based on a solid experimental footing. At this point in time, the experimental evidence is mixed: semantic and syntactic processes are interleaved on-line in many situations, but there is also evidence that these processes have a separate, relatively non-interacting run-time course.

However, no matter how the question of dynamic fusion is ultimately resolved, it should be clear that dynamic interleaving or parallelism carries no implication of static homogeneity. A system whose run-time behavior has no distinguishable components may nevertheless have a totally decomposed static description. Given this possibility, and given the evident scientific advantages that a decomposed static specification affords, I have adopted in my own research on these matters the strong working hypothesis that a statically decomposable system can be constructed to provide the necessary efficiencies for practical purposes and yet, perhaps with minor modifications and further stipulations, still support significant explanations of psycholinguistic phenomena. In short, I maintain the position that the "true" comprehension system will also meet our pre-theoretic notions of scientific elegance and "beauty". This hypothesis, that truth and beauty are highly correlated in this domain, is perhaps implausible, but it presents a challenge for theory and implementation that has held my interest and fascination for many years.

2. The Linguistic Perspective.

While it is certainly true that our tools (computers and formal grammars) have shaped our views of what human languages and human language processing may be like, it seems a little bit strange to think that our views have been warped by those tools. Warping suggests that there is some other, more accurate view that we would have come to either without mathematical or computational tools or with a set of formal tools with a substantially different character. There is no way in principle to exclude such a possibility, but it could be that we have the tools we have because they harmonize with the capabilities of the human mind for scientific understanding. That is,
although substantially different tools might be better suited to the phenomena under investigation, the results achieved with those tools might not be humanly appreciable. The views that have emerged from using our present tools might be far off the mark, but they might be the only views that we are capable of.

Perhaps a more interesting statement can be made if the question is interpreted as posing a conflict between the views that we as computational linguists have come to, guided by our present practical and formal understanding of what constitutes a reasonable computation, and the views that theoretical linguists, philosophers, and others similarly unconstrained by concrete computation, might hold. Historically, computational grammars have represented a mixture of intuitions about the significant structural generalizations of language and intuitions about what can be parsed efficiently, given a particular implementation that the grammar writer had in the back of his or her mind. This is certainly true of my own work on some of the early ATN grammars. Along with many others, I felt an often unconscious pressure to move forward along a given computational path as long as possible before throwing my grammatical fate to the parser's general nondeterministic choice mechanisms, even though this usually meant that register contents had to be manipulated in linguistically unjustified ways. For example, the standard ATN account of passive sentences used register operations to avoid backtracking that would re-analyze the NP that was initially parsed as an active subject. However, in so doing, the grammar confused the notions of surface and deep subjects, and lost the ability to express generalizations concerning, for example, passive tag questions. In hindsight, I consider that my early views were "warped" by both the ATN formalism, with its powerful register operations, and my understanding of the particular top-down, left-to-right underlying parsing algorithm. As I developed the more sophisticated model of parsing embodied in my General Syntactic Processor, I realized that there was a systematic, non-grammatical way of holding on to functionally mis-assigned constituent structures. Freed from worrying about exponential constituent structure nondeterminism, it became possible to restrict and simplify the ATN's register operations and, ultimately, to give them a non-procedural, algebraic interpretation. The result is a new grammatical formalism, Lexical-Functional Grammar (Kaplan & Bresnan, in press), a formalism that admits a wider class of efficient computational implementations than the ATN formalism just because the grammar itself makes fewer computational commitments. Moreover, it is a formalism that provides for the natural statement of many language-particular and universal generalizations. It also seems to be a formalism that facilitates cooperation between linguists and computational linguists, despite their differing theoretical and methodological biases. Just as we have been warped by our computational mechanisms, linguists have been warped by their formal tools, particularly the transformational formalism. The convergence represented by Lexical-Functional Grammar is heartening in that it suggests that imperfect tools and understanding can and will evolve into better tools and deeper insights.
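The contrast Kaplan draws can be suggested with a toy sketch. It is not the Lexical-Functional Grammar notation of Kaplan & Bresnan; the dictionary representation and the rule below are assumptions made only for illustration. The point is that the active/passive relation is stated declaratively over grammatical functions, rather than procedurally through register manipulations tied to a particular parsing order, so surface and deep subjects are never conflated.

```python
# Schematic lexical form for an active transitive verb (hypothetical format).
ACTIVE_KICK = {"PRED": "kick", "SUBJ": "agent", "OBJ": "patient"}

def passivize(lexical_form):
    """Lexical redundancy rule stated over functions: the object becomes the
    subject, and the subject is demoted to an optional by-phrase."""
    passive = {"PRED": lexical_form["PRED"] + " (passive)"}
    passive["SUBJ"] = lexical_form["OBJ"]       # deep object surfaces as subject
    passive["BY-OBJ"] = lexical_form["SUBJ"]    # deep subject optionally in a by-phrase
    return passive

print(passivize(ACTIVE_KICK))
# {'PRED': 'kick (passive)', 'SUBJ': 'patient', 'BY-OBJ': 'agent'}
```

Because nothing here refers to the order in which a parser encounters constituents, any of several parsing (or generation) schemes can use the same statement, which is the sense in which the grammar "makes fewer computational commitments."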
3. The Interactions.

As indicated above, I think computational grammars have been influenced by the algorithms that we expect to apply them with. While difficult to weed out, that influence is not a theoretical or practical necessity. By reducing and eliminating the computational commitments of our grammatical formalism, as we have done with Lexical-Functional Grammar, it is possible to devise a variety of different parsing schemes. By comparing and contrasting their behavior with different grammars and sentences, we can begin to develop a deeper understanding of the way computational resources depend on properties of grammars, strings, and algorithms. This understanding is essential both to practical implementations and also to psycholinguistic modelling. Furthermore, if a formalism allows grammars to be written as an abstract characterization of string-structure correspondences, the grammar should be indifferent as to recognition or generation. We should be able to implement feasible generators as well as parsers, and again, shed light on the interdependencies of grammars and grammatical processing.

Let me conclude with a few comments about the psychological validity of grammars and parsing algorithms. To the extent that a grammar correctly models a native speaker's linguistic competence, or, less tendentiously, the set of meta-linguistic judgments he is able to make, then that grammar has a certain psychological "validity". It becomes much more interesting, however, if it can also be embedded in a psychologically accurate model of speaking and comprehending. Not all competence grammars will meet this additional requirement, but I have the optimistic belief that such a grammar will eventually be found. It is also possible to find psychological validation for a parsing algorithm in the absence of a particular grammar. One could in principle adduce evidence to the effect that the architecture of the parser, the structuring of its memory and operations, corresponds point by point to well-established cognitive mechanisms. As a research strategy for arriving at a psychologically valid model of comprehension, it is much more reasonable to develop linguistically justified grammars and computationally motivated parsing algorithms in a collaborative effort. A model with such independently motivated yet mutually compatible knowledge and process components is much more likely to result in an explanatory account of the mechanisms underlying human linguistic abilities.

References

Kaplan, R. & Bresnan, J. Lexical-functional grammar: A formal system for grammatical representation. In J. Bresnan (ed.), The mental representation of grammatical relations. Cambridge: MIT Press, in press.

Martin, W., Church, K., & Ramesh, P. Paper presented to the Symposium on Modelling Human Parsing Strategies, University of Texas at Austin, 1981.

Woods, W., Kaplan, R., & Nash-Webber, B. The Lunar sciences natural language information system. Cambridge: Bolt Beranek and Newman, Report 2378, 1972.
PERSPECTIVES ON PARSING ISSUES

Christopher K. Riesbeck
Yale University

COMPUTATIONAL PERSPECTIVE

IS IT USEFUL TO DISTINGUISH PARSING FROM INTERPRETATION?

Since most of this position paper will be attacking the separation of parsing from interpretation, let me first make it clear that I do believe in syntactic knowledge. In this I am more conservative than other researchers in interpretation at Berkeley, Carnegie-Mellon, Columbia, the universities of Connecticut and Maryland, and Yale. But believing in syntactic knowledge is not the same as believing in parsers! The search for a way to assign a syntactic structure to a sentence largely independent of the meaning of that sentence has led to a terrible misdirection of labor. And this effect has been felt on both sides of the fence. We find ourselves looking for ways to reduce interaction between syntax and semantics as much as possible. How far can we drive a purely syntactic (semantic) analyzer, without sneaking over into the enemy camp? How well can we disguise syntax (semantics) as semantics (syntax)? How narrow a pipe between the two can we get away with? What a waste of time, when we should be starting with bodies of texts, considering the total language analysis picture, and looking for what kinds of knowledge need to interact to understand those texts.

If our intent in overextending our theories was to test their muscle, then I would have no qualms. Pushing a mechanism down a blind alley is an important way to study its weaknesses. But I really can't accept this Popperian view of modern computational linguistics. Mechanisms are not driven beyond their limits to find those limits, but rather to grab territory from the other side. The underlying premise is "If our mechanism X can sometimes do task A, then there is no need for someone else's mechanism Y." Occam's razor is used with murderous intent.

Furthermore, the debate over whether parsers make sense has drastically reduced interaction between researchers. Each side sees the other as avoiding fundamental issues, and so the results from the other side always seem to be beside the point. For example, when Mitch Marcus explains some grammatical constraint as syntactic processing constraints, he doesn't answer any of the problems I'm faced with. And I'm sure Mitch has no need for frame-based, domain-driven partial language analysis techniques.

This situation has not arisen because we have been forced to specialize. We simply don't know enough to qualify for an information explosion yet. Computational linguistics doesn't have hundreds of journals in dozens of languages. It's a young field with only a handful of people working in it. Nor is it the case that we don't have things to say to each other. But -- and here's the rub -- some of the most useful things that each of us knows are the things that we don't dare tell. By that I mean that each of us knows where our theories fall apart, where we have to kludge the programs, fudge the inputs, or wince at the outputs. That kind of information could be invaluable for suggesting to the others where to focus their attentions. Unfortunately, even if we became brave enough to talk about, even emphasize, where we're having problems, the odds are low that we would consider acceptable what someone else proposes as a solution.

IS SIMULATION OF HUMAN PROCESSING IMPORTANT?

Yes, very much so, even if all you are interested in is a good computer program.
The reason why was neatly captured in Principles of Artificial Intelligence: "language has evolved as a communication medium between intelligent beings" (Nilsson, p. 2). That is, natural language usage depends on the fact that certain things can be left ambiguous, left vague, or just left out, because the hearer knows almost as much as the speaker. Natural language has been finely tuned to the communicative needs of human beings. We may have to adapt to the limitations of our ears and our vocal chords, but we have otherwise been the masters of our language. This is true even if there is an innate universal grammar (which I don't believe in). A universal grammar applies few constraints to our use of ellipsis, ambiguity, anaphora, and all the other aspects of language that make language an efficient means for information transfer, and a pain for the programmer.

Because language has been fitted to what we do best, I believe it's improbable that there exist processes very unlike what people use to deal with it. Therefore, while I have no intention of trying to model reaction time data points, I do find human behavior important for two kinds of information. First, what do people do well, how do they do it, and how does language use depend on it? Second, what do people do poorly, and how does language use get around it?

The question "How can we know what human processing is really like?" is a non-issue. We don't have to know what human processing is really like. But if people can understand texts that leave out crucial background facts, then our programs have to be able to infer those facts. If people have trouble understanding information phrased in certain ways, then our programs have to phrase it in ways they can understand. At some level of description, our programs will have to be "doing what people do," i.e., filling in certain kinds of blanks, leaving out certain kinds of redundancies, and so on. But there is no reason for computational linguists to worry about how deeply their programs correspond to human processes.

WILL PARALLEL PROCESSING CHANGE THINGS?

People have been predicting (and waiting for) great benefits from parallelism for some time. Personally, I believe that most of the benefits will come in the area of interpretation, where large-scale memory searches, such as Scott Fahlman has been worrying about, are involved. And, if anything, improvements in the use of semantics will decrease the attractiveness of syntactic parsing. But I also think that there are not that many gains to be had from parallel processing. Hash codings, discrimination trees, and so on, already yield reasonably constant speeds for looking up data. It is an inconvenience to have to deal with such things, but not an insurmountable obstacle. Our real problems at the moment are how to get our systems to make decisions, such as "Is the question 'How many times has John asked you for money?' rhetorical or not?" We are limited not by the number of processors, but by not knowing how to do the job.

THE LINGUISTIC PERSPECTIVE

HAVE OUR TOOLS AFFECTED US?

Yes, and adversely. To partially contradict my statements in the last paragraph, we've been overly concerned with how to do things with existing hardware and software. And we've been too impressed by the success computer science has had with syntax-driven compilation of programming languages.
It is certainly true that work on grammars, parsers, code generators, and so on, has changed compiler generation from massive multi-man-year endeavors to student course projects. If compiler technology has benefited so much from syntactic parsers, why can't computational linguistics? The problem here is that the technology has not done what people think it has. It has allowed us to develop modern, well-structured, task-oriented languages, but it has not given us natural ones. Anyone who has had to teach an introductory programming course knows that. High-level languages, though easier to learn than machine language, are very different from human languages, such as English or Chinese. Programming languages, to readjust Nilsson's quote, are developed for communication between morons. All the useful features of language, such as ellipsis and ambiguity, have to be eliminated in order to use the technology of syntax-driven parsing. Compilers do not point the way for computational linguistics. They show instead what we get if we restrict ourselves to simplistic methods.

DO WE PARSE CONTEXT-FREELY?

My working assumption is that the syntactic knowledge used in comprehension is at most context-free and probably a lot less, because of memory limitations. This is mostly a result of semantic heuristics taking over when constructions become too complex for our cognitive chunking capacities. But this is not a critical assumption for me.

INTERACTIONS

Since I don't believe in the pure grammatical approach, I have to replace this last set of questions with questions about the relationship between our knowledge (linguistic and otherwise) and the procedures for applying it. Fortunately, the questions still make sense after this substitution.

DO OUR ALGORITHMS AFFECT OUR KNOWLEDGE STRUCTURES?

Of course. In fact, it is often hard to decide whether some feature of a system is a knowledge structure or a procedural factor. For example, is linear search a result of data structures or procedure designs?

CAN WE TEST ALGORITHMS/KNOWLEDGE STRUCTURES SEPARATELY?

We do indeed try experiments based on the shape of knowledge structures, independently of how they are used (but I think that most such experiments have been inconclusive). I'm not sure what it would mean, however, for a procedure to be validated independently of the knowledge structures it works with, since until the knowledge structures were right, you couldn't tell if the procedure was doing the right thing or not.

WHY DO WE SEPARATE RECOGNITION AND PRODUCTION?

If I were trying to deal with this question on grammatical grounds, I wouldn't know what it meant. Grammars are not processes and hence have no direction. They are abstract characterizations of the set of well-formed strings. From certain classes of grammars
The generators are full of methods for eliminating contextual items, picking appropriate descriptors, choosing pronouns, and so on. Each has a very different set of problems to deal with. On the other hand, our interpreters and generators do share what we think is the important stuff, the world knowledge, without which all the other processing wouldn't be worth a partridge in a parse tree. The world knowledge says what makes sense in onderstandins and what is important to talk about. Part of the separation of interpretation and generation occurs when the programs for each are developed by different people. This Tesults in unrealistic systems that write what they can't read and read what they can't write. Someday we'll have a good model of how knowledge the interpreter gains about understanding a new word is converted to knowledge the generator can use to validly pick that word in production. This viii have account for how we can interpret words without being ready to use them. For example, from a sentence like "The car swerved off the road and struck a bridge abutment," we can infer that an abutment is a noun describing some kind of outdoor physical object, attachable to a bridge. This would be enough for interpretation, but obviously the generator will need co know more about what an abutment is before it could confidently say "Oh, look at the cute abutment!" A final point on sharing. There are two standard arguments for sharing at least gr=mmatical information. One is to save space, and the other is to maintain consistency. Without claiming that sharing doesn't occur, I would like to point out that both arguments are very weak. First, there is really not a lot of grammatical knowledge, compared against all the other knowledge we have about the world, so not that much space would be saved if sharing occurred. Second, if the generator derives it's linguistic knowledge from the parser's data base, then we'll have as much consistency as we could measure in people anyway. REFERENCE Nilsson, H. (1980). Princinle~ of Artificia~ Intellisence. Tioga Publishing Co, Palo Alto, California.
PRESUPPOSITION AND IMPLICATURE IN MODEL-THEORETIC PRAGMATICS

Douglas B. Moran
Oregon State University

Model-theoretic pragmatics is an attempt to provide a formal description of the pragmatics of natural language as effects arising from using model-theoretic semantics in a dynamic environment. The pragmatic phenomena considered here have been variously labeled presupposition [1] and conventional implicature [6].

The models used in traditional model-theoretic semantics provide a complete and static representation of knowledge about the world. However, this is not the environment in which language is used. Language is used in a dynamic environment - the participants have incomplete knowledge of the world and the understanding of a sentence can add to the knowledge of the listener. A formalism which allows models to contain incomplete knowledge and to which knowledge can be added has been developed [2, 3, 12].

In model-theoretic semantics, the relationships between words are not inherent in the structure of the model. These relationships between words are given by logical formulas, called meaning postulates. In traditional model-theoretic semantics (with static models), these meaning postulates can be evaluated when the model is chosen to insure that it is a reasonable model for the language. In dynamic model-theoretic semantics, these relationships must be verified as information is added to the model to insure that the new information does not violate any of these relationships. This verification process may cause the addition of more information to the model.

The processing of the formula representing a sentence adds to the dynamic model the information given as the assertion of the sentence - the primary information of the sentence - if it is not already in the model. The addition of this primary information can cause - through the verification of a meaning postulate - the addition of secondary information. This secondary information is not part of the assertion of the sentence, but is needed in the processing of the assertion. This characterization of secondary information is very similar to the classical definition of presupposition [1].

This approach displays different behavior for the three different cases of information contained in the model. In the first case, neither the assertion nor the presuppositions and implicatures are known. The attempt to add the assertion activates the verification of the meaning postulates giving the presuppositions and implicatures, thus causing that secondary information to be added to the model as a prerequisite to the addition of the primary information. In the second case, the presuppositions and implicatures are known (either true or false) and the assertion is unknown. The attempt to add the primary information again activates the verification of the meaning postulates. However, in this case, the presuppositions and implicatures are simply being checked - the verification process is not interrupted to add this secondary information to the model. This case corresponds to what Grice and others have termed to be a well-structured conversation. In the third case, the assertion of the sentence is known to be true or false. Since no new information needs to be added to the model to process the semantic representation of the sentence, the verification of meaning postulates is not activated. The presuppositions and implicatures need not be verified because they had to have been verified before the assertion of the sentence or its negation could have been entered into the model.
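The three cases can be sketched in a few lines. The representation below (facts as a set of tuples, meaning postulates as simple trigger/consequence pairs) is a hypothetical simplification, not the formalism of [2, 3, 12]; it is meant only to show how adding primary information either drags secondary information into the partial model (case one), merely checks it (case two), or leaves the postulates untouched when the assertion is already known (case three).

```python
# Hypothetical partial model: the facts currently known to hold.
model = {("king_of", "France", "exists")}   # seed as desired; empty set gives case one

# Hypothetical meaning postulates: asserting the trigger requires the consequences.
POSTULATES = [
    (("bald", "king_of_France"), [("king_of", "France", "exists")]),
]

def add_assertion(model, fact):
    """Add the primary information of a sentence to the dynamic model."""
    if fact in model:
        return "case 3: assertion already known; postulates not activated"
    added_secondary = []
    for trigger, consequences in POSTULATES:
        if trigger == fact:
            for c in consequences:
                if c not in model:      # case 1: secondary information gets added
                    model.add(c)
                    added_secondary.append(c)
                # otherwise (case 2) the presupposition is merely checked
    model.add(fact)                      # finally add the primary information
    return added_secondary or "case 2: presuppositions checked, nothing added"

print(add_assertion(model, ("bald", "king_of_France")))
```

With the seed fact present the call reports case two; with an empty model it adds the presupposition first (case one); a repeated call reports case three.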
The presuppositions and implicatures of subordinate clauses do not necessarily become presuppositions and implicatures of the whole sentence. The problem of when and how such presuppositions become those of the matrix sentence is known as the projection problem [13]. The system described here provides a simple and motivated solution to the projection problem. The models used in this system are partial models; a clause which has a presupposition or implicature which is not true has an undefinable denotation. An intensional logic [11] is used to provide the semantic representations of sentences, and the intensionality establishes transparent and opaque contexts (holes and plugs [7]) which determine whether or not an undefinable value indicating the failure of a presupposition for a subordinate clause can propagate and force the matrix sentence to have an undefinable value. In the case where the presuppositions and implicatures are projected up from the subordinate clause to the matrix sentence, undefinable values are allowed to propagate, and thus a failure of a projected presupposition or implicature affects not only the subordinate clause in which it originates, but also the matrix sentence. The determination of the projection characteristics is claimed to be an integral part of the meanings of words and not a separable feature.

There are two other major attempts to handle presuppositions and implicatures in a model-theoretic framework. Karttunen and Peters [8, 9, 10] produce a formula giving the conventional implicatures of a sentence from its syntactic structure. Gazdar [4, 5] accumulates sets of propositions, cancelling out those which are incompatible. Moran [12] compares the approach taken here to that of Karttunen and Peters and shows how this approach is simpler and better motivated. Gazdar's system is broader, but this approach is shown to correctly handle sentences which are incorrectly handled by Gazdar, and ways are suggested to expand the coverage of this system.

REFERENCES

[1] G. Frege (1892), "On sense and reference", in P. Geach and M. Black (eds.) (1966), Translations from the Philosophical Writings of Gottlob Frege, Blackwell, Oxford, 56-78.

[2] J. Friedman, D. Moran, and D. Warren (1978), "Explicit finite intensional models for PTQ", American Journal of Computational Linguistics, microfiche 74, 23-96.

[3] J. Friedman, D. Moran and D. Warren (1979), "Dynamic Interpretations", Computer Studies in Formal Linguistics N-16, Department of Computer and Communication Sciences, The University of Michigan; earlier version presented to the October 1978 Sloan Foundation Workshop on Formal Semantics at Stanford University.

[4] G. Gazdar (1979), Pragmatics: Implicature, Presupposition, and Logical Form, Academic Press, New York.

[5] G. Gazdar (1979), "A solution to the projection problem", in Oh and Dinneen (eds.), 57-89.

[6] H. Grice (1975), "Logic and conversation", in P. Cole and J. Morgan (eds.) Syntax and Semantics 3: Speech Acts, Academic Press, New York, 41-58.
[7] L. Karttunen (1973), "Presuppositions of compound sentences", Linguistic Inquiry, 4, 169-193.

[8] L. Karttunen and S. Peters (1975), "Conventional implicature in Montague Grammar", Berkeley Linguistics Society, 1, 266-278.

[9] L. Karttunen and S. Peters (1976), "What indirect questions conventionally implicate", Chicago Linguistic Society, 12, 351-368.

[10] L. Karttunen and S. Peters (1979), "Conventional implicatures", in Oh and Dinneen (eds.), 1-56.

[11] R. Montague (1973), "The proper treatment of quantification in ordinary English", in J. Hintikka, J. Moravcsik and P. Suppes (eds.) Approaches to Natural Language, D. Reidel, Dordrecht, 221-242; reprinted in R. Montague (1974), Formal Philosophy: Selected Papers of Richard Montague, edited and with an introduction by Richmond Thomason, Yale University Press, 247-270.

[12] D. Moran (1980), Model-Theoretic Pragmatics: Dynamic Models and an Application to Presupposition and Implicature, unpublished Ph.D. dissertation, Department of Computer and Communication Sciences, The University of Michigan.

[13] J. Morgan (1969), "On the treatment of presupposition in transformational grammar", Chicago Linguistic Society, 5, 167-177.

[14] C.-K. Oh and D. Dinneen (eds.), Syntax and Semantics 11: Presupposition, Academic Press, New York.
SOME COMPUTATIONAL ASPECTS OF SITUATION SEMANTICS

Jon Barwise
Philosophy Department, Stanford University, Stanford, California
Departments of Mathematics and Computer Science, University of Wisconsin, Madison, Wisconsin

Can a realist model theory of natural language be computationally plausible? Or, to put it another way, is the view of linguistic meaning as a relation between expressions of a natural language and things (objects, properties, etc.) in the world, as opposed to a relation between expressions and procedures in the head, consistent with a computational approach to understanding natural language? The model theorist must either claim that the answer is yes, or be willing to admit that humans transcend the computationally feasible in their use of language.

Until recently the only model theory of natural language that was at all well developed was Montague Grammar. Unfortunately, it was based on the primitive notion of "possible world" and so was not a realist theory, unless you are prepared to grant that all possible worlds are real. Montague Grammar is also computationally intractable, for reasons to be discussed below.

John Perry and I have developed a somewhat different approach to the model theory of natural language, a theory we call "Situation Semantics". Since one of my own motivations in the early days of this project was to use the insights of generalized recursion theory to find a computationally plausible alternative to Montague Grammar, it seems fitting to give a progress report here.

1. MODEL-THEORETIC SEMANTICS "VERSUS" PROCEDURAL SEMANTICS

First, however, I can't resist putting my two cents worth into this continuing discussion. Procedural semantics starts from the observation that there is something computational about our understanding of natural language. This is obviously correct. Where some go astray, though, is in trying to identify the meaning of an expression with some sort of program run in the head. But programs are the sorts of things to HAVE meanings, not to BE meanings. A meaningful program sets up some sort of relationship between things - perhaps a function from numbers to numbers, perhaps something much more sophisticated. But it is that relation which is its meaning, not some other program.

The situation is analogous in the case of natural language. It is the relationships between things in the world that a language allows us to express that make a language meaningful. It is these relationships that are identified with the meanings of the expressions in model theory. The meaningful expressions are procedures that define these relations that are their meanings. At least this is the view that Perry and I take in situation semantics.

With its emphasis on situations and events, situation semantics shares some perspectives with work in artificial intelligence on representing knowledge and action (e.g., McCarthy and Hayes, 1969), but it differs in some crucial respects. It is a mathematical theory of linguistic meaning, one that replaces the view of the connection between language and the world at the heart of Tarski-style model theory with one much more like that found in J.L. Austin's "Truth". For another, it takes seriously the syntactic structures of natural language, directly interpreting them without assuming an intermediary level of "logical form".
2. A COMPUTATION OBSTRUCTION AT THE CORE OF FIRST-ORDER LOGIC

The standard model-theory for first-order logic, and with it the derivative model-theory of indices ("possible worlds") used in Montague Grammar, is based on Frege's supposition that the reference of a sentence could only be taken as a truth value; that all else specific to the sentence is lost at the level of reference. As Quine has seen most clearly, the resulting view of semantics is one where to speak of a part of the world, as in (1), is to speak of the whole world and of all things in the world.

(1) The dog with the red collar belongs to my son.

There is a philosophical position that grows out of this view of logic, but it is not a practical one for those who would implement the resulting model-theory as a theory of natural language. Any treatment of (1) that involves a universal quantification over all objects in the domain of discourse is doomed by facts of ordinary discourse, e.g., the fact that I can make a statement like (1) in a situation to describe another situation without making any statement at all about other dogs that come up later in a conversation, let alone about the dogs of Tibet.

Logicians have been all too ready to dismiss such philosophical scruples as irrelevant to our task -- especially shortsighted since the same problem is well known to have been an obstacle in developing recursion theory, both ordinary recursion theory and the generalizations to other domains like the functions of finite type. We forget that only in 1938, several years after his initial work in recursion theory, did Kleene introduce the class of PARTIAL recursive functions in order to prove the famous Recursion Theorem. We tend to overlook the significance of this move, from total to partial functions, until its importance is brought into focus in other contexts. This is just what happened when Kleene developed his recursion theory for functions of finite type. His initial formulation restricted attention to total functions, total functions of total functions, etc. Two very important principles fail in the resulting theory - the Substitution Theorem and the First Recursion Theorem. This theory has been reworked by Platek (1963), Moschovakis (1975), and by Kleene (1978, 1980) using partial functions, partial functions of partial functions, etc., as the objects over which computations take place, imposing (in one way or another) the following constraint on all objects F of the theory:

Persistence of Computations: If s is a partial function and F(s) is defined, then F(s') = F(s) for every extension s' of s.

In other words, it should not be possible to invalidate a computation that F(s) = a by simply adding further information to s. To put it yet another way, computations involving partial functions s should only be able to use positive information about s, not information of the form that s is undefined at this or that argument. To put it yet another way, F should be continuous in the topology of partial information.

Computationally, we are always dealing with partial information and must insure persistence (continuity) of computations from it. But this is just what blocks a straightforward implementation of the standard model-theory -- the wholistic view of the world which it is committed to, based on Frege's initial supposition.
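The constraint can be made concrete with a tiny sketch. Representing partial functions as finite dictionaries is an assumption of the sketch, not part of Barwise's formalism; the point is only that an operation which consults positive information is persistent under extension, while one that exploits undefinedness is not.

```python
def F(s):
    """Uses only positive information: is anyone known to talk in s?"""
    return True if any(v == 1 for v in s.values()) else None   # None = undefined so far

s  = {"Jackie": 1}                # partial: says nothing about Rex
s2 = {"Jackie": 1, "Rex": 0}      # an extension of s

assert F(s) is not None and F(s2) == F(s)   # a defined result persists under extension

def G(s):
    """Violates persistence: relies on what s leaves undefined."""
    return 1 if "Rex" not in s else 0

print(G(s), G(s2))   # 1 0 -- adding information invalidates the earlier 'computation'
```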
When one shifts from first-order model-theory to the index or "possible world" semantics used in Montague's semantics for natural language, the wholistic view must be carried to heroic lengths. For index semantics must embrace (as David Lewis does) the claim that talk about a particular actual situation talks indirectly not just about everything which actually exists, but about all possible objects and all possible worlds. And it is just this point that raises serious difficulties for Joyce Friedman and her co-workers in their attempt to implement Montague Grammar in a working system (Friedman and Warren, 1978). The problem is that the basic formalization of possible world semantics is incompatible with the limitations imposed on us by partial information.

Let me illustrate the problem that arises in a very simple instance. In possible world semantics, the meaning of a word like 'talk' is a total function from the set I of ALL possible worlds to the set of ALL TOTAL functions from the set A of ALL possible individuals to the truth values 0, 1. The intuition is that b talks in 'world' i if meaning('talk')(i)(b) = 1. It is built into the formalism that each world contains TOTAL information about the extensions of all words and expressions of the language. The meaning of an adverb like 'rapidly' is a total function from such functions (from I into Fun(A,2)) to other such functions. Simple arithmetic shows that even if there are only 10 individuals and 5 possible worlds, there are (2 exp 50) exp (2 exp 50) such functions -- there are 2 exp 10 total functions from A to {0,1}, hence (2 exp 10) exp 5 = 2 exp 50 functions from I into Fun(A,2), and an adverb meaning maps this set into itself -- and the specification of even one is completely out of the question.

The same sorts of problems come up when one wants to study the actual model-theory that goes with Montague Semantics, as in Gallin's book. When one specifies the notion of a Henkin model of intensional logic, it must be done in a totally "impredicative" way, since what constitutes an object at any one type depends on what the objects are of other types.

For some time I toyed with the idea of giving a semantics for Montague's logic via partial functions, but attempts convinced me that the basic intuition behind possible worlds is really inconsistent with the constraints placed on us by partial information. At the same time work on the semantics of perception statements led me away from possible worlds, while reinforcing my conviction that it was crucial to represent partial information about the world around us, information present in the perception of the scenes before us and of the situations in which we find ourselves all the time.

3. ACTUAL SITUATIONS AND SITUATION-TYPES

The world we perceive and talk about consists not just of objects, nor even of just objects, properties and relations, but of objects having properties and standing in various relations to one another; that is, we perceive and talk about various types of situations from the perspective of other situations. In situation semantics the meaning of a sentence is a relation between various types of situations, types of discourse situations on the one hand and types of "subject matter" situations on the other.

We represent various types of situations abstractly as PARTIAL functions from relations and objects to 0 and 1. For example, the type
(It is important to realize that s is taken to be a function from objects, properties and relations to 0,I, not from words to 0,Io) A typical sltuatlon--type representing a discourse situation might be given by d(speak, Bill) = I d(father, Bill, Alfred) - i d(dog, Jackle) " I representing the type of discourse situation where Bill, the father of Alfred, is speaking and where there is a single dog, Jackie, present. The meaning of (2) The dog belongs to my son is a relation (or ,-tlti-valued function) R between various types of discourse situations a~d other types of situations. Applied to the d above R will have various values R(d) including s" given below, but not including the s from above: s'(belong, Jackie, Alfred) m 1 s'(tall, Alfred) = i. Thus if Bill were to use this sentence in a situation of type d, and if s, not s', represents the true state of affairs, then what Bill said would be false. Lf s" represents the true state of affairs, then what he said would be true. Expressions of a language heve a fixed llngulstlc meanlng, Indepe-~enC of the discourse situation. The same sentence (2) can be used in different types of discourse situations to express different propositions. Thus, we can treat the linguistic meaning of an expression as a function from discourse si~uatlon types to other complexes of objects a -a properties. Application of thlS function to a partioular discourse situation type we call the interpretation of the expression. In particular, the interpretation of a sentence llke (2) in a discourse situation type llke d iS a set of various situation types, including s* shove, but not including s. This set of types is called the proposition expressed by (2). Various syntactic categories of natural language will have various sorts of interpretations. Verb phrases, e.g., will be interpreted by relations between objects and situation types. Definite descriptions will he interpreted as functions from situation types to individuals. The difference between referential and attributive uses of definite descriptions will correspond to different ways of using such a function, evaluation at s particular accessible situation, or to constrain other types within its domain. ii0 4. A FRAGMENT OF ENGLISH INVOLVING DEFINITE AND INDEFINITE DESCRIPTIONS At my talk I will illustrate the ideas discussed above by presenting a grammar and formal semantics for a fragment of English that embodies definite an d indefinite descriptions, restrictive and nonrestrictive relative clauses, and indexlcals llke "I", "you", "this" and "that". The aim is to have a semantic account that does not go through any sort of flrst-order "logical form", but operates off of the syntactic rules of English. The fragment incorporates both referential and attributive uses of descriptions. The basic idea is that descriptions are interpreted as functions from situation types to individuals, restrictive relative clauses are interpreted as functions from situation types to sub-types, and the interpretation of the whole is to be the composition of the functions interpreting the parts. Thus, the interpretations of "the", "dog", and "that talks" are given by the following three functions, respectively: f(X) = the unique element of X if there is one, - undefined, otherwise. g(s) - the set of a such that s(dos, a)-I h(s) - the "restriction' of s to the set of a such that s(talk,a)-l. The interpretation of "the dog that talks" is Just the composition of these three functions. From a logical point of view, this is quite interesting. 
From a logical point of view, this is quite interesting. In first-order logic, the meaning of "the dog that talks" has to be built up from the meanings of "the" and "dog that talks", not from the meanings of "the dog" and "that talks". However, in situation semantics, since composition of functions is associative, we can combine the meanings of these expressions either way: f.(g.h) = (f.g).h. Thus, our semantic analysis is compatible with both of the syntactic structures argued for in the linguistic literature, the Det-Nom analysis and the NP-R analysis.

One point that comes up in Situation Semantics that might interest people at this meeting is the reinterpretation of compositionality that it forces on one, more of a top-down than a bottom-up compositionality. This makes it much more computationally tractable, since it allows us to work with a much smaller amount of information. Unfortunately, a full discussion of this point is beyond the scope of such a small paper. Another important point not discussed is the constraint placed by the requirement of persistence discussed in section 2. It forces us to introduce space-time locations for the analysis of attributive uses of definite descriptions, locations that are also needed for the semantics of tense, aspect and noun phrases like "every man", "neither dog", and the like.

5. CONCLUSION

The main point of this paper has been to alert the readers to a perspective in the model theory of natural language which they might well find interesting and useful. Indeed, they may well find that it is one that they have in many ways adopted already for other reasons.

REFERENCES

1. J.L. Austin, "Truth", Philosophical Papers, Oxford, 1961, 117-134.
2. J. Barwise, "Scenes and other situations", J. of Philosophy, to appear, 1981.
3. J. Barwise and J. Perry, "Semantic innocence and uncompromising situations", Midwest Studies in Philosophy VI, to appear 1981.
4. J. Barwise and J. Perry, Situation Semantics: A Mathematical Theory of Linguistic Meaning, book in preparation.
5. J. Friedman and D.S. Warren, "A parsing method for Montague Grammars," Linguistics and Philosophy, 2 (1978), 347-372.
6. S.C. Kleene, "Recursive functionals and quantifiers of finite type revisited I", Generalized Recursion Theory II, North Holland, 1978, 185-222; and part II in The Kleene Symposium, North Holland, 1980, 1-31.
7. J. McCarthy, "Programs with common sense", Semantic Information Processing, (Minsky, ed.), M.I.T., 1968, 403-418.
8. R. Montague, "Universal Grammar", Theoria, 36 (1970), 373-398.
9. Y.N. Moschovakis, "On the basic notions in the theory of induction", Logic, Foundations of Mathematics and Computability Theory, (Butts and Hintikka, eds.), Reidel, 1976, 207-236.
10. J. Perry, "Perception, action and the structure of believing", to appear.
11. R. Platek, "Foundations of Recursion Theory", Ph.D. Thesis, Stanford University, 1963.
A SITUATION SEMANTICS APPROACH TO THE ANALYSIS OF SPEECH ACTS 1

David Andreoff Evans
Stanford University

1. INTRODUCTION

During the past two decades, much work in linguistics has focused on sentences as minimal units of communication, and the project of rigorously characterizing the structure of sentences in natural language has met with some success. Not surprisingly, however, sentence grammars have contributed little to the analysis of discourse. Human discourse consists not just of words in sequences, but of words in sequences directed by a speaker to an addressee, used to represent situations and to reveal intentions. Only when the addressee has apprehended both these aspects of the message communicated can the message be interpreted.

The analysis of discourse that emerges from Austin (1962), grounded in a theory of action, takes this view as central, and the concept of the speech act follows naturally. An utterance may have a conventional meaning, but the interpretation of the actual meaning of the utterance as it is used in discourse depends on evaluating the utterance in the context of the set of intentions which represent the illocutionary mode of its presentation. Put another way (paraphrasing Searle (1975:3)), the speaker's intention is to produce understanding, consisting of the knowledge of conditions on the speech act being performed.

If we are to take seriously Searle's (1969:16) assertion that "the unit of linguistic communication is not ... the symbol, word, or sentence, ... but rather the production or the issuance of the symbol, word or sentence in the performance of the speech act," then we should be able to find some formal method of characterizing speech acts in discourse. Unfortunately, linguists have too often employed speech acts as taxonomic conveniences, as in Dore (1977), Labov and Fanshel (1977), and elsewhere, without attempting to give anything more than a descriptive definition. Only in the artificial intelligence literature, notably in the work of Allen, Bruce, Cohen, and Perrault (e.g. Allen (1979), Bruce and Newman (1978), Cohen and Perrault (1979), Cohen (1978), Perrault, Allen, and Cohen (1978)), does one find an attempt to define speech acts in terms of more general processes, here specifically, operations on planning networks.

2. TYPES OF SPEECH ACTS

A great problem for the computational linguist attempting to find a formal representation for speech acts is that the set of speech acts does not map uniformly onto the set of sentences. In terms of "goodness of fit" with sentences, several types of speech acts can be described. One type, the so-called performatives, including ASSERT, DECLARE, etc., can be effected in a single utterance. But even some of these can undergo further decomposition. For example, assuming that the usual felicity conditions hold (cf. Searle (1969:54ff)), both (1) and (2) below can count as an apology, though neither sentence in (2) alone has the effect which their combination achieves.

(1) I apologize for what I did.
(2) I did a terrible thing. I'm very sorry.

In (2), the first sentence contributes to the effect of an apology only to the extent that an addressee can infer that it is intended as part of an apology. The second sentence, which makes overt the expression of contrition, also expresses the sincerity which is prerequisite for a felicitous apology. But its success, too, depends on an inference by the addressee that it is intended as part of an apology.
If the addressee cannot make that inference -- because, for cxamplc, the address¢c hctieves that the speakcr is speaking sarcastically -- the effect of the apology is lost not only for the second sentence, but for the first as well. In this case, the illocutionary effect APOI,OGIZE can be regarded as supra-scmcntial, though, as in (1). appropriate single scntences can be used to achieve its effect. There are other types of speech acts, however, that cannot bc performed in single utterances, but require several or even many utterances. For example. DEFEND (as in a lawyer's action on ~half of his client), REFUTE (as in polemical argumentation) and PROVE (as in demonstraung effccm from specific causcs) cannot be cffected as pcrfnrmatives: one cannot make a refutation by uttering the words. I refute ,V, as one might make an assertion by uttering thc words, I a.~ert X. One might wonder whether these supra-utterance modes should count as speech acts. Certainly. the term "spcech act'" has ffaditionally been used in reference to single sentences or to certain classes of non-scntenciaJ expressions which have single utterance indcpcndcncc in discourse (e.g. Hello). But consider again the traditional definition, paraphrasing Scarle (1969:4gff), a speech act is the use of an utterance directed at an addreasce in the scrvicc of a set of intentions, namely, 1.) thc intention to producc a certain illocutionary effect in the addressee, 2.) the intention to produce this effect by getting the addressee to recognize the intention to produce the effect, and 3.) thc retention to produce this recognition by means of the addrcsaee's knowicdge of the rules governing the utterance. There is nothing in this characterization that requires that utterance be understood as scntencc. "ll~e crucial point is that the utterance (of whatever length) serve the set of intentions represented by 1.) - 3.). A valid speech act can bc regarded as defining an illocntionary mode which is govcrnod by conventions which constrain thc sorts of interpretations that can be givcn to utterances which occur within that mode (including our judgmcnts un their appropriateness). Thcsc convcnUons also dcfinc the conditions that must be met for thc targct cffect to bc achieved, Thus for the utterance / will be home by noon to count as a promise (and not. say. as a prediction), it must bc viewed as an utterance iasucd in the illocutionary mode of promising, wllich not only defineS ccrtain well- formcdncss conditions on the utterance itself (making statemcnt,s in the past tense -- e.g. ! war home by noon .- impossible as direct speech act promises2), but also givcs the criteria which determine whether the act is successful (including the felicity conditions, e¢.). Similarly, for a series of utterances to count as a refutation, they must be seen as operating in the illocutionary mode of rcfutation, as for example, in thc text below: (3) You have stated that 2 + 2 = 3. But take any two individual objects and any other two individual objectx and place them in a row. Then count them. say. from left to righL What do you get? Not 3 but 4. Therefore; 2 + 2 cannot equal 3. We cannot interpret any of these utterances accurately unless we recognize that each contributes to the achievement of a focused goal, viz. a refiJtadon. 
Once that intention is recognized, appropriateness and well-formedness conditions can be applied to the text; and the success of the act can be measured against the set of criteria which are relevant to refutations, including the usual felicity conditions, but also specific conditions on the production of factual evidence and the demonstration of contradiction. Following this new characterization of speech acts, yet another type can be described, operating not at the utterance level, or the supra-utterance level, but at the sub-utterance level. As an illustration of the phenomenon involved, consider the following unexceptionable utterance: (4) I told the guy at the door to watch out, but he wouldn't listen. The second reference to the guy of the first clause is made via the anaphoric pronoun he. But suppose, instead, a definite referring expression were used. Consider the following: (5) I told the guy at the door to watch out, but the person wouldn't listen. The person is a distinctly odd coreferent, and seems inappropriate 3. An examination of this context reveals that the only definite 4 referring expressions which occur felicitously are pronominal epithets, such as the idiot, the fool, etc.; descriptions which can be given an interpretation as derogatives, such as the sophomore; and expressions whose literal interpretation contributes some sense of explanation to the situation being represented -- viz. that, though warned, the guy at the door didn't heed the warning -- as in the deafmute. It can be shown that the principle involved is a speech act-like phenomenon. First, it can be noted that the choice not to use the unmarked coreferent, he, signals that the speaker has some special intention in mind. Second, following a suggestion in Bolinger (1977:7ff), it can be argued that a repeated definite description functions not only to refer but also to characterize the referent as having the sense of the definite description. Finally, it can be shown that all the acceptable definite descriptions in this context can be interpreted uniformly as offering an explanation 5 for the failure to listen expressed by the second clause. Note that the choice of coreferent in the case of the use of a definite referring expression is not, strictly speaking, lexically governed. Furthermore, the use of selectional features, as in Chomsky (1965) and more recent work on generative grammar, cannot constrain the context for such a choice. In short, the problem is one of interpretation, and appropriateness is governed by the intention being served by the choice of the referring expression. Consider, then, an utterance such as the following: (6) I told the guy at the door to watch out, but the idiot wouldn't listen. The difference between (4) and (6) is not merely one of different lexical items (he and the idiot). Rather, the use of the idiot makes (6) a more complex utterance than (4), involving an embedded speech act, namely, a characterization whose purpose is to express an attitude and thereby (indirectly) offer explanation. 3. SITUATION SEMANTICS AND DISCOURSE If speech acts or speech act-like phenomena are found at many levels of discourse, and if it is not possible to give a syntactic definition of a speech act, how can the notion of speech acts be integrated into a formal, and in particular, a computational analysis of discourse? The natural alternative to a syntactic definition is a semantic one 6,
and the approach to se, manties which offers the greatest promise in treating discourse is the situation semantics being developed at Stanford by Jon Barwise and John Perry (c£ Bat'wise (forthcoming). P, arwise and Perry (1980), Barwise and Perry (forthcoming a). and Barwise and Perry (forthcoming, b)). Briefly, this new semantics is informed by the notion that the actual world can be thought of as cunsisting of situations, which in turn consist of objects having properties and standing in relationships. Any actual situation is far too rich in detail to be captured by any finite process, so in practice,, perceptions of situations, beliefs about situations, natural language descriptions-of situations, cte.. are actually situation-typeS, which arc partial functions characterizing various types of situattons. (Cf. Barwise (198I) for a more complete discussion of this point.) [n situation semantics, scntences do not map directly to troth-values, but rather are understood as designating situation-types. Totally understanding a statement would entail that one t: able to derive a situation-type which includes all the objects, properties, and relationships represented in the statement. A series of statements in discourse can be viewed as creating, modifying, embellishing, or manipulating sots of situation-types. Some utterances invoke situation-types: some ac~ as functions taking whole situation-types as argumcnLs. Fur example, an initial act of reference coupled with some proposition about the referent can be seen as initiating the construction of a situation-type around the referent: an act of coreference` with some promotion, can be seen as adding a new property or relationship to an individual in an existing situation-type. The discourse situation, too, can be represented as a set of situation-types. initially containing at least the speaker, the addressee, and the mutual knowledge of speaker and addressee that they arc in a discourse situation. Any utterance which occurs exploits this diacouzse situation and cannot be' interpreted independently of it. The utterance itself, however, effects a cl~ange in the discourse ~ituation. as its interpretation is added. It is in representing the effect of the utterance that the theory of speech acts has application. The dynamic proecss of diseour~¢ can bc modelled as a step by sCep modification of the discourse situation, with each step taking the set of situation-types of the discourse situation, coupled with the interpretation of the utterance, to a new set of situation-types of the diseours¢ situation. There are many interesting details to this model which must be ignored in a paper of this scope, but several ob~rvations relevant to speech acts can be made, First. this model accommodates the distinction made by most speech act th¢orist.s between what a speaker says - the locutionary act -- and what a speaker intends to communicate (or means) - the illocutionary act -/. This distinction is rcpeated and coptt)red hcre in the treatment of the actual discourse as a oair of sets of situation-types. One gives the set of situation- types of the text (written or spoken) -- s t - and can be regarded as representing the Iocutionary aspect of the act. 
The other gives the set of situation-types of the diseoursc situation (including author and reader or speaker and addressee) -- s d - and can be regarded as representing the state of knowledge about the discourse -- including the information revealed by infcrring the intentions of the speaker - at the time the utterance is produced. The interpretation ot` s t relative to s d, f (<s t. Sd>), giver a new set of situation-types of the diseourse situatiun. Sd'. The illocutionary act can be thought of as difference between s d' and s d. Second. this characterization of an illocutionary act is consonant with psychological features of actual discourse, in actual interaction, what the speaker says -- the Iocutionary act - is highly volatile: the exact words of an utterance more than a few seconds past may be lost forever. What remains is the effect of those words, in particular, as composed in longer- term memory. What is remembered represents the state achieved by the discourse, and that reflects directly what the addr~r, ee has inferrred about" the speaker's intentions. Put another way. what becomes stored as memory represents what the addressee inferred about what the speaker meant by his utterance. 8 Third. one con regard the problem of interpreting the current status of the diseuurse as similar to the problem of deriving the current state in a S'l'RIPS-like system (ct'. I-'ikes and Nilsson (1971)): the correct version must be the result of the application of a series of operations, in correct order, to all previous states. The current set of situation-types of the discourse situation can be seen as representing the accumulation of the effects that have resulted from a series of discrete operations. 4. OPERATIONS ON SITUATION-TYPES There are various ways that a word or phrase can count as an operation on a situation-type. For example, an utterance or part of an utterance could (a) take a whole situation-type as an argument, or (b) introduce an object and a property, or (c) intrbduec two or more objects and a relationship, or (d) introd=~-c an object or a property or a relationship into an existing situation-type. (a) would apply to phrases like by the way, anyway, etc., which have the effect of shifung focus or "clearing the slate" for a new text fragment. Cases (b) and (c) ensure that the utterance or part of utterance, it" text initial, conuLins enough information to enable a situation-type to be derived. Case (d) accounts for those instances where a situation-type is clearly established and a single word or reference can effect a change in the situation-type. For example, the name John (used constativeiy) at the beginning of an interaction cannot count as a operation on a situation-type, as no situation- type of the diseoum: text then exists, and the name John alone cannot create one. However. the name ./ok, at'ter a question, such as Who took my book~ can count as a operation, since it. together with the interpretation of the question, serves to introduce a new object and proposes into an existing situation-type. Returning to a sentence like (6) (rcpeated below), it is possible to see that, in fact. a series of operations.are involved in deriving the final situation-type of the text. (6) ! told the guy at the door to watch ouL but the idiot wouldn'! listen. The utterance corresponding to the first grammatical clause creates the situation-type in which tllcre is the guy at the door and the speaker and the relationship of the speaker having told the guy at the door to watch out. 
The word but can be viewed as a function mapping situation-types into situation-types where a relationship or property somehow implicated in the first situation-type is shown explicitly not to hold in the derived situation-type. The balance of the second clause modifies the situation-type so that the guy at the door now has the property both of having been told by the speaker to watch out, and of having not listened, manifesting the violation of supposed normative behavior. The fact that the guy at the door has been referred to as the idiot has added a further property, or characterization. The situation-type of the text at the end of the utterance of the second clause includes the speaker with the property of having told the guy at the door to watch out and having judged him as an idiot for not listening, and the guy at the door who had been told to watch out by the speaker but who did not listen, and who has been judged to have behaved idiotically. (There actually are other relationships here, but a more complete description adds nothing to the general point being illustrated.) In this case, then, there are at least three steps in the "semantic" parsing of the utterance: the initial creation of the situation-type (the first clause), the interpretation of but, and the modification of the initial situation-type to accommodate the information in the second clause.

5. SPEECH ACTS AS OPERATIONS ON SITUATION-TYPES

Thus far the relationship between situation-types and speech acts has not been made explicit. Recall that speech acts can be characterized as having both an intentional component and some representation of the conditions which must be met for the speech act to have been successfully performed. But more importantly, a speech act is not successfully performed until the addressee recognizes that its performance was attempted; and that recognition effects a change in the relationship between the speaker and the addressee. This change in relationship can be regarded as an effect of an operation on the set of situation-types of the discourse situation (not of the text). But a speech act, even if clearly understood as intended, is not successful unless it effects specific changes in the set of situation-types of the text, as well. Therefore, speech acts can be thought of as the effects of the application of one or more inference enabling functions to the pair of sets of situation-types giving the model of the discourse (f(<s_t, s_d>)). It is possible to use situation-types as the basis of a definition of speech acts by requiring that speech acts be the result of the application of an inference enabling function to an utterance in a discourse situation such that the derived situation-type conforms to one of a (finite) number of speech act-types. In other words, for an utterance or a series of utterances to count as a speech act, the utterance or utterances must minimally (i) perform an operation on a situation-type, and (ii) derive a situation-type which is defined (for speaker and addressee) as the legitimate end state of a speech act. This means that the rules governing the form of speech acts are actually rules specifying the relationships that must obtain in the situation-type which would result from the successful performance of the speech act. In short, this allows us to view speech acts as being driven by certain situation-types as goals. Simpler speech act-types, such as performatives, correspond neatly to various unary operations on situation-types.
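To make this end-state view concrete, here is a minimal illustrative sketch in Python. It is not part of the paper: the set-of-facts encoding of situation-types and all of the names in it (assert_op, is_assertion, s_t, s_d) are assumptions introduced only for illustration. A situation-type is modelled as a set of (relation, argument, ...) facts, a speech act as an operation on the pair of text and discourse situation-types, and recognition of the act as a test that the defining relationships hold in the derived discourse situation-type.

    # Illustrative sketch only; not the paper's formalism.
    # A situation-type is a set of facts such as ("say", "S", "P"),
    # standing for s(say, S, P) = 1.

    def assert_op(s_t, s_d, speaker, prop):
        """Unary ASSERT operation on the text and discourse situation-types."""
        s_t2 = s_t | {("say", speaker, prop)}
        s_d2 = s_d | {("say", speaker, prop), ("believe", speaker, prop)}
        return s_t2, s_d2

    def is_assertion(s_d2, speaker, prop):
        """End-state test: the derived discourse situation-type must contain
        both s(say, S, P) = 1 and s(believe, S, P) = 1."""
        required = {("say", speaker, prop), ("believe", speaker, prop)}
        return required <= s_d2

    # Usage: an empty discourse, then one assertion by S of proposition P.
    s_t, s_d = set(), set()
    s_t, s_d = assert_op(s_t, s_d, "S", "P")
    print(is_assertion(s_d, "S", "P"))   # True

On this encoding, any sequence of operations that leaves the required facts in the discourse situation-type counts as a performance of the act, which mirrors the point made above that the end state, not the order of operations, is criterial.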
An assertion operates on the situation-type of the text by introducing objects and properties or relationships that correspond to the proposition of the assertion. But it also introduces the speaker in an ASSERT relationship to the proposition. And given the constraints on truly felicitous assertions, this would also introduce the implicature that the speaker believes the proposition. In particular, following the taxonomy and characterization of illocutionary acts in Bach and Harnish (1979:39ff), an assertion has the effect, for any speaker, S, and any proposition, P, of creating the following situation-type: s(believe, S, P) = 1. By accepting the assertion -- different from accepting the truth of the assertion -- the addressee acknowledges that the above situation-type is added to the set of situation-types giving the discourse situation. A complete description of the speech act-type ASSERT would consist of the following set of situation-types:

ASSERT P
s1: s(say, S, P) = 1
s2: s(believe, S, P) = 1
s1, s2 are in s_d'

Sub-utterance speech acts can be accounted for, now, by viewing the situation-types of the text which they achieve as being dependent on or coincident with the situation-types achieved by the whole of the utterance in which they are embedded. Of course, there must be an accompanying operation on the situation-type of the discourse situation representing the effect of the perceived intention to achieve the sub-utterance speech act -- as in the marked choice of a definite referring expression instead of a simple pronoun, as in (6). Supra-utterance speech acts can also be captured in this framework. A speech act like REFUTE, for example, cannot be defined in terms of any specifiable number of steps, or any specifiable ordering of operations. Its only possible definition is in terms of a final state in which all the conditions on refutation have been satisfied. In terms of situation semantics, this corresponds to a set of situation-types -- albeit very complex -- in which all the necessary relationships hold. Since such complex sets of situation-types represent the accumulated effects of all the operations which have occurred, without representing the order of application of those operations, there is nothing in the definition of REFUTE that requires that a specific order of operations be carried out. Someone might refute an argument very efficiently; someone else, only after a series of false starts or after the introduction of numerous irrelevancies. The end result would be, and should be, the same, from a speech act-theoretic point of view. This characterization of speech acts, as the end states of a derivation on a sequence of situation-types, explains naturally some of the culture-relative characteristics of supra-utterance speech acts. To take but one example, it has been noted in Taylor (1971) that in agrarian Japanese society there is no notion that corresponds to NEGOTIATE. Clearly, given the manifest success of urban Japanese to obtain lucrative foreign contracts, the absence of such a speech act-type among rural Japanese cannot be attributed to facts of the Japanese language. What we could say, given the approach here, is that the set of situation-types which is the end-state of NEGOTIATE is not part of the inventory of distinguished speech act-types in the rural Japanese "discourse dialect."

6. SOME EXAMPLES OF SPEECH ACT-TYPES

The following sets of situation-types can serve as examples of the states achieved by several simple, constative speech act-types.
As before, the taxonomic features are based on Bach and Harnish (1979), with speaker, S, addressee(s), A, and proposition, P.

INFORM P
s1: s(say, S, P) = 1
s2: s(believe, S, P) = 1
s3: s(believe, A, P) = 1
s1, s2, s3 are in s_d'

RETRACT P
s1: s(say, S, P) = 1
s2: s(believe, S, NOT P) = 1
s3: s(believe, S, P) = 1
s2 is in s_d
s1, s3 are in s_d'

CONTRADICT P
s1: s(say, S, NOT P) = 1
s2: s(believe, S, NOT P) = 1
s3: s(believe, A, P) = 1
s3 is in s_d and s_d'
s1, s2 are in s_d'

The characterization of speech acts presented here focuses on end-state conditions, but clearly the starting states (specifically, the set of situation-types of the discourse situation and of the text from which an end-state is to be achieved) also affect speech act performance. A more complete specification of the initial and final states of the discourse pair of sets of situation-types for a variety of speech act-types, involving an elaboration of the role of inference enabling functions and other constraints on the interpretation of utterances, is given in Evans (in progress).

FOOTNOTES:
1. Work on this paper was supported in part by a fellowship from the Stanford Cognitive Science Group. I am deeply indebted to Jon Barwise for long and patient discussions of the ideas presented here, and to Dwight Bolinger, Jerry Hobbs, John Perry, Ivar Tunisson, Tom Wasow, and Terry Winograd for valuable comments and suggestions. I have also profited from conversations with Ray Perrault and the SRI TINLUNCH discussion group on matters indirectly related to those discussed here. Of course, I alone remain responsible for errors, omissions, and other deficiencies.
2. It has been pointed out to me by Dwight Bolinger that some utterances in Spanish in the past tense can count as direct speech act promises (e.g. Un momento y acabé.). This sort of promise is similar to the English exclamation, Done!, which can be used in sufficiently constrained contexts to effect a promise or commitment.
3. This particular example was first brought to my attention by Terry Winograd.
4. It is clear that strongly demonstrative definite referring expressions using this or that do not manifest this sort of inappropriateness.
5. The observation that this context seems to be serving an explanation was first made by John Perry in a discussion of these data.
6. The notion of semantics I am employing should be understood as including certain features usually segregated under pragmatics.
7. It would be outside the realm of speech acts proper to consider the third horse in this semiotic troika: what a speaker actually achieves by his utterance, i.e. how his utterance affects the addressee -- the perlocutionary effect. This three-way contrast was first articulated by Austin (cf. Austin (1962:100ff)).
8. Attempts to incorporate this aspect of actual discourse into models of discourse processes are certainly not new. In artificial intelligence applications, episodic memory has been used to maintain representations of the discourse situation, as, for example, in Grosz (1977), Hobbs (1976), Mann, et al. (1977), and elsewhere.

REFERENCES:
Allen, J. (1979) A plan-based approach to speech act recognition. Technical Report No. 131/79, Dept. of Computer Science, University of Toronto.
Austin, J. L. (1962) How to do things with words. Cambridge, Mass.: Harvard University Press, and London: Clarendon Press.
Bach, Kent, and Robert M. Harnish (1979) Linguistic communication and speech acts. Cambridge, Mass.: The M.I.T. Press.
Barwise, Jon (1981) Some computational aspects of situation semantics, in this volume.
Barwise, Jon (forthcoming) Scenes and other situations, in The Journal of Philosophy.
Barwise, Jon and John Perry (1980) The situation underground, in Working Papers in Semantics, Vol. 1, Stanford University.
Barwise, Jon and John Perry (forthcoming, a) Semantic innocence and uncompromising situations, in Midwest Studies in Philosophy, 6.
Barwise, Jon and John Perry (forthcoming, b) Situation semantics.
Bolinger, Dwight (1977) Pronouns and repeated nouns. Bloomington, Ind.: Indiana University Linguistics Club.
Bruce, B. and D. Newman (1978) Interacting plans, in Cognitive Science, 2, 195-233.
Cohen, P. R. (1978) On knowing what to say: planning speech acts. Technical Report No. 118, Dept. of Computer Science, University of Toronto.
Cohen, P. R. and C. R. Perrault (1979) Elements of a plan based theory of speech acts, in Cognitive Science, 3, 177-212.
Chomsky, Noam (1965) Aspects of the theory of syntax. Cambridge, Mass.: The M.I.T. Press.
Dore, John (1977) Children's illocutionary acts, in R. O. Freedle (Ed.) Discourse Production and Comprehension. Norwood, N.J.: Ablex Publishing Corporation, 227-244.
Evans, David A. (in progress) Situations and speech acts: toward a formal semantics of discourse. Stanford University Ph.D. dissertation.
Fikes, R. and N. J. Nilsson (1971) STRIPS: A new approach to the application of theorem proving to problem solving, in Artificial Intelligence, 2, 189-208.
Grosz, Barbara (1977) The representation and use of focus in dialogue understanding. Stanford Research Institute Technical Note 151, Stanford Research Institute, Menlo Park, California.
Hobbs, Jerry R. (1976) A computational approach to discourse analysis. Research Report No. 76-2, Department of Computer Science, City College, City University of New York.
Labov, William and David Fanshel (1977) Therapeutic discourse. New York: Academic Press.
Mann, W., J. Moore, and J. Levin (1977) A comprehension model for human dialogue, in Proceedings of the international joint conference on artificial intelligence, Cambridge, Mass., 77-87.
Perrault, R. C., J. Allen, and P. R. Cohen (1978) Speech acts as a basis for understanding dialogue coherence, in Proceedings of the second conference on theoretical issues in natural language processing, Champaign-Urbana, Ill.
Searle, John (1969) Speech acts: an essay in the philosophy of language. Cambridge: Cambridge University Press.
Searle, John (1975) Meaning, communication and representation. Unpublished manuscript.
Taylor, C. (1971) Interpretation and the sciences of man, in The Review of Metaphysics, Vol. 25, No. 1, 3-51.
1981
27
PROBLEMS IN LOGICAL FORM Robert C. Moore SRI International, Menlo Park, CA 94025

I INTRODUCTION

Decomposition of the problem of "language understanding" into manageable subproblems has always posed a major challenge to the development of theories of, and systems for, natural-language processing. More or less distinct components are conventionally proposed for handling syntax, semantics, pragmatics, and inference. While disagreement exists as to what phenomena properly belong in each area, and how much or what kinds of interaction there are among these components, there is fairly widespread concurrence as to the overall organization of linguistic processing. Central to this approach is the idea that the processing of an utterance involves producing an expression or structure that is in some sense a representation of the literal meaning of the utterance. It is often maintained that understanding what an utterance literally means consists in being able to recover this representation. In philosophy and linguistics this sort of representation is usually said to display the logical form of an utterance, so we will refer (somewhat loosely) to the representations themselves as "logical forms."

This paper surveys what we at SRI view as some of the key problems encountered in defining a system of representation for the logical forms of English sentences, and suggests possible approaches to their solution. We will first look at some general issues related to the notion of logical form, and then discuss a number of problems associated with the way information involving certain key concepts is expressed in English. Although our main concern here is with theoretical issues rather than with system performance, this paper is not merely speculative. The DIALOGIC system currently under development in the SRI Artificial Intelligence Center parses English sentences and translates them into logical forms embodying many of the ideas presented here.

II THE NATURE OF LOGICAL FORM

The first question to ask is, why even have a level of logical form? After all, sentences of natural languages are themselves conveyers of meaning; that is what natural languages are for. The reason for having logical forms is to present the literal meanings of sentences more perspicuously than do the sentences themselves. It is sometimes said that natural-language sentences do not "wear their meanings on their sleeves"; logical forms are intended to do exactly that. From this perspective, the main desideratum for a system of logical form is that its semantics be compositional. That is, the meaning of a complex expression should depend only on the meaning of its subexpressions. This is needed for meaning-dependent computational processes to cope with logical forms of arbitrary complexity. If there is to be any hope of maintaining an intellectual grasp of what these processes are doing, they must be decomposable into smaller and smaller meaning-dependent subprocesses operating on smaller and smaller meaningful pieces of a logical form. For instance, if identifying the entities referred to by an utterance is a subprocess of inferring the speaker's intentions, there must be identifiable pieces of the logical form of the utterance that constitute referring expressions. Having logical forms be semantically compositional is the ultimate expression of this kind of decomposability, as it renders every well-formed subexpression a locus of meaning--and therefore a potential locus of meaning-dependent processing. This is probably a more telling argument for semantic compositionality in designing language-processing systems than in analyzing human language, but it can be reasonably argued that such design principles must be followed by any system, whether natural or artificial, that has to adapt to a complex environment (see [Simon, 1969], especially Chapter 4). 1

Logical form, therefore, is proposed as a level of representation distinct from surface-syntactic form, because there is apparently no direct way to semantically interpret natural language sentences in a compositional fashion. Some linguists and philosophers have challenged this assumption [Montague, 1974a] [Barwise and Cooper, 1981], but the complexity of their proposed systems and the limited range of syntactic forms they consider leave serious doubt that the logical-form level can be completely bypassed. 2

Beyond being compositional, it is desirable--though perhaps not essential--that the meaning of a logical form also be independent of the context in which the associated utterance occurs. (The meaning of an expression in natural language, of course, is often context-dependent.) A language-processing system must eventually produce a context-independent representation of what the speaker means by an utterance, because the content of the utterance will normally be subjected to further processing after the original context has been lost. In the many cases in which the speaker's intended meaning is simply the literal meaning, a context-independent logical form would give us the representation we need. There is little doubt that some representation of this sort is required. For example, much of our general knowledge of the world is derived from simple assertions of fact in natural language, but our situation would be hopeless if, for every fact we knew, we had to remember the context in which it was obtained before we could use it appropriately. Imagine trying to decide what to do with a tax refund by having to recall whether the topic of conversation was rivers or financial institutions the first time one heard that banks were good places in which to keep money.

As this example suggests, context independence is closely related to the resolution of ambiguity. For any given ambiguity, it is possible to find a case in which the information needed to resolve it is derived from the context of an utterance. Therefore, if the meanings of logical forms are to be context-independent, the system of logical forms must provide distinct, unambiguous representations for all possible readings of an ambiguous utterance. The question remains whether logical form should also provide ambiguous representations to handle cases in which the disambiguating information is obtained later or is simply general world knowledge. The pros and cons of such an approach are far from clear, so we will generally assume only unambiguous logical forms.

Although it is sometimes assumed that a context-independent representation of the literal meaning of a sentence can be derived by using syntactic and semantic knowledge only, some pragmatic factors must also be taken into account. To take a concrete example, suppose the request "Please list the Nobel Prize winners in physics," is followed by the question "Who are the Americans?"
The phrase "the Americans" in the second utterance should almost certainly be interpreted as 117 referring to American winners of the Nobel Prize in physics, rather than all inhabitants or citizens of the United States, as It might be understood in isolation. If the logical form of the utterance is to reflect the intended interpretation, processes that are normally assigned to praSmatlcs must be used to derive it. One could attempt to avoid thls consequence by representing "the Americans" at the level of logical form as literally meaning all Americans, and have later pragmatic processing restrict the interpretation co American winners of the Nobel Prize in physics. There are other cases, however, for which thls sort of move is not available. Consider more carefully the adjective "American." American people could be either inhabitants or citizens of the United States; American cars could be either manufactured or driven in the United States; American food could be food produced or consumed in or prepared in a style indigenous Co the United States. In short, the meaning of "American" seems to be no more than "bearing some contextually determined relation to the United States." Thus, there is n~o deflnlte context- independent mesnlng for sentences containing modifiers llke "American." The same is true for many uses of "have," "of," possessives, locative prepositions [Herskovits, 1980] and compound nominals. The only way to hold fast to the position that the construction of loglcal-form precedes all pragmatic processing seems to be to put in "dummy'* symbols for the unknown relations: This m@y in fact be very useful in building an actual system, ~ but It is hard to imagine that such a level of representation would bear much theoretical weight. We will chum assume that a theoretically interesting level of logical form will have resolved contextually dependent definite references, as well as the ocher "local" pragmatic lndeterminacies mentioned. An important consequence of this view is that sentences per se do not have logical forms~ only sentences in context ~.~-~f we speak loosely of the logical form of a sentence, this is how It should be interpreted. If we go thls far, why not say that all pragmaClc processing Cakes place before the logical form is constructed? That is, why make any distinction at all between what the speaker intends the hearer to infer from an utterance and what the utterance literally means? There are two answers co this. The first is that, while the pragmatic factors we have introduced into the derivation of logical form so far are rather narrowly circumscribed (e.g., resolving definitely determined noun phrases), the inference of speaker intentions is completely open-ended. The problem confronting the hearer is to answer the question, 'Why would the speaker say that in this situation?" Practically any relevant knowledge chat the speaker and hearer mutually possess [Clark and Marshall, 1981] [Cohen and Perrault, 1981] may be brought to bear in answering thls question. Prom a purely ~echodologica ! standpoint, then, one would hope to define some more restricted notion of meaning as an intermediate step in developing the broader theory. Even putting aside this methodological concern, it seems doubtful chat a theory of intended meaning can be co~trucCed without a concomitant thaor¥ of literal meaning, because the latter notion appears to play an explanatory role in the former theory. 
Specifically, the literal meaning of an utterance is one of those things from which hearers infer speakers' intentions. For instance, in the appropriate context, "I'm getting cold" could be a request to close a window. The only way for the hearer to understand this as a request, however, is to recover the literal content of the utterance, i.e., that the speaker is getting cold, and to infer from this that the speaker would like him to do something about it. In summary, the notion of logical form we wish to capture is essentially that of a representation of the "literal meaning in context" of an utterance. To facilitate further processing, it is virtually essential that the meaning of logical-form expressions be compositional and, at the same time, it is highly desirable that they be context-independent. The latter condition requires that a system of logical form furnish distinct representations for the different readings of ambiguous natural-language expressions. It also requires that some limited amount of pragmatic processing be involved in producing those representations. Finally, we note that not all pragmatic factors in the use of language can be reflected in the logical form of an utterance, because some of those factors are dependent on information that the logical form itself provides.

III FORM AND CONTENT IN KNOWLEDGE REPRESENTATION

Developing a theory of the logical form of English sentences is as much an exercise in knowledge representation as in linguistics, but it differs from most work in artificial intelligence on knowledge representation in one key respect. Knowledge representation schemes are usually intended by their designers to be as general as possible and to avoid commitment to any particular concepts. The essential problem for a theory of logical form, however, is to represent specific concepts that natural languages have special features for expressing information about. Concepts that fall in this category include:

* Events, actions, and processes
* Time and space
* Collective entities and substances
* Propositional attitudes and modalities.

A theory of logical form of natural-language expressions, therefore, is primarily concerned with the content rather than the form of representation. Logic, semantic networks, frames, scripts, and production systems are all different forms of representation. But to say merely that one has adopted one of these forms is to say nothing about content, i.e., what is represented. The representation used in this paper, of course, takes a particular form (higher-order logic with intensional operators), but relatively little will be said about developing or refining that form. Rather, we will be concerned with the question of what particular predicates, functions, operators, and the like are needed to represent the content of English expressions involving concepts in the areas listed above. This project might thus be better described as knowledge encoding to distinguish it from knowledge representation, as it is usually understood in artificial intelligence.

IV A FRAMEWORK FOR LOGICAL FORM

As mentioned previously, the basic framework we will use to represent the logical form of English sentences is higher-order logic (i.e., higher-order predicate calculus), augmented by intensional operators. At a purely notational level, all well-formed expressions will be in "Cambridge Polish" form, as in the programming language LISP; thus, the logical form of "John likes Mary" will be simply (LIKE JOHN MARY).
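As a side note on the notation (an assumption of this edition, not part of the paper or of the DIALOGIC system), such Cambridge Polish logical forms can be held as ordinary nested data structures; the following small Python sketch shows one way to do it, and the helper names lf and show are hypothetical.

    # Illustrative only: logical forms as nested tuples, printed in the
    # Cambridge Polish notation used in the text.

    def lf(*parts):
        """Build a logical-form expression such as (LIKE JOHN MARY)."""
        return tuple(parts)

    def show(expr):
        """Render a nested-tuple logical form as a Cambridge Polish string."""
        if isinstance(expr, tuple):
            return "(" + " ".join(show(e) for e in expr) + ")"
        return str(expr)

    likes = lf("LIKE", "JOHN", "MARY")
    nested = lf("BELIEVE", "JOHN", lf("LIKE", "JOHN", "MARY"))

    print(show(likes))    # (LIKE JOHN MARY)
    print(show(nested))   # (BELIEVE JOHN (LIKE JOHN MARY))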
Despite our firm belief in the principle of semantic compositionality, we will not attempt to give a formal semantics for the logical forms we propose. Hence, our adherence to that principle is a good-faith intention rather than a demonstrated fact. It should be noted, though, that virtually all the kinds of logical constructs used here are drawn from more formal work of logicians and philosophers in which rigorous semantic treatments are provided. The only place in which our logical language differs significantly from more familiar systems is in the treatment of quantifiers. Normally the English determiners "every" and "some" are translated as logical quantifiers that bind a single variable in an arbitrary formula. This requires using an appropriate logical connective to combine the contents of the noun phrase governed by the determiner with the contents of the rest of the sentence. Thus "Every P is Q" becomes

(EVERY X (IMPLIES (P X) (Q X))),

and "Some P is Q" becomes

(SOME X (AND (P X) (Q X)))

It seems somewhat inelegant to have to use different connectives to join (P X) and (Q X) in the two cases, but semantically it works. In an extremely interesting paper, Barwise and Cooper [1981] point out (and, in fact, prove) that there are many determiners in English for which this approach does not work. The transformations employed in standard logic to handle "every" and "some" depend on the fact that any statement about every P or some P is logically equivalent to a statement about everything or something; for example, "Some P is Q" is equivalent to "Something is P and Q." What Barwise and Cooper show is that there is no such transformation for determiners like "most" or "more than half." That is, statements about most P's or more than half the P's cannot be rephrased as statements about most things or more than half of all things. Barwise and Cooper incorporate this insight into a rather elaborate system modeled after Montague's, so that, among other things, they can assign a denotation to arbitrary noun phrases out of context. Adopting a more conservative modification of standard logical notation, we will simply insist that all quantified formulas have an additional element expressing the restriction of the quantifier. "Most P's are Q" will thus be represented by

(MOST X (P X) (Q X)).

Following this convention gives us a uniform treatment for determined noun phrases:

"Most men are mortal"    (MOST X (MAN X) (MORTAL X))
"Some man is mortal"     (SOME X (MAN X) (MORTAL X))
"Every man is mortal"    (EVERY X (MAN X) (MORTAL X))
"The man is mortal"      (THE X (MAN X) (MORTAL X))
"Three men are mortal"   (3 X (MAN X) (MORTAL X))

Note that we treat "the" as a quantifier, on a par with "some" and "every." "The" is often treated formally as an operator that produces a complex singular term, but this has the disadvantage of not indicating clearly the scope of the expression. A final point about our basic framework is that most common nouns will be interpreted as relations rather than functions in logical form. That is, even if we know that a person has only one height, we will represent "John's height is 6 feet" as

(HEIGHT JOHN (FEET 6))

rather than

(EQ (HEIGHT JOHN) (FEET 6)) 5

There are two reasons for this: one is the desire for syntactic uniformity; the other is to have a variable available for use in complex predicates. Consider "John's height is more than 5 feet and less than 6 feet."
If height is a relation, we can say

(THE L (HEIGHT JOHN L) (AND (GT L (FEET 5)) (LT L (FEET 6)))),

whereas, if height is a function, we would say

(AND (GT (HEIGHT JOHN) (FT 5)) (LT (HEIGHT JOHN) (FT 6)))

The second variant may look simpler, but it has the disadvantage that (HEIGHT JOHN) appears twice. This is not only syntactically unmotivated, since "John's height" occurs only once in the original English but, what is worse, it may lead to redundant processing later on. Let us suppose that we want to test whether the assertion is true and that determining John's height requires some expensive operation, such as accessing an external database. To avoid doing the computation twice, the evaluation procedure must be much more complex if the second representation is used rather than the first.

V EVENTS, ACTIONS, AND PROCESSES

The source of many problems in this area is the question of whether the treatment of sentences that describe events ("John is going to New York") should differ in any fundamental way from that of sentences that describe static situations ("John is in New York"). In a very influential paper, Davidson [1967] argues that, while simple predicate/argument notation, such as (LOC JOHN NY), may be adequate for the latter, event sentences require explicit reference to the event as an object. Davidson's proposal would have us represent "John is going to New York" as if it were something like "There is an event which is a going of John to New York":

(SOME E (EVENT E) (GO E JOHN NY))

Davidson's arguments for this analysis are that (1) many adverbial modifiers such as "quickly" are best regarded as predicates of the events and that (2) it is possible to refer to the event explicitly in subsequent discourse. ("John is going to New York. The trip will take four hours.") The problem with Davidson's proposal is that for sentences in which these phenomena do not arise, the representation becomes unnecessarily complex. We therefore suggest introducing an event abstraction operator, EVABS, that will allow us to introduce event variables when we need them:

(P X1 ... Xn) <-> (SOME E (EVENT E) ((EVABS P) E X1 ... Xn))

In simple cases we can use the more straightforward form. The logical form of "John is kissing Mary" would simply be (KISS JOHN MARY). The logical form of "John is gently kissing Mary," however, would be

(SOME E (EVENT E) (AND ((EVABS KISS) E JOHN MARY) (GENTLE E)))

If we let EVABS apply to complex predicates (represented by LAMBDA expressions), we can handle other problems as well. Consider the sentence "Being a parent caused John's nervous breakdown." "Parent" is a relational noun; thus, if John is a parent, he must be the parent of someone, but if John has several children we don't want to be forced into asserting that being the parent of any particular one of them caused the breakdown. If we had PARENT1 as the monadic property of being a parent, however, we could say

(SOME E (EVENT E) (AND ((EVABS PARENT1) E JOHN) (CAUSE E "John's nervous breakdown")))

We don't need to introduce PARENT1 explicitly, however, if we simply substitute for it the expression (LAMBDA X (SOME Y (PERSON Y) (PARENT X Y))), which would give us

(SOME E (EVENT E) (AND ((EVABS (LAMBDA X (SOME Y (PERSON Y) (PARENT X Y)))) E JOHN) (CAUSE E "John's nervous breakdown")))
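A rough sketch of how the EVABS operator might be simulated follows; the event encoding and every name in it are assumptions made here for illustration, not the paper's proposal.

    # Illustrative only: events as explicit objects carrying the facts that
    # hold of them, with evabs turning a relation name into an event-taking one.

    events = [
        {"type": ("KISS", "JOHN", "MARY"), "GENTLE": True},   # one kissing event
    ]

    def evabs(rel):
        """((EVABS rel) e x1 .. xn) holds when e is an event of rel(x1 .. xn)."""
        return lambda e, *args: e["type"] == (rel, *args)

    # (SOME E (EVENT E) (AND ((EVABS KISS) E JOHN MARY) (GENTLE E)))
    print(any(evabs("KISS")(e, "JOHN", "MARY") and e.get("GENTLE")
              for e in events))                                # True

    # With no adverbial modifier, the simpler form (KISS JOHN MARY) suffices,
    # which here amounts to ignoring the event's extra properties.
    print(any(evabs("KISS")(e, "JOHN", "MARY") for e in events))   # True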
Another important question is whether actions--that is, events with agents--should be treated differently from events without agents and, if so, should the agent be specially indicated? The point is that, if John kissed Mary, that is something he did, but not necessarily something she did. It is not clear whether this distinction should be represented at the level of logical form or is rather an inference based on world knowledge. Finally, most AI work on actions and events assumes that they can be decomposed into discrete steps, and that their effects can be defined in terms of a final state. Neither of these assumptions is appropriate for continuous processes; e.g., "The flow of water continued to flood the basement." What the logical form for such statements should look like seems to be a completely open question. 6

VI TIME AND SPACE

We believe that information about time is best represented primarily by sentential operators, so that the logical form of a sentence like "John is in New York at 2:00" would be something like (AT 2:00 (LOC JOHN NY)). There are two main reasons for following this approach. First, current time can be indicated simply by the lack of any operator; e.g., "John owns Fido" becomes simply (OWNS JOHN FIDO). This is especially advantageous in basically static domains in which time plays a minimal role, so we do not have to put something into the logical form of a sentence that will be systematically ignored by lower-level processing. The other advantage of this approach is that temporal operators can apply to a whole sentence, rather than just to a verb. For instance, in the preferred reading of "The President has lived in the White House since 1800," the referent of "the President" changes with the time contexts involved in evaluating the truth of the sentence. The other reading can be obtained by allowing the quantifier "the" in "the President" to assume a wider scope than that of the temporal operator.

Although we do not strongly distinguish action verbs from stative verbs semantically, there are syntactic distinctions that must be taken into account before tense can be mapped into time correctly. Stative verbs express present time by means of the simple present tense, while action verbs use the present progressive. Compare:

John kisses Mary (normally habitual)
John is kissing Mary (normally present time)
John owns Fido (normally present time)
John is owning Fido (unacceptable)

This is why (KISS JOHN MARY) represents "John is kissing Mary," rather than "John kisses Mary," which would normally receive a dispositional or habitual interpretation. What temporal operators will be needed? We will use the operator AT to assert that a certain condition holds at a certain time. PAST and FUTURE will be predicates on points in time. Simple past-tense statements with stative verbs, such as "John was in New York," could mean either that John was in New York at some unspecified time in the past or at a contextually specific time in the past:

(SOME T (PAST T) (AT T (LOC JOHN NY)))
(THE T (PAST T) (AT T (LOC JOHN NY)))

(For the second expression to be an "official" logical-form representation, the incomplete definite reference would have to be resolved.) Simple future-tense statements with stative verbs are parallel, with FUTURE replacing PAST.
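To illustrate how these operators are intended to be read, the following toy Python sketch (the model, the NOW constant, and the helper names are assumptions introduced here, not part of the paper) evaluates a past-tense form and an operator-free present-tense form against a small record of which facts hold at which points in time.

    # Illustrative only: a toy model mapping time points to the atomic facts
    # holding at them, with PAST as a predicate on points in time.

    NOW = 3
    model = {
        1: {("LOC", "JOHN", "NY")},        # facts holding at time 1
        2: {("LOC", "JOHN", "BOSTON")},
        3: {("OWNS", "JOHN", "FIDO")},     # the present
    }

    def holds_at(t, fact):
        return fact in model.get(t, set())

    def past(t):
        return t < NOW

    # "John was in New York": (SOME T (PAST T) (AT T (LOC JOHN NY)))
    print(any(past(t) and holds_at(t, ("LOC", "JOHN", "NY"))
              for t in model))                                 # True (time 1)

    # "John owns Fido", with no operator, is evaluated at the present time.
    print(holds_at(NOW, ("OWNS", "JOHN", "FIDO")))             # True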
"John was in New York on Tuesday" aright be (on at least one interpretation): (SOME T (AND (PAST T) (DURING T TUESDAY)) (AT ~ (C0C JoHN ~)))) For action verbs we get representations of tkts 8oft for past and future progressive tenses; e.g., "John was kissing Mary" becomes (THE T (PAST T) (AT T (KISS JOHN ~.lY))) When we use event abstraction to introduce individual events, the interactions with time become somewhat tricky. Since (KISS JOHN MAEY) means "John is (presently) klns£ns Mary," so must (SOME E (EVENT E) ((EVABS KZSS) E JOHN MAEY)) Since logically this formal expression means something llke "There is (presently) an event which is a kissing of Mary by John," we will interpret the prnd£caCe EVENT as being true at s particular time of the events in progress at that time. To tie all this together, "John was kissing Mary gently '' would be represmnced by (THE T (PAST T) (AT T (soME E (EVY~T E) (AND ((EVABS KISS) ~. JoHN MAltY) (GENTLE E))))) Tha major unsolved problem relecing to time se ams to be recouc-tlius statemancs chat refer co points in time with those that refer co intervals--for instance, "The colpany earned $5 m4111on in March." This csrtainIy does not moan that st every point in time during March the company earned $5 auLlliou. One could invent a repreesucaciou for sentences about intervals with no particular reletiou Co the representation for sentences about points, but then we would have the difficult task of constantly having to decide which representation is approp rlace. This Is further complicated by the fact that the same event, e. S. the American Rmvolutlon, could be viewed as dofin/J~ either a point in time or an interval, depending on the time scale being considered. 7 ("At the time of the American Revolution, France was a--'monarchy," compared wlth "During the American Revolution, England suffered a decllne in trade.") One would hope that there exist systematic relationships between statements about points in time and statements about intervals that can be exploited in developin B a logical form for tensed sentences. There is a substantial literature in philosophical logic devoted to "tense logic" [Rescher and Urquhart, 1971] [McCawley, 1981], but almost all of thls work see s: to be concerned wlth evaluating the truth of sentences at points, which, as we have seen, cannot be immediately extended to handle sentences about intervals. We include space under the same heading as tlme because a major question about space Is the extent to which Its treatment should parallel that of time. From an objective standpoint, it is often convenient to view physical space and time together as a four-dlmenslonal Euclidean space. Furthermore, there are natural- language constructions that seem best interpreted as asserting that a certain condition holds in a particular place ("In California it is legal to make a right turn on a red light"), Just as time expressions often assert that a condition holds at a particular time. The question is how far this analogy between space and time can be pushed. VlI COLLECTIVE ENTITIES AND SUBSTANCES Most representation schemes are designed to express information about such discrete, well-individuated objects as people, chairs, or books. Not all objects are so distinct, however; collections and substances seem to pose special difficulties, Collections are often indicated by conjoined noun phrases. If we say "Newell and Simon wrote Human Problem Solving," we do not mean that they each did it individually (cf. 
"Newell and Simon have PhDs."), rather we mean that they did it as a unit. Furthermore, if we want the treatment of this sentence to be parallel to chat of "~ulne wrote Word and Object," we need an explicit representation of the unit "Newell and Simon," so that It can play the same role the individual "~ulne" plays in the latter sentence. These considerations create difficulties in sentence interpretation because of the possibility of ambiguities between collective and distributed readings. Thus, "Newell and Simon have written many papers," might mean that individually each has written many papers or that they have jointly coauthored many papers. The problems associated with conjoined noun phrases also arise with plural noun phrases and singular noun phrases that are inherently collective. "John, Bill, Joe, and Sam," "the Jones boys," and "the Jones String Quartet" may all refer to the same collective entity, so that an adequate logical-form representation needs to treat them as much alike as possible. These iss,--S are treated in detail by Webber [1978]. The most obvious approach to handling collective entities is to treat them as sets, but standard set theory does not provide quite the right logic. The interpretation of "and" in "the Jones boys and the Smith girls" would be the union of two sets, but in "John and Mary" the interpretation would be constructing a set out of two individuals. Also, the distinction made in set theory between an individual, on one hand, and the singleton sat containing the individual, on the other, semas totally artificial in thls context. We need a "flatter" kind of structure than is provided by standard set theory. The usual formal treatment of strings is a useful model; there is no distinction made between a character and a string Just one character lens; moreover, string concatenation applies equally to strings of one character or more than one. Collective entities have these features in common with strings, but share with sets the properties of being uoordered and not having repeated elements. The set theory we propose has a set formation operator COMB Chat takes any number of arguments. The arguments of COMB may be individuals or sets of individuals, and the value of COMB is the set chat contains all the individual arguments and all the elements of the set arguments; thus, (COMB A iS C} D {E F C}) = {A S C D E F G} (The notation using braces is NOT part of the logical- form language; this example is Just an attempt to illustrate what COMB means in terms of more conventional concepts.) If A is an individual, (COMB A) is elmply A. We need one other special operator to handle definitely determined plural noun phrases, e.g., "the American ships." The problem is that in context this may refer to some particular set of American ships; hence, we need to recognize it as a definite reference that has to be resolved. Following Weber [1978], We will use the notation (SET X P) to express a predicate on sets that is satisfied by any set, all of whose members satisfy (LAMBDA X P). Then "the P's" would be the contextually determined set, all of whose members are P's: (THE S ((SET X (P X)) S) ...) It might seem that, to properly capture the meaning of plurals, we would have to limit the extension of (SET X P) to sets of two or more elements. This is not always appropriate, however. Although "There are ships in the Med," might seex to mean "The set of ships in the Med has at least two members," the question "Are there any ships in the Med?" 
does not mean "Does the set of ships in the Mad have at least two members?" The answer to the former question is yes, even if there is only one ship in the Mediterranean. This suggests Chat any presupposition the plural carries to the effect that more than one object is involved may be a matter of Gricean lmplicature ("If he knew there was only one, why didn't he say so?") rather than semantics. Similarly, the plural marking on verbs seams to be Just a syntactic reflex, rather than any sort of plural operator. On the latter approach we would have to take "Who killed Cock Robin?" as amblBuous between a singular and plural reading, since sinBular and plural verb forms would be semantically distinct. To illustrate the use of our notation, we will represent "Every one of the men who defeated Hannibal was brave." Since no one defeated Hannibal individually, this mast be attributed to a collection of men: (soHE T (PAST T) (AT T (EVERY X (THE S (AND ((SET Y (MAN Y)) S) (DEFEAT S HANNIBAL)) (MzMB x s)) (EEAVE x) ))) Note Chat we can replace the plural noun phrase "the men who defeated Hannibal" by the singular collective noun phrase, "the Roman army," as in "Everyone in the Romeo army was brave": (SOME T (PAST T) (AT T (EVERY X (THE S (AND (ARMY S) (ROMAN S)) (Mz~ x s)) (BRAVE X)))) 121 The only change In the logical form of'the sentence is chat IX QUESTIONS AND IMFERATIVE3 (AND ((SET Y (MAN Y)) S) (DEFEAT S ~NIBAL)) is replaced by (AND (ARMY S) (RO~.~N S)). Collective entities are not the only objects that are difficult to represent. Artificial intelligence representation schemes have notoriously shied away from mass quencitie• and substances. ([Hayes, 1978] Is a notable exception.) In a sentence like "All Eastern coal contains soma sulfur," it see,." tb•[ "coal" and "sulfur" refer to properties of samples or pieces of "stuff." We might paraphrase thls sentence as "All pieces of stuff that are Eastern coal contain soue stuff that Is sulfur." If we take this approach, then, In interpreting a sentence like "The Universe Ireland Is carrying |00,000 barrels of Saudi light crude," we need co indicate that the "piece of stuff" being described is the maximal "piece" of Saudl light crude the shlp is carrying. In other cases, substances seem to be more llke abstract individuals, e.g., "Copper is the twenty- ninth element in the periodic table." Nouns that refer Co substances can also function as do plural noun phrases in their ~eneric use: "Copper is [antelopes are] abundant in the American southwest." Vlll PROPOSITIONAL ATTITUDES AND MODALITIES Propositional attitudes and modalities are discussed together, because they are both normally treated as intensional sentential operators. For instance, to represent "John believes Chat the Fox is in Naples," we would have an operator BELIEVE that takes "John" as its first argunmnt and the representation of "The Fox is in Naples" as Its second argument. S£,,tlarly, to represent '*the Fox might be in Naples," we could apply an" operator POSSIBLE to the representation of "The Fox is in Naples." This approach works particularly well on a number of problems involving quanCifiers. For example, "John believes someone is in the basement s' possesses an ambiguity that is revealed by the two par•phrases, "John believes there is someone in the basement" and "There is someone John believes Co be in the basement." 
As these paraphrases suggest, this distinction is represented by different relative scopes of the belief operator and the existential quantifier introduced by the indefinite pronoun "someone":

(BELIEVE JOHN (SOME X (PERSON X) (LOC X BASEMENT)))
(SOME X (PERSON X) (BELIEVE JOHN (LOC X BASEMENT)))

This approach works very well up to a point, but there are cases it does not handle. For example, sometimes verbs like "believe" do not take a sentence as an argument, but rather a description of a sentence, e.g., "John believes Goldbach's conjecture." If we were to make "believe" a predicate rather than a sentence operator to handle this type of example, the elegant semantics that has been worked out for "quantifying in" would completely break down. Another alternative is to introduce a predicate TRUE to map a description of a sentence into a sentence that necessarily has the same truth value. Then "John believes Goldbach's conjecture" is treated as if it were "John believes of Goldbach's conjecture that it is true." This is distinguished in the usual way from "John believes that Goldbach's conjecture (whatever it may be) is true" by reversing the scope of the description "Goldbach's conjecture" and the operator "believe."

IX QUESTIONS AND IMPERATIVES

The only types of utterances we have tried to represent in logical form to this point are assertions, but of course there are other speech acts as well. The only two we will consider are questions and imperatives (commands). Since performatives (promises, bets, declarations, etc.) have the same syntactic form as assertions, it appears that they raise no new problems. We will also concern ourselves only with the literal speech act expressed by an utterance. Dealing with indirect speech acts does not seem to change the range of representations needed; sometimes, for example, we may simply need to represent what is literally an assertion as something intended as a command.

For questions, we would like to have a uniform treatment of both the yes/no and WH forms. The simplest approach is to regard the semantic content of a WH question to be a predicate whose extension is being sought. This does not address the issue of what is a satisfactory answer to a question, but we regard that as part of the theory of speech acts proper, rather than a question of logical form. We will introduce the operator WHAT for constructing complex set descriptions, which, for the sake of uniformity, we will give the same four-part structure we use for quantifiers. The representation of "What American ships are in the Med?" would roughly be as follows:

(WHAT X (AND (SHIP X) (AMERICAN X)) (LOC X MED))

WHAT is conveniently mnemonic, since we can represent "who" as (WHAT X (PERSON X) ...), "when" as (WHAT X (TIME X) ...), and so forth. "How many" questions will be treated as questioning the quantifier. "How many men are mortal?" would be represented as

(WHAT N (NUMBER N) (N X (MAN X) (MORTAL X)))

Yes/no questions can be handled as a degenerate case of WH questions by treating a proposition as a 0-ary predicate. Since the extension of an n-ary predicate is a set of n-tuples, the extension of a proposition would be a set of 0-tuples. There is only one 0-tuple, the empty tuple, so there are only two possible sets of 0-tuples. These are the singleton set containing the empty tuple and the empty set, which we can identify with the truth values TRUE and FALSE. The logical form of a yes/no question with the proposition P as its semantic content would be (WHAT () TRUE P), or more simply P.
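The idea that a WH question asks for the extension of a predicate can be illustrated with a short sketch. The toy domain, the facts, and the two-argument simplification of the four-part WHAT structure (a restriction and a body) are all invented here for illustration; they are not part of the DIALOGIC system.

    # Sketch: a WH question as a request for the extension of a predicate.
    DOMAIN = {"kennedy", "nimitz", "bismarck", "tug1"}
    SHIP = {"kennedy", "nimitz", "bismarck"}
    AMERICAN = {"kennedy", "nimitz", "tug1"}
    LOC = {("kennedy", "MED"), ("bismarck", "MED")}

    def what(restriction, body):
        """(WHAT X restriction body): the domain elements satisfying both parts."""
        return {x for x in DOMAIN if restriction(x) and body(x)}

    # "What American ships are in the Med?"
    answer = what(lambda x: x in SHIP and x in AMERICAN,
                  lambda x: (x, "MED") in LOC)
    print(answer)                               # {'kennedy'}

    # A yes/no question is the degenerate 0-ary case: its "extension" is a truth value.
    def yes_no(p):
        return bool(p)

    print(yes_no(("kennedy", "MED") in LOC))    # True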
With regard to imperatives, it is less clear what type of semantic object their content should be. We might propose that it is a proposition, but we then have to account for the fact that not all propositions are acceptable as commands. For instance, John cannot be commanded "Bill go to New York." The response that a person can only be "commanded something" he has control over is not adequate, because any proposition can be converted into a command by the verb "make"--e.g., "Make Bill go to New York." The awkwardness of the phrasing "command someone something" suggests another approach. One commands someone to do something, and the things that are done are actions. If actions are treated as objects, we can define a relation DO that maps an agent and an action into a proposition (see [Moore, 1980]). "John is going to New York" would then be represented by (DO JOHN (GO NY)). Actions are now available to be the semantic content of imperatives. The problem with this approach is that we now have to pack into actions all the semantic complexities that can arise in commands--for instance, adverbial modifiers, which we have treated above as predicates on events ("Go quickly"), quantifiers ("Go to every room in the house"), and negation ("Don't go").

A third approach, which we feel is actually the most promising, is to treat the semantic content of an imperative as being a unary predicate. The force of an imperative is that the person to whom the command is directed is supposed to satisfy the predicate. According to this theory the role of "make" is clear--it converts any proposition into a unary predicate. If the assertion "John is making Bill go to New York" is represented as (MAKE JOHN (GO BILL NY)), we can form a unary predicate by LAMBDA abstraction: (LAMBDA X (MAKE X (GO BILL NY))), which would be the semantic content of the command "Make Bill go to New York." This approach does away with the problem concerning adverbial modifiers or quantifiers in commands; they can simply be part of the proposition from which the predicate is formed. A final piece of evidence favoring this approach over a theory based on the notion of action is that some imperatives have nothing at all to do with actions directly. The semantic content of commands like "Be good" or "Don't be a fool" really does seem to consist exclusively of a predicate.
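The lambda-abstraction step can be shown in a few lines. The sketch below is purely representational and assumes a tuple encoding of logical forms that is ours, not the paper's; it only shows how "make" turns a proposition into a unary predicate over the addressee.

    # Sketch: the content of an imperative as a unary predicate over the addressee.
    def go(agent, place):
        return ("GO", agent, place)                  # a proposition

    def make_pred(proposition):
        # "make" converts any proposition into a unary predicate (LAMBDA X (MAKE X prop))
        return lambda x: ("MAKE", x, proposition)

    command = make_pred(go("BILL", "NY"))            # content of "Make Bill go to New York"
    print(command("JOHN"))                           # ('MAKE', 'JOHN', ('GO', 'BILL', 'NY'))

Adverbs, quantifiers, and negation stay inside the embedded proposition, which is exactly the advantage claimed for this approach.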
X CONCLUSION

In a paper that covers such a wide range of disparate topics, it is hard to reach any sweeping general conclusions, but perhaps a few remarks about the nature and current status of the research program are in order. First, it should be clear from the issues discussed that at least as many problems remain in the quest for logical form as have already been resolved. Considering the amount of effort that has been expended upon natural-language semantics, this is somewhat surprising. The reason may be that relatively few researchers have worked in this area for its own sake. Davidson's ideas on action sentences, for instance, raised some very interesting points about logical form--but the major debate it provoked in the philosophical literature was about the metaphysics of the concept of action, not about the semantics of action sentences. Even when semantics is a major concern, as in the work of Montague, the emphasis is often on showing that relatively well-understood subareas of semantics (e.g., quantification) can be done in a particular way, rather than on attempting to take on really new problems.

An additional difficulty is that so much work has been done in a fragmentary fashion. It is clear that the concept of action is closely related to the concept of time, but it is hard to find any work on either concept that takes the other one seriously. To build a language-processing system or a theory of language processing, however, requires an integrated theory of logical form, not just a set of incompatible fragmentary theories. Our conclusion, then, is that if real progress is to be made on understanding the logical form of natural-language utterances, it must be studied in a unified way and treated as an important research problem in its own right.

ACKNOWLEDGEMENTS

The ideas in this paper are the collective result of the efforts of a large number of people at SRI, particularly Barbara Grosz, Stan Rosenschein, and Gary Hendrix. Jane Robinson, Jerry Hobbs, Paul Martin, and Norman Haas are chiefly responsible for the implementation of the DIALOGIC system, building on earlier systems to which Ann Robinson and Bill Paxton made major contributions. This research was supported by the Defense Advanced Research Projects Agency under Contracts N00039-80-C-0645 and N00039-80-C-0575 with the Naval Electronic Systems Command.

NOTES

1 Although our immediate aim is to construct a theory of natural-language processing rather than truth-conditional semantics, it is worth noting that a system of logical form with a well-defined semantics constitutes a bridge between the two projects. If we have a processing theory that associates English sentences with their logical forms, and if those logical forms have a truth-conditional semantics, then we will have specified the semantics of the English sentences as well.

2 In other papers (e.g., [Montague, 1974b]), Montague himself uses an intensional logic in exactly the role we propose for logical form--and for much the same reason: "We could ... introduce the semantics of our fragment [of English] directly; but it is probably more perspicuous to proceed indirectly by (1) setting up a certain simple artificial language, that of tensed intensional logic, (2) giving the semantics of that language, and (3) interpreting English indirectly by showing in a rigorous way how to translate it into the artificial language. This is the procedure we shall adopt..." [Montague, 1974b, p. 256].

3 The DIALOGIC system does build such a representation, or at least components of one, as an intermediate step in deriving the logical form of a sentence.

4 This suggests that our logical forms are representations of what David Kaplan, in his famous unpublished paper on demonstratives [Kaplan, 1977], calls the content of a sentence, as opposed to its character. Kaplan introduces the content/character distinction to sort out puzzles connected with the use of demonstratives and indexicals. He notes that there are at least two different notions of "the meaning of a sentence" that conflict when indexical expressions are used. If A says to B, "I am hungry," and B says to A, "I am hungry," they have used the same words, but in one sense they mean different things. After all, it may be the case that what A said is true and what B said is false. If A says to B, "I am hungry," and B says to A, "You are hungry," they have used different words, but mean the same thing, that A is hungry. This notion of "meaning different things" or "meaning the same thing" is one kind of meaning, which Kaplan calls "content."
There is another sense, though, in which A and B both use the words "I am hungry" with the same meaning, namely, that the same rules apply to determine, in context, what content is expressed. For this notion of meaning, Kaplan uses the term "character." Kaplan's notion, therefore, is that the rules of the language determine the character of a sentence--which, in turn, together with the context of utterance, determines the content. If we broaden the scope of Kaplan's theory to include the local pragmatic indeterminacies we have discussed, it seems that the way they depend on context would also be part of the character of a sentence and that our logical form is thus a representation of the content of the sentence-in-context.

5 It should be obvious from the example that nouns referring to units of measure--e.g., "feet"--are an exception to the general rule. We treat types of quantities, such as distance, weight, volume, time duration, etc., as basic conceptual categories. Following Hayes [1979], units such as feet, pounds, gallons, and hours are considered to be functions from numbers to quantities. Thus (FEET 3) and (YARDS 1) denote the same distance. Relations like length, weight, size, and duration hold between an entity and a quantity of an appropriate type. Where a word like "weight" serves in English to refer to both the relation and the quantity, we must be careful to distinguish between them. To see the distinction, note that length, beam, and draft are all relations between a ship and a quantity of the same type, distance. We treat comparatives like "greater than" as multidomain relations, working with any two quantities of the same type (or with pure numbers, for that matter).

6 Hendrix [1973], Rieger [1975], Hayes [1978], and McDermott [1981] have all dealt with continuous processes to some extent, but none of them has considered specifically how language expresses information about processes.

7 This point was impressed upon me by Pat Hayes.

REFERENCES

Barwise, J. and R. Cooper [1981] "Generalized Quantifiers and Natural Language," Linguistics and Philosophy, Vol. 4, No. 2, pp. 159-219 (1981).

Clark, H. and C. Marshall [1981] "Definite Reference and Mutual Knowledge," in Elements of Discourse Understanding: Proceedings of a Workshop on Computational Aspects of Linguistic Structure and Discourse Setting, A. K. Joshi, I. A. Sag, and B. L. Webber, eds. (Cambridge University Press, Cambridge, England, 1981).

Cohen, P. and C. R. Perrault [1981] "Inaccurate Reference," in Elements of Discourse Understanding: Proceedings of a Workshop on Computational Aspects of Linguistic Structure and Discourse Setting, A. K. Joshi, I. A. Sag, and B. L. Webber, eds. (Cambridge University Press, Cambridge, England, 1981).

Davidson, D. [1967] "The Logical Form of Action Sentences," in The Logic of Decision and Action, N. Rescher, ed., pp. 81-95 (University of Pittsburgh Press, Pittsburgh, Pennsylvania, 1967).

Hayes, P. J. [1978] "Naive Physics: Ontology of Liquids," Working Papers, Institute of Semantic and Cognitive Studies, Geneva, Switzerland (August 1978).

Hayes, P. J. [1979] "The Naive Physics Manifesto," in Expert Systems in the Micro-electronic Age, D. Michie, ed., pp. 242-270 (Edinburgh University Press, Edinburgh, Scotland, 1979).

Hendrix, G. [1973] "Modeling Simultaneous Actions and Continuous Processes," Artificial Intelligence, Vol. 4, Nos. 3-4, pp. 145-180 (Winter 1973).

Herskovits, A.
[ 1980] "On the Spatial Uses of Prepositions," in Proceedlnss of the 18th Annual Meecln~ of the Association for Computational 124 Linaulsclcs , Universlcy of Pennsylvania, Philadelphia, Pennsylvania, pp. i-5 (19-22 June 1980). Kaplan, D. [1977] "DemonsCratlves, An Essay on the SemonCics, Logic, HeCaphysics and EpisCemology of DemonsCratlves and OCher Indexlcals," unpublished manuscrlpc (March 1977). McCawley, J. D. [1981] Everything chac Llnsuiscs Have AlwaTs Wanted co Know AbouC~bu...~CWere Ashamed to Ask (UnlverslCy of Chicago Press, Chicago, Illinois, 1981). MoDermocc, D. V. [1981] "A Temporal Logic for Reasoning about Processes and Plans," keearch Keporc 196, Yale University, Department of CompuCer Science, New Haven, Connecticut (March 1981). Moncague, R. [1974a] "English as a Formal Language," in Formal Philosophy, Selected Papers of Richard MoncaSue, R. H. Thomason, ed., pp. 18~21 ('-~al~ University Press, New Haven, Connecticut, and London, England, 1974). Moncague, R. [1974b] 'The Proper Tree--nO of quanclficaclon in Ordinary English," in Formal Philosophy, Selected Papers of Richard Moncasue . R. H. Thomaaon, ed., pp. 188-22i (Yale Unlversicy Press, New Haven, ConnecclcuC, and London, England, 1974). Moore, R. C. [1980] "Rmaeon£ng About Knowledge and Action," Artificial Intelligence CanCer Technical Note 191, SRI International, Menlo Park, Califor~La (October 1980). Heather, N. and A. Urquharc, [1971] Temporal Losic (Springer-Verlag, Vienna, Austria, 1971). Rieger, C. [1975] "The Coumonseuse AlgorlCha as a Basis for Computer Models of Human MemorT, Inference, Belief and Contextual Language Comprehension," in Proceedln~s, Theoreclcal Issues in Natural Language Processing, Cambridge, Massachusetts, pp. 180-195 (LO-13 June 1975). Simon, H. A. [1969] The Sciences of the Artificial (The HIT Press, Cambridge, MassJ":huxCCs, 1969). Webber, B. L. [1978] "A Formal Approach co Discourse Anaphora," Haporc No. 3761, Bole hranek and Newman, Inc., Cambridge, Massachusetts (May 1978).
A CASE FOR RULE-DRIVEN SEMANTIC PROCESSING

Martha Palmer
Department of Computer and Information Science
University of Pennsylvania

0.0 INTRODUCTION

The primary task of semantic processing is to provide an appropriate mapping between the syntactic constituents of a parsed sentence and the arguments of the semantic predicates implied by the verb. This is known as the Alignment Problem. [Levin] Section One of this paper gives an overview of a generally accepted approach to semantic processing that goes through several levels of representation to achieve this mapping. Although somewhat inflexible and cumbersome, the different levels succeed in preserving the context-sensitive information provided by verb semantics. Section Two presents the author's rule-driven approach, which is more uniform and flexible yet still accommodates context-sensitive constraints. This approach is based on general underlying principles for syntactic methods of introducing semantic arguments and has interesting implications for linguistic theories about case. These implications are discussed in Section Three. A system that implements this approach has been designed for and tested on pulley problem statements gathered from several physics textbooks. [Palmer]

1.0 MULTI-STAGE SEMANTIC ANALYSIS

A popular approach [Woods], [Simmons], [Novak] for assigning semantic roles to syntactic constituents can be described with three levels of representation - a schema level, a canonical level, and a predicate level. These levels are used to bridge the gap between the surface syntactic representation and the "deep" conceptual representation necessary for communicating with the internal database. While the following description of these levels may not correspond to any one implementation in particular, it will give the flavor of the overall approach.

1.1 Schema Level

The first level corresponds to the possible surface order configurations a verb can appear in. In a domain of equilibrium problems the sentence "A rope supports one end of a scaffold." could match a schema like "<physobj> SUPPORTS <locpart> OF <physobj>". The word ordering here implies that the first <physobj> is the SUBJ and the <locpart> is the OBJ. Other likely schemas for sentences involving the SUPPORT verbs are "<physobj> SUPPORTS <physobj> AT <locpart>," "<physobj> SUPPORTS <force>," "<physobj> IS SUPPORTED," and "<locpart> IS SUPPORTED." [Novak] Once a particular sentence has matched a schema, it is useful to rephrase the information in a more "canonical" form, so that a single set of inference rules can apply to a group of schemas.

1.2 Canonical Level

This intermediate level of representation usually consists of the verb itself (or perhaps a more primitive semantic predicate chosen to represent the verb) and a list of possible roles, e.g., arguments to the predicate. These roles correspond loosely to a union of the various semantic types indicated in the schemas. The schemas above could all easily map into: SUPPORTS(<physobj>1, <physobj>2, <locpart>, <force>). The "canonical" verb representation found at this level bears certain similarities to a standard verb case frame [Simmons, Bruce] in the roles played by the arguments to that predicate. There has been some controversy over whether or not any benefits are gained by labeling these arguments "cases" and attempting to apply linguistic generalities about case. [Fillmore] The possible benefits do not seem to have been realized, with a resulting shift away from explicit ties to case in recent work. [Charniak], [Wilks]
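To make the schema level concrete, here is a minimal sketch of what matching a surface schema against a sentence might look like. The type lexicon, the matching procedure, and all names below are invented for illustration; they are not the implementation described in this paper.

    # Sketch of schema-level matching: a schema is a sequence of semantic types and
    # literal verb/preposition markers; a sentence is reduced to the same vocabulary.
    TYPES = {"rope": "physobj", "scaffold": "physobj", "end": "locpart",
             "supports": "SUPPORTS", "of": "OF",
             "a": None, "one": None, "the": None}       # determiners carry no type

    SCHEMA = ["physobj", "SUPPORTS", "locpart", "OF", "physobj"]

    def matches(schema, words):
        content = [TYPES.get(w) for w in words if TYPES.get(w)]
        return content == schema

    print(matches(SCHEMA, "a rope supports one end of the scaffold".split()))  # True
    print(matches(SCHEMA, "a rope supports the scaffold".split()))             # False

The weakness the paper goes on to describe is visible even in this toy version: every surface configuration needs its own stored pattern, and an unanticipated configuration simply fails to match.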
1.3 Predicate Level

However, the implied relationships between the arguments still have to be spelled out, and this is the function of our third and final level of representation. This level necessarily makes use of predicates that can be found in the database, and for the purposes of the program is effectively a "deep" semantic representation. A verb such as SUPPORT would require several predicates in an equilibrium domain. For example, the "scaffold" sentence above could result in the following list, corresponding to the general predicates listed immediately below.

"Scaffold" Example
SUPPORT(rope,scaffold)
UP(F1,rope)
DOWN(F2,scaffold)
CONTACT(rope,scaffold)
LOCPT(rtend1,rope)
LOCPT(rtend2,scaffold)
SAMEPLACE(rtend1,rtend2)

General Predicates
SUPPORT(<physobj>1,<physobj>2)
UP(<force>1,<physobj>1)
DOWN(<force>2,<physobj>2)
CONTACT(<physobj>1,<physobj>2)
LOCPT(<locpart>1,<physobj>1)
LOCPT(<locpart>2,<physobj>2)
SAMEPLACE(<locpart>1,<locpart>2)

Producing the above list requires common sense deductions [Bundy] about the existence of objects filling arguments that do not correspond directly to the canonical arguments, i.e., the two <locpt>s, and any arguments that were missing from the explicit sentence. For instance, in our scaffold example, no <force> was mentioned, and must be inferred. The usefulness of the canonical form is illustrated here, as it prevents tedious duplication of inference rules for slightly varying schemas. The relevant information from the sentence has now been expressed in a form compatible with some internal database. The goal of this semantic analysis has been to provide a mapping between the original syntactic constituents and the predicate arguments in the final representation. For our scaffold example the following mapping has been achieved. The filling in of gaps in the final representation, although motivated by the needs of the database, also serves to test and expand the mapping of the syntactic constituents.

SUBJ <- rope        <physobj>1
OBJ  <- end         <locpart>2
OFPP <- scaffold    <physobj>2

An obvious question at this point is whether or not the mappings from syntactic constituents to predicate arguments can be achieved directly, since the above multi-stage approach has at least three major disadvantages:

1) It is tedious for the programmer to produce the original schemas, and the resulting amount of special purpose code is cumbersome. It is difficult for the programmer to guarantee that all schemas have been accounted for.

2) This type of system is not very robust. A schema that has been left out simply cannot be matched, no matter how much it has in common with stored schemas.

3) Because of the inflexibility of the system it is frequently desirable to add new information. Adding just one schema, much less an entire verb, can be time consuming. How much of a hindrance this will be is dependent on the extent to which the semantic information has been embedded in the code. The LUNAR project's use of a meaning representation language greatly increased the efficiency of adding new information.

The following section presents a system that uses syntactic cues at the semantic predicate level to find mappings directly. This method has interesting implications for theories about cases.

2.0 RULE-DRIVEN SEMANTIC ANALYSIS

This section presents a system for semantic processing that maps syntactic constituents directly onto the arguments of the semantic predicates suggested by the verb.
In order to make these assignments, the possible syntactic mappings must be associated with each argument place in the original semantic predicates. For instance, the only possible syntactic constituent that can be assigned to the <physobj>1 place of a SUPPORT predicate is the SUBJ, and a <physobj>2 can only be filled by an OBJ. But a <locpart> might be an OBJ or the object of an AT preposition, as in "The scaffold is supported at one end." (The scaffold in this example is the syntactic subject of a passive sentence, so it is also considered the logical object. For our purposes we will look on it as an OBJ.) It might seem at first glance that we would want to allow our <physobj>2 to be the object of an OF preposition, as in "The rope supports one end of the scaffold." But that is only true if the OFPP follows something like a <locpart> which can be an OBJ in a sentence about SUPPORT. (Of course, just any OFPP will not supply a <physobj>2. In "The rope supports the end of greatest weight.", the object of the OFPP is not a <physobj> so could not satisfy <physobj>2. The <physobj>2 in this case must be provided by the previous context.) It is this very dependency on the existence of other specific types of syntactic constituents that was captured by the schemas mentioned above. It is necessary for an alternative system to also handle context sensitive constraints.

2.1 Decision Trees

The three levels of representation mentioned in Section One can be viewed as the bottom, middle and top of a tree:

Top (predicate level):
  SUPPORT(p1,p2) /\ CONTACT(p1,p2) /\ LOCPT(lpt1,p1) /\ LOCPT(lpt2,p2)

Middle (canonical level):
  SUPPORT(p1, p2, lpt, force)

Bottom (schema level):
  SUBJ                 OBJ           OFPP
  <physobj> SUPPORTS <locpart> OF <physobj>
  "The rope supports one end of the scaffold."

The inference rules that link the three levels deal mainly with any necessary renaming of the role an argument plays. The SUBJ of the schema level is renamed <physobj>1 or p1 at the canonical level, and is still p1 at the predicate level. One way of viewing the schemas is as leaf nodes produced by a decision tree that starts at the predicate level. The levels of the tree correspond to the different syntactic constituents that can map onto the arguments of the original set of predicates. Since more than one argument can be renamed as a particular syntactic constituent, there can be more than one branch at each level. If a semantic argument might not be mentioned explicitly in the syntactic configuration, this also has to be expressed as a rule, e.g., p1 -> NULL. (Ex. "The scaffold is supported.") When all of the branches have been taken, each terminal node represents the set of decisions corresponding to a particular schema. (See Appendix A.) Note that the canonical level never has to be expressed explicitly. By working top down instead of bottom up, unnecessary duplication of inference rules is automatically avoided. The information in the original three levels can be stored equivalently as the top node of the decision tree along with the renaming rules for the semantic arguments (rewrite rules). This would reverse the order of analysis from the bottom-up mode suggested in Section One to a top-down mode. This uses a more compact representation, but would be computationally less efficient. Growing the entire decision tree every time a sentence needed to be matched would be quite cumbersome. However, if only the path to the correct terminal node needed to be generated, this approach would be computationally competitive.
By ordering the decisions according to syntactic precedence, and by using the data from the sentence in question to prune the tree WHILE it is being generated, the correct decisions can usually be made, with the only path explored being the path to the correct schema.

2.2 Context Sensitive Constraints

Context sensitivity can be preserved by only allowing the p2 -> OFPP rule to apply after a mapping for lpt1 has been found, evidence that an lpt1 -> OBJ rule could have already applied. To test whether such a mapping has been made given a LOCPT predicate, it is only necessary to see if the lpt1 argument has been renamed by a syntactic constituent. The renaming process can be thought of as an instantiation of typed variables - the semantic arguments by syntactic constituents. [Palmer, Gallier, and Weiner] Then the following preconditions must be satisfied before applying the p2 -> OFPP rule ( /\ stands for AND):

p2 -> OFPP / LOCPT(lpt1,p2) /\ not(variable(lpt1))

These preconditions will still need to be satisfied when a LOCPT predicate is part of another verb representation. Any time a <locpart> is mentioned it can be followed by an OFPP introducing the <physobj> of which it is a location part. This relationship between a <locpart> and a <physobj> is just as valid when the verb is "hang" or "connect." Ex. "The pulley is connected to the right end of the string." "The particle is hung from the right end of the string." These particular constraints are general to the domain rather than being restricted to "support." This illustrates the efficiency of associating constraints with semantic predicates rather than verbs, allowing for more advantage to be taken of generalities. There is an obvious resemblance here to the notation used for Local Constraints grammars [Joshi and Levy]:

p2 -> OFPP / DOM(LOCPT) /\ LMS(lpt1) /\ not(var(lpt1))
DOM - DOMinate, LMS - Left Most Sister

It can be demonstrated that the context sensitive constraints presented here are a simple special case of their Local Constraints, since the dominating node is limited to being the immediate predicate head. Whether or not such a restricted local context will prove sufficient for more complex domains remains to be proven.

2.3 Overview

As illustrated above, our mappings from syntactic constituents to semantic arguments can be found directly, thus gaining flexibility and uniformity without losing context sensitivity. Once the verb has been recognized, the semantic predicates representing the verb can drive the selection of renaming rules directly, avoiding the necessity of an intermediate level of representation. The contextual dependencies originally captured by the schemas are preserved in preconditions that are associated with the application of the renaming rules. Since the renaming rules and the preconditions refer only to semantic predicates and arguments to the predicates, there is a sense in which they are independent of individual verbs. By applying only those rules that are relevant to the sentence in question, the correct mappings can be found quickly and efficiently. The resulting system is highly flexible, since the same predicates are used in the representation of all the verbs, and many of the preconditions are general to the domain. This facilitates the addition of similar verbs, since most of the necessary semantic predicates with the appropriate renaming rules will already be present.
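The direct mapping with a context-sensitive precondition can be sketched in a few lines. The encoding below (tuples for predicates, a dictionary of bindings, the ofpp_ok test) is ours, invented for illustration; the system described in the paper was implemented in Prolog, not in this form.

    # Sketch: mapping constituents directly onto the arguments of the "support"
    # predicates, with a context-sensitive precondition on the p2 -> OFPP rule.
    predicates = [("SUPPORT", "p1", "p2"), ("CONTACT", "p1", "p2"),
                  ("LOCPT", "lpt1", "p1"), ("LOCPT", "lpt2", "p2")]

    bindings = {}                      # semantic argument -> syntactic constituent

    def instantiated(arg):
        return arg in bindings

    def ofpp_ok():
        # p2 -> OFPP only if some LOCPT(lpt, p2) exists whose lpt is already bound
        return any(pred == "LOCPT" and b == "p2" and instantiated(a)
                   for (pred, a, b) in predicates)

    # "The rope supports one end of the scaffold."
    bindings["p1"] = "SUBJ:rope"            # p1 -> SUBJ
    bindings["lpt2"] = "OBJ:end"            # lpt2 -> OBJ
    if ofpp_ok():
        bindings["p2"] = "OFPP:scaffold"    # p2 -> OFPP, licensed by the bound lpt2

    print(bindings)
    # {'p1': 'SUBJ:rope', 'lpt2': 'OBJ:end', 'p2': 'OFPP:scaffold'}

For "The rope supports the end of greatest weight," the OFPP would fail the type test, no binding for p2 would be made, and the missing argument would have to be supplied by inference from context, as the example in the next section illustrates.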
3.0 THE ROLE OF CASE INFORMATION

Although the canonical level has often been viewed as the case frame level, doing away with the canonical level does not necessarily imply that cases are no longer relevant to semantic processing. On the contrary, the importance here of syntactic cues for introducing semantic arguments places even more emphasis on the traditional notion of case. The suggestion is that the appropriate level for case information is in fact the predicate level, and that most traditional cases should be seen as arguments to clearly defined semantic predicates. These predicates are not merely the simple set of flat predicates indicated in the previous sections. There is an implicit structuring to that set of predicates, indicated by the implications holding between them. A SUPPORT relationship implies the existence of UP and DOWN forces and a CONTACT relationship. A CONTACT relationship implies the existence of LOCPTs and a SAMEPLACE relationship between them. The set of predicates describing "support" can be produced by expanding the implications of the SUPPORT(p1,p2) predicate into UP(f1,p1) and DOWN(f2,p2) and CONTACT(p1,p2). CONTACT(p1,p2) is in turn expanded into LOCPT(lpt1,p1) and LOCPT(lpt2,p2) and SAMEPLACE(lpt1,lpt2). These definitions, or expansions, are represented as the following rewrite rules:

support <-> SUPPORT(p1,p2)
SUPPORT(p1,p2) <-> UP(f1,p1) /\ DOWN(f2,p2) /\ CONTACT(p1,p2)
CONTACT(p1,p2) <-> LOCPT(lpt1,p1) /\ LOCPT(lpt2,p2) /\ SAMEPLACE(lpt1,lpt2)

When "support" has been recognized as the verb, these rules can be applied to build up the set of semantic predicates needed to represent support. If there were expansions for UP and DOWN they could be applied as well. As the rules are being applied, the mappings of syntactic constituents to predicate arguments can be made at the same time, as each argument is introduced. The case information is not merely the set of semantic predicates or just the SUPPORT(p1,p2) predicate alone. Rather, the case information is represented by the set of predicates, the dependencies indicated by the expansions for the predicates, and the renaming rules that are needed to find the appropriate mappings. The renaming rules correspond to the traditional syntactic cues for introducing particular cases. They are further restricted by being associated with the predicate context of an argument rather than the argument in isolation. When this structured case information is used to drive semantic processing, it is not a passive frame that waits for its slots to be filled, but rather an active structure that goes in search of fillers for its arguments. If these instantiations are not indicated explicitly by syntax, they must be inferred from a world model. The following example illustrates how the active case structure can also supply cases not mentioned explicitly in the sentence.

3.1 Example

Given a pair of sentences like "Two men are lifting a dresser. A rope supports the end of greatest weight." we will assume that the first sentence has already been processed. Having recognized that the verb of the second sentence is "support," the appropriate expansion can be applied to produce:

SUPPORT(rope,p2)

This would in turn be expanded to:

UP(f1,rope)
DOWN(f2,p2)
CONTACT(rope,p2)

In expanding the CONTACT relationship, an lpt1 for "rope" and a p2 for "end" need to be found.
(See Section Two.) Since the sentence does not supply an ATPP that might introduce an lpt1 for the "rope," and since there are no more expansions that can be applied, a plausible inference must be made. The lpt1 is likely to be an endpoint that is not already in contact with something else. This implicit object corresponding to the free end of the rope can be named "ropend2." The p2 is more difficult. The OFPP does not introduce a <physobj>, although it does specify the "end" more precisely. The "end" must first be recognized as belonging to the dresser, and then as being its heaviest end, "dresserend2." This is really an anaphora problem that cannot be decided by the verb, and could in fact have already been handled. Given "dresserend2," it only remains for the "dresser" to be inferred as the p2 of the LOCPT relationship, using the same principles that allow an OFPP to introduce a p2. The final set of predicates would be

SUPPORT(rope,dresser)
  UP(f1,rope)
  DOWN(f2,dresser)
  CONTACT(rope,dresser)
    LOCPT(ropend2,rope)
    LOCPT(dresserend2,dresser)
    SAMEPLACE(ropend2,dresserend2)

Both "ropend2" and "dresser" were supplied by plausible reasoning using the context and a world model. There are always many inferences that can be drawn when processing a single sentence. The detailed nature of the case structure presented above gives one method of regulating this inferencing.

3.2 Associations with Linguistics

A recent trend in linguistics to consider cases as arguments to thematic relations offers a surprising amount of support for this position. Without denying the extremely useful ties between syntactic constituents and semantic cases, Jackendoff questions the ability of case to capture complex semantic relationships. [Jackendoff] His main objection is that standard case theory does not allow a noun phrase to be assigned more than one case. In examples like "Esau traded his birthright (to Jacob) for a mess of pottage," Jackendoff sees two related actions: "The first is the change of hands of the birthright from Esau to Jacob. The direct object is Theme, the subject is Source, and the to-object is Goal. Also there is what I will call the secondary action, the changing of hands of the mess of pottage in the other direction. In this action, the for-phrase is Secondary Theme, the subject is Secondary Goal, and the to-phrase is Secondary Source." [p.35] This, of course, could not be captured by a Fillmore-like case frame. Jackendoff concludes that, "A theory of case grammar in which each noun phrase has exactly one semantic function in deep structure cannot provide deep structures which satisfy the strong Katz-Postal Hypothesis, that is, which provide all semantic information about the sentence." Jackendoff is not completely discarding case information, but rather suggesting a new level of semantic representation that tries to incorporate some of the advantages of case. Making constructive use of Gruber's system of thematic relationships [Gruber], Jackendoff postulates: "The thematic relations can now be defined in terms of [these] semantic subfunctions. Agent is the argument of CAUSE that is an individual; Theme is the argument of CHANGE that is an individual; Source and Goal are the initial and final state arguments of CHANGE. Location will be defined in terms of a further semantic function BE that takes an individual (the Theme) and a state (the Location)."
[p.39] Indeed, Jackendoff is one example of a trend noted by Janet Fodor. She points out that "it may be more revealing to regard the noun phrases which are associated in a variety of case relations with the LEXICAL verb as the arguments of the primitive SEMANTIC predicates into which it is analyzed. These semantic predicates typically have very few arguments, perhaps three at the most, but there are a lot of them and hence there will be a lot of distinguishable 'case categories.' (Those which Fillmore has identified appear to be those associated with semantic components that are particularly frequent or prominent, such as CAUSE, USE, BECOME, AT.)" [p.93] Fodor summarizes with, "As a contribution to semantics, therefore, it seems best to regard Fillmore's analyses as merely stepping stones on the way to a more complete specification of the meanings of verbs." The one loose end in this neat summation of case is its relation to syntax. Fodor continues, "Whether there are any SYNTACTIC properties of case categories that Fillmore's theory predicts but which are missed by the semantic approach is another question...." It is the thesis of this paper that these syntactic properties of case categories are the very cues that are used to drive the filling of semantic arguments by syntactic constituents. This system also allows the same syntactic constituent to fill more than one argument, e.g., case category. The following section presents further evidence that this system could have direct implications for linguistic theories about case. Although it may at first seem that the analysis of the INSTRUMENT case contradicts certain assumptions that have been made, it actually serves to preserve a useful distinction between marked and unmarked INSTRUMENTS.

3.3 The INSTRUMENT Case

The cases necessary for "support" were all accommodated as arguments to semantic primitives. This does not imply, however, that cases can never play a more important role in the semantic representation. It is possible for a case to have its own expansion which contains information about how semantic predicates should be structured. There is quite convincing evidence in the pulley domain for the influential effect of one particular case. In this domain INSTRUMENTS are essentially "intermediaries" in "hang" and "connect" relationships. An <inter>mediary is a flexible line segment that effects a LOCATION or CONTACT relationship, respectively, between two physical objects. Example sentences are "A particle is hung by a string from a pulley," and "A particle is connected to another particle by a string." The following rewrite rules are the expansions for the "hang" and "connect" verbs, where the EFFECT predicate will have its own expansion corresponding to the definition of an intermediary.

hang <-> EFFECT(inter, LOCATION(p1,loc))
connect <-> EFFECT(inter, CONTACT(p1,p2))

Application of these rules respectively results in the following representations for the example sentences:

EFFECT(string, LOCATION(particle1,pulley1))
EFFECT(string, CONTACT(particle1,particle2))

The expansion of EFFECT itself is:

EFFECT(inter, REL(arg1,arg2)) <-> REL(arg1,inter) /\ REL(inter,arg2)

where REL stands for any semantic predicate. The application of this expansion to the above representations results in:

LOCATION(particle1,string)
LOCATION(string,pulley1)

and

CONTACT(particle1,string)
CONTACT(string,particle2)

These predicates can then be expanded, with LOCATION bringing in SUPPORT and CONTACT, and CONTACT bringing in LOCPT.
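The EFFECT expansion is simple enough to state directly as a function. The sketch below is an illustration under an invented tuple encoding of the logical forms; it implements exactly the rewrite rule just given, not the Prolog implementation itself.

    # Sketch of the EFFECT expansion: the intermediary relates each of the two
    # arguments of the embedded relation to itself.
    def expand_effect(form):
        # form = ("EFFECT", inter, (REL, arg1, arg2))
        tag, inter, (rel, arg1, arg2) = form
        assert tag == "EFFECT"
        return [(rel, arg1, inter), (rel, inter, arg2)]

    # "A particle is hung by a string from a pulley."
    print(expand_effect(("EFFECT", "string", ("LOCATION", "particle1", "pulley1"))))
    # [('LOCATION', 'particle1', 'string'), ('LOCATION', 'string', 'pulley1')]

    # "A particle is connected to another particle by a string."
    print(expand_effect(("EFFECT", "string", ("CONTACT", "particle1", "particle2"))))
    # [('CONTACT', 'particle1', 'string'), ('CONTACT', 'string', 'particle2')]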
3.4 Possible Implications

There seems to be a direct connection between the previous expansion of intermediary and the analysis of the INSTRUMENT case done by Beth Levin at MIT. [Levin] She pointed out a distinct difference in the use of the same INSTRUMENT in the following two sentences: "John cut his foot with a rock." "John cut his foot on a rock." In the first sentence there is an implication that John was in some way "controlling" the cutting of his foot, and using the rock to do so. In the second sentence there is no such implication, and John probably cut his foot accidentally. The use of the "with" preposition marks the rock as an INSTRUMENT that is being manipulated by John, whereas "on" introduces an unmarked INSTRUMENT with no implied relationship to John. It would seem that something like the expansion for EFFECT could help to capture part of what is being implied by the "control" relationship. Bringing in the transitivity relationship makes explicit a connection between John and the rock as well as between the foot and the rock. In the second sentence only the connection between the foot and the rock is implied. The connection implied here is certainly more complicated than a simple CONTACT relationship, and would necessitate a more detailed understanding of "cut." But the suggestion of "control" is at least indicated by the embedding of the CUT predicate within EFFECT and CAUSE:

CAUSE(John, EFFECT(rock, CUT(foot-of-John)))

The tie between the AGENT and the INSTRUMENT is another implication of "control" that should be explored. That the distinction between marked and unmarked INSTRUMENTS can be captured by the EFFECT relationship is illustrated by the processing of the following two sentences: "The particle is hung from a pulley by a string." "The particle is hung on a string." In the first sentence an "inter" (a marked INSTRUMENT) is supplied by the BYPP, and the following representation is produced:

EFFECT(string, LOCATION(particle,pulley))

In the second sentence no "inter" is found, and in the absence of an "inter" the EFFECT relationship cannot be expanded. The LOCATION(particle,string) predicate is left to stand alone and is in turn expanded. (The ONPP can indicate a "loc.") The intriguing possibility of verb-independent definitions for cases requires much more exploration. [Charniak] The suggestion here is that a deeper level of representation, the predicate level, is appropriate for investigating case implications, and that important cases like AGENTS and INSTRUMENTS have implications for meta-level structuring of those predicates.

3.5 Summary

In summary, there is a surprising amount of information at the semantic predicate level that allows syntactic constituents to be mapped directly onto semantic arguments. This results in a semantic processor that has the advantage of being easy to build and more flexible than existing processors. It also brings to light substantial evidence that cases should not be discarded but should be reexamined with respect to the roles they play as arguments to semantic predicates. The INTERMEDIARY case is seen to play a particularly important role having to do not with any particular semantic predicate, but with the choice of semantic predicates in general.

References

[1] Bruce, B., Case systems for natural language, Artificial Intelligence, Vol. 6, No. 4, Winter, pp. 327-360.
[2] Bundy, et al., Solving Mechanics Problems Using Meta-Level Inference, Expert Systems in the Micro-Electronic Age, Michie, D. (ed.), Edinburgh University Press, Edinburgh, U.K., 1979.

[3] Charniak, E., A brief on case, Working Paper No. 22 (Castagnola: Institute for Semantics and Cognitive Studies), 1975.

[4] Fillmore, C., The case for case, Universals in Linguistic Theory, Bach and Harms (eds.), New York: Holt, Rinehart and Winston, pp. 1-88.

[5] Fodor, Janet D., Semantics: Theories of Meaning in Generative Grammar, Language and Thought Series, Thomas Y. Crowell Co., Inc., 1977, p. 93.

[6] Gruber, J. S., Lexical Structures in Syntax and Semantics, North-Holland Pub. Co., 1976.

[7] Jackendoff, R. S., Semantic Interpretation in Generative Grammar, MIT Press, Cambridge, MA, 1972, p. 39.

[8] Levin, B., "Instrumental With and the Control Relation in English," MIT Master's Thesis, 1979.

[9] Novak, G. S., Computer Understanding of Physics Problems Stated in Natural Language, American Journal of Computational Linguistics, Microfiche 53, 1976.

[10] Palmer, M., Where to Connect? Solving Problems in Semantics, DAI Working Paper No. 22, University of Edinburgh, July 1977.

[11] Palmer, M., "Driving Semantics for a Limited Domain," Ph.D. Thesis, forthcoming, University of Edinburgh.

[12] Palmer, M., Gallier, J., and Weiner, J., Implementations as Program Specifications: A Semantic Processor in Prolog (submitted to IJCAI, Vancouver, August 1981).

[13] Simmons, R. F., Semantic Networks: Their Computation and Use for Understanding English Sentences, Computer Models of Thought and Language, Schank and Colby (eds.), San Francisco: W. H. Freeman and Co., 1973.

[14] Wilks, Y., Processing Case, American Journal of Computational Linguistics, 1976.

[15] Woods, W. A., Semantics and Quantification in Natural Language Question Answering, BBN Report 3687, Cambridge, Mass., November 1977.

APPENDIX A

Decision tree for "support" (each branch renames one semantic argument; each terminal node corresponds to a schema):

Root:
  SUPPORT(p1,p2) /\ CONTACT(p1,p2) /\ LOCPT(lpt1,p1) /\ LOCPT(lpt2,p2)

p1 -> SUBJ:
  SUPPORT(SUBJ,p2) /\ CONTACT(SUBJ,p2) /\ LOCPT(lpt1,SUBJ) /\ LOCPT(lpt2,p2)

  p2 -> OBJ:
    SUPPORT(SUBJ,OBJ) /\ CONTACT(SUBJ,OBJ) /\ LOCPT(lpt1,SUBJ) /\ LOCPT(lpt2,OBJ)
    lpt2 -> ATPP:
      SUPPORT(SUBJ,OBJ) /\ CONTACT(SUBJ,OBJ) /\ LOCPT(lpt1,SUBJ) /\ LOCPT(ATPP,OBJ)
      Schema (SUBJ, OBJ, ATPP): <physobj> SUPPORTS <physobj> AT <locpart>

  lpt2 -> OBJ:
    SUPPORT(SUBJ,p2) /\ CONTACT(SUBJ,p2) /\ LOCPT(lpt1,SUBJ) /\ LOCPT(OBJ,p2)
    p2 -> OFPP:
      SUPPORT(SUBJ,OFPP) /\ CONTACT(SUBJ,OFPP) /\ LOCPT(lpt1,SUBJ) /\ LOCPT(OBJ,OFPP)
      Schema (SUBJ, OBJ, OFPP): <physobj> SUPPORTS <locpart> OF <physobj>

p1 -> NULL:
  SUPPORT(p1,p2) /\ CONTACT(p1,p2) /\ LOCPT(lpt1,p1) /\ LOCPT(lpt2,p2)
  (further branches as above)
Corepresentational Grammar and Parsing English Comparatives Karen P#an University of )linnesota SEC. 1 INTRODUCTION SEC. 3 COREPRESENTATIONAL GRAMMAR (CORG) Marcus [3] notes that the syntax of English comparative constructions is highly complex, and claims that both syntactic end semantic information must be available for them to be parsed. This paper argues that comparatives can be structurally analyzed on the basis of syntactic information alone via a strictly surface-based grammar. Such a grammar is given in Ryan [5], based on the co- representational model of Kac Ill. While the grammar does not define a parsing algorithm per se, it nonethe- less expresses regularities of surface organization and its relationship to semantic interpretation that an ade- quate parser would be expected to incorporate. This paper will discuss four problem areas in the description of comparatives and will outline the sections of the grammar of [5] that apply to them. The central problem in parsing comparatives involves identifying the arguments of comparative predicates, and the relations borne by these arguments to such predi- cates. A corepresentational grammar is explicitly de- signed to assign predicate-argument structure to sen- tences on the basis of their surface syntactic organi- zation. SEC. 2 COMPARATIVE PREDICATES An initial assumption underlying the proposed analysis of, comparatives is that the comparative elements such as ~r~' faster, more spacious, are syntactically akin to icat-~, and thus that the principles applying to predicate-argument structure extend to them. Each com- parative element will accordingly have arguments (Subject and Object) assigned to it, and comparative predications will also be analyzed as being in relations of subordin- ation or superordination with other predications in the sentences in which they appear. For example, in (l) below, the comparative predicate richer will have both a simple NP Subject and a simple NP~t: (1) John knows doctors richer than Tom SUBJ ~" OBJ The referent of OBJ(richer), i.e. Tom, is to be inter- preted as the standar--d-o-~-compariso-n-against which the referen~ of doctors is Judged. The entire predication forms a term ~ i o n ('T') acting as OBJ(kn~ow), so that the whole relational analysis is as shown In (2). (2) John knows doctors richer than Tom I T suBJ T 0~J Pr/richer(T} su~J 0~J Because Pr/richer is included in an argument of another predicate ( ~ the former is in a relation subordinate to the latter. This analysis assumes three types of comparative predi- cates: adverbial, adjectival, and quantifier. Illustra- tions are given below: (3) Alice builds planes faster than robots fly them (4) John met people taller than Bob (5) Alice drank more beer than Helen The adverbial predicates are subcategorized as taking predicational arguments in both relations, and only such arguments; the other types can take nonpredicational arguments, though in some cases their Objects may be predicational. The grammar itself consists of two sets of principles. The first set consists of general constraints on sentence structure and applies as well to non-comparative con- structions. These principles are discussed in detail in [l] and [2] and will be presented here without justifi- cation. In addition there are a number of principles applying only to comparative constructions but non ad hoc in the sense that each can be applied toward the so- lution of a number of distinct problems of analysis. 
These principles are as follows: (6) Law of Correspondence Every NP or term in a sentence must be assigned a relational role. Ill (7) L~wof Uniqueness No two elements in a sentence may bear the same relation to a sinnle predicate unless they are coordinate or coreferential. Ill (8) Object Rule (OR) If P is an active transitive predicate~ OBJ(P) must be identified in such a way as to guarantee that as many segments thereof as possible occur to the right of P. Ill (g) ?~ulti-Predicate Constraint Every predicate in a sentence which contains more than one predicate must be in an ordination relation with some other predicate in that sentence.[4] (lO) Term Identification Principles a. Any predication with the internal structure OBJ-SUB-PREO may be analyzed as T. Any UP is a T. Any T satisfying either of these conditions is a SIMPLE TE~I. b. Any predication consisting solely of a compara- tive predicate with simple ~!P's as arguments is a T; such expressions will be called SIMPLE CO?IPARATIVE TE~.IS. All others will be COtlPLEX COMPARATIVE TE~IS. c. Any predication whose Subject occurs to the right of than, and whose predicate either occurs tot--E~-e left of than or occurs as SUBJ(do) where do itself occursto the right of than, is a T; s~h expressions will be called PRE-'DTCATE- CONTAIN~IG TERMS or PCT's. (ll) Comparative Object Rule The object of a comparative predicate is any term or predication satisfying the subcategorization of the predicate and which in- cludes some element occurin 0 immediately to the right of than. (12) Comparative-e-~ubject Rule The Subject of a compara- tive predicate must occur to the left of than. (13) Comparative Object Restriction The Object--o-? a nonadverbial comparative predicate must be a simple term unless the tiP occuring immediately to the right of than is SUBJ of a PCT; in that case, the OBJ of the non-adverbial comparative predicate must be a PC-term. These principles do not define a parsing algorithm per se; rather, they express certain surface true restric- tions which taken together and in concert with the gen- eral principles from Kac Zl ] and [2 ], define exactly the set of predicate argument structures assignable to a comparative construction. Since no particular analyt- ic procedure is associated with CORG, the assignment of particular analyses may be thought of either as a com- parison of complete potential relational analyses with the principles, whereby all potential analyses of the string not consistent with the grammar are discarded, or as a process of sequential assignments of partial analy- ses where each step is checked against the principles. The sequential method of analysis will be used here to present the operation of these principles; however, it is not a necessary adjunct to the grammar. 13 SEC. 4.0 STRUCTURE TYPES AND DESCRIPTIVE PROBLEMS There are three types of comparative predicates, already noted in section 2: adjectival, quantifier and adverbial. The differing subcategorization of these predicates does affect the possible analyses for a given sentence. Sev- eral other factors which influence the interpretation of the sentence are the position of the comparative predi- cate in the sentence, the degree of ellipsis in the than-phrase, and the subcategorization of surrounding p-~-~dicates. The effect of the type of predicate and the effect of the position of the predicate (in particular relative to than) will be considered separately in the following sect~o---"-ns. 
The effects of the degree of ellipsis in the ~than phrase and the subcategorization of surrounding predlcates will be considered together in section 4.3. It should be kept in mind however that all of these variables may act together in any combination to affect the type and number of interpretations a given sentence may have. SEC. 4.I SUBCATEGORI~.ATION AND PREDICATE TYPES The. effects of the type of comparative predicate on the interpretation can be noted in (3) and (4). The adverb- ial predicate faster in (3) takes predicational arguments only (ignoring f-T6"r"now the problem of lexical ambiguity) while the adjectival predicate taller takes non-predica- tional (.gP or Term) arguments. To see how these differences interact with the possible analyses which may be assigned, consider a complete analysis of (4). This analysis may begin with any ele- ment in the sentence. In most cases the assignment of the object of the comparative predicate, as the first step, will result in a more direct path to a complete analysis. Assume then, that Bob has been analyzed as O~(taller). This assignment-~atisfies the Comparative ObjecT~-uTe and is also consistent with the OR. (14) John met people taller than Bob. T Since neither met nor taller is a reflexive predicate, the Law of Unique'--'ness guarantees that Bob cannot be analyzed as OBJ (P), where P is any pr~-'Tcate (other than taller) as long as it is analyzed as OBJ(taller). Slnce t-TEe'F'~ are two non-reflexive predicates in this sentence (taller and m e_~.t), there are four remaininq re- lational ass-~g~ents whlch must be made before the analy- sis is complete. These are SUBJ(me_~.t), OBJ(met), SUBJ (taller) and some ordination relatlon betwee--n-the pred- icates met and taller. John or Either ~ people may be analyzed as SUBJ(taller) at this point since both satisfy the Comparative ~-~t Rule by occuring to the left of than. If John were assigned the relation SUBJ(taller-)--The analysis would violate some principles. A~for purposes of demon- stration, that John=SUBJ(taller). The relational analy- sis at this point would th--en be: (15) John met people taller than Bob SRBJ T o~J The remaining relational assignments would be OBJ(met), SUBJ(met) and some ordination relation for the two pred- icate~ The next apparently logical step would be to analyze people as O~j(me_~t). However, this will violate the OR, since it is possible to include mere than just the ;(P people as part of the OBJ(met). The OR requires that as many segments as possible-Eccuring to the rioht of a predicate be included in OBJ(P). The way to satis- fy this condition would be to analyze ~ as part of PR/taller. Then the OR would be satisfied by the maxi- mum number of elements (consistent with the grammar) which occur to the right of met. The only possible re- lation that people could bear to taller would be SUBJ (taller) sin~occurs to the l ~ than (see Com- parative Subject Rule). If it is analyzed as SUBJ(tal- • ler), then John can no longer be analyzed as SUBJ(talL ler). These steps would wive the following partial rela- tional representation: (16) John met people taller than Bob T SUBJ ~ OBJ PR/taller(T) OBj At this point in the analysis, the only relation which needs to be assigned still is SUBJ(met). The assignment of this relation to John is the only possible choice which violates no principle of the grammar and this as- signment would give a complete analysis. 
The analysis of (3) procedes along somewhat different lines due to the subcategorization of the adverbial comparative predicate faster, which requires predica- tional arguments. Thean~sis can begin as before by attempting to assign arguments to the comparative predi- cate faster. However, the first NP after than cannot be assigned to faster as OBJ since it is not a predicational arnument. The subcategorization of faster requires com- plete predications to be available b~arguments for it may be identified. Thus consider the other predi- cates, build and fly. Both are transitive predicates taking on--~simple HP's as arguments. The ~IP them must be analyzed as OBJ(fly) because of the OR. Th~mpar- ative OBJ Rule and ~ OR together will require robots to be analyzed as part of the PR/fly. Since robots occurs immediately to the right of than, it mus-Et-6"~in- cluded as part of the OBJ(faster) by--~Te Comparative OBJ Rule. The OR requires the"O-~J-~f any predicate to in- clude as many elements to the right of that predicate as possible. Therefore, if possible, fly and them must also be included as elements of OBJ~-?aster).----~ince faster is an adverbial predicate, itwl-'~TTT-allow a com- pe-l-eEe-predication (in fact requires) to be its object. Thus, all three of these aspects of the grammar work to- gether to force the string robots..fly..them to be anal- yzed as a predication PR/fly as shown below, with PR/fly analyzed as OBJ(faster)(as allowed by the Comparative OBJ Rule). (17) Alice builds planes faster than robots fly them T SUBJ OBj I" PR/flv OBJ At this point the arguments of build still need to be assigned and build and faster must be assigned some or- dination rela~ Sln~ter requires a complete predication for its subjec~ predication build must be built first. If any rip's other than AliceTplanes are used as arguments for builds, the anay--T'~s cou~ be completed. For example~obots were analyzed as OBJ(bullds) (as well as SUBJ(fly-]~-T, then either Alice or SlCOUld be analyzed as SUBJ(builds) completing d. (18) Alice builds planes faster than rgbo~s fly them SU~J I" "F OBa S~BJq" ~Bj PR/build PR/fly OBj PR/build could then be analyzed as SUBJ(faster) and all the necessary relations between arguments and predicates, and between predicates themselves(i.e, ordination rela- tions) would be assigned. However, the analysis would be ill-formed since one element, in this case lap_~, would be left unanalyzed in violation of the Law o? ~orrespon- dence. The only way this situation can be avoided, while at the same time not violating the OR or the Comparative Object Rule as discussed above for the OBJ(faster), would be to use only Alice and planes as arguments for builds. The OR would requlr~ that~.~ be analyzed as OB~ ~ (builds) leaving Alice to be analyzed as SUBJ(builds). This resulting pred--dT~'ation Pr/builds can then be anal- yzed as SUBJ(faster) completing the analysis with all rules in the grammar satisfied. (Ig) Alice bu~ds planes faster than robots fly them SU~V T OBj ~ SHR,/ "r' onj PR/builds SUBJ I P~/fIY OBJ 14 The most obvious differences between the analyses of (3) and (4) is in the types of arguments which the compara- tive predicates take and the ordination relations be- tween the predicates and the order in which the differ- ent predications were "built up". For (3), the argu- ments for the non-comparative predicates must be assigned first, before the arguments for the comparative predi- cate. 
This is required by the subcategorization of the adverbial predicate, which takes predicational arguments only. In this sentence, the non-comparative predicates are analyzed as subordinate to the comparative predicate. This too is a conseqence of the subcategorization of faster. For (4), the most efficient procedure for as--~ing relations (i.e. the one requiring the least backtracking) requires the arguments of the comparative predicate taller to be assigned first. In addition since the~egorization of this predicate allows only for non-predicational arguments, the comparative predicate is analyzed as subordinate to the non-compar- ative predicate in the sentence. Thus the type of com- parative predicate and its subcategorization affects the type of analysis provided by the grammar, and also the "optimal" order of relational assignments, when proce- dural aspects of the analysis are considered. SEC. 4.2 POSITION OF THE COMPARATIVE PREDICATE There are two aspects to the problem of the position of the comparative predicate: one involves the position of the SUBJ(COMP P) relative to than; the other involves the position of the entire comparative predication rela- tive to any other predicate in the string. SEC. 4.2.1 COORDI~IATE AND NON-COORDINATE ADVERBIAL COMPARATIVE CONSTRUCTIONS In some cases, the arguments of comparative predicates may be coordinate. This will always be the case for adverbial comparative predicates for which there is some ellipsis in the string as in (20) John builds planes faster than robots Here robots can be considered to be coordinate with either E-'~es or John, that is it can be interpreted as either t--h-e~-O'BJ(b~s) or as the OBd(builds). In non- adverbial comparative constructions, it will not always be the case that a single riP after than will be inter- preted as coordinate with some nother-"r-~TP. Consider the differences in possible interpretations between (4) and (21) (21) John met taller people than Bob (4) John met people taller than Bob For (4), there is only one possible interpretation, while there are two possible interpretations for (21). That is, in (21) Bob may simply be interpreted as OBJ(taller) correspond--dTng to the meaning of the sentence (22) John met people who are taller than Bob However, (21) has another interpretation in which Bob is interpreted as SUBJ(met). This case corresponds t~he interpretation of (23). (23) John met taller people than Bob did For this second interpretation, there are two subjects for me.__tt, i.e., John and Bob. This means that John and Bob must be forma---aITy def~d as coordinate arguments. l~-~'s formal definition is necessary since the Law of Uniqueness states that no two NP's may bear the same relation to a predicate (i.e. both be SUBJ(P i) unless they are coordinate or coreferentia1. Such a definition for rlP's such as John and Bob in (23) is not unreason- able since they bo--Eh--meet ~ basic requirements for coordinate elements. They are both interpretable as bearing the same relation to some Predicate Pi. The Comparative Object Restriction and a definition of coordinate comparative elements are required to precise- ly define the conditions under which two elements may be construed as coordinate in a comparative construction. The essence of the Coordinate Comparative Definition (not included here due to space considerations) is that any two elements may be coordinated by than if no non-adverbial comparative predicate occurs immediately to the left of than. 
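The condition just stated lends itself to a very direct check. The fragment below is a hypothetical encoding, not taken from the paper; the small lexicon of non-adverbial comparative predicates is assumed only for the two example sentences.

```python
# Essence of the Coordinate Comparative Definition stated above: coordination
# by 'than' is possible only if no non-adverbial comparative predicate occurs
# immediately to its left.

NON_ADVERBIAL_COMPARATIVES = {"taller", "richer", "more"}   # illustrative lexicon

def coordination_possible(words):
    than = words.index("than")
    return words[than - 1] not in NON_ADVERBIAL_COMPARATIVES

print(coordination_possible("John met taller people than Bob".split()))  # True  -> two readings, cf. (21)
print(coordination_possible("John met people taller than Bob".split()))  # False -> one reading, cf. (4)
```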
The ultimate consequence of this condition is that only one interpretation is a11owed for constructions like (4) and this interpretation does not include any arguments coordinated by than. This means that in (4) for example there is no possl-'--%le analysis in which Bob can be SUBJ(met). In the coordinate interpretation of (22), (i.e., where John is coordinate with Bob) the final analysis of the s-ErTng will include the ~r6Tlowing predicational struc- ture: (24) John ~t taller pe?pleOBJ thans~ Pr/met(PCT) It is this term, then, which is assigned to the relation OBJ(taller), ~ being SUBJ(taller) (note that people plays two distlnct roles in this sentence). (25) John met taller peopl~ than Bqb "I ~ ~ OBQ SOBJ • F" " Pr/met(PCT) L SUBJ OBJ This particular assignment (of pr/met as OBJ(taller~ is allowed by the Comparative Object Restriction. That is, taller, being non-adverbial comparative predicate, is ~bcategorized for predicational arguments. But in (25) OBJ(taller) contains a predicate as one of its arguments. This particular predicational structure is defined as a Predicate Containing Term or PCT by the Term Definition~ The Comparative Object Restriction has the effect of al- lowing the OBJ(CO~P P) to be a PCT. Since the particular substring of (22), met..people..Bob need not be analyzed as a PCT, an altern~ive analysis for (22) is also pos- sible. The alternative analysis would be like that for (4), where only Beb=SUBJ(taller). That is, the Compar- ative Object Restriction does not necessarily require an analysis for (22) like (25); it merely allows it if cer- tai:n conditions set out in the Term Definition are met. The Comparative Object Restriction is quite important, then, in distinguishing the possible analysis for non- adverbial comparative constructions. It is equally Im- plant in obtaining the correct analysis for the sen- tence types to be discussed in the next section. SEC. 4.2.2 SUBJECT COMPARATIVES The position of the entire comparative predication, rela- tive to other predicates in the string is also quite im- portant in determining the possible types of analysis. Sentence (25) exhibits a subject comparative where the comparative predication occurs to the left of another predicate. It is useful to compare this sentence with the object comparative in (22) repeated here. (26) Taller people than Bob met John (22) John n~t taller people than Bob As has already been discussed in 4.2.1, (22) has two pos- sible interpretations. Sentence (26), however, has only one possible interpretation. Therefore there should be only one possible analysis. The analysis which needs to be avoided is (27) Taller people thans~ ~ m~ John T o~J I pr/m@t SUBJ OBJ This case must be disallowed while at the same time al- lowing the structure in (24) to be analyzed as OBJ(tal- ler). The Comparative Object Rule and the Term 15 Definitions work together to achieve this. The structure Pr/met shown in (28) does not meet the requirements set out for a PC-Term and the subcategorization of taller (i.e. non-predicational arguments only) will not allow Pr/met to be analyzed as an argument of taller unless it is analyzable as a PC-Term. Thus, the subcategorization of taller and the Comparative Object Restriction will both prevent the assignment of Pr/met as OBJ(taller)in (27). Since an analysis which includes (27) is not pos- sible, the only way the analysis can procede is as fol- lows. The Comparative Subject Rule will require people=SUBJ(taller) since it is the only tip to the left of than. 
Since Bob is the element occuring immediately to t-'h-e-right of~n, it is the only ~IP which can be analyzed as objec-'t--~f taller. The resulting predication Pr/taller is defined as a term by (IOb). (28) Taller peqple than B b met John ¢ s..J Pr/taller(T) The MP John must be analyzed as OBJ(met) to satisfy the OR, leav-~Pr/taller to be analyzed as SUBJ(met). This will also satisfy the )lultiPredicate Constraint since taller and met will be in some ordlnatlon relation as a res-'~. (2g) TallerLprxtaller(T)su)dpeqple~uB,] than ~jB b m it JofnOBd Pr/met No other analysis is possible since no non-comparative predicate occurs to the left of than (which would allow for possible coordinate interpretatl----~ons). SEC. 4.2.3 COMCLUSIONS The important points in this section are that for Sub- ject Comparatives such as (26), only one interpretation is possible, while for Object Comparatives such as {21), two interpretations are possible. Position of the com- parative predication relative to the rest of the string is thus an important factor in determining the number of possible interpretations. Position of individual NP's relative to than is also an important factor in deter- mining the number of possible interpretations a sentence may have; Sentences like (4),where no tIP occurs between than and the comparative predicate, have only one inter- pretation, ~lhile sentences like (ZIP, where an PIP does occur in the position, have two possible interpretations. The Comparative Object Restriction and the Term Defini- tions figure crucially in all these cases in the deter- mination of the correct number and type of possible analyses. SEC. 4.3 DEGREE OF ELLIPSIS AND SUBCATEGORIZATION O.~F SURROUtlDIr~G PREDICATES The degree of ellipsis following than in comparative structures is quite important in ~rmining the number of possible interpretations a structure may have. For example, in the first sentence of each pair below, where only a single predicate occurs before than, more than one interpretation is possible per str-~, while in the second sentence in each pair, where an PIP followed by some predicate occurs, only one interpretation is possible. (30) Alice builds planes faster than robots (31) Alice builds planes faster than robots do (32) John knows richer doctors than Alice (33) John knows richer doctors than Alice does The actual analysis of these sentences will not be presented here. Such sentences are discussed in detail in Ryan [5]. SEC. 4.3.1 DEGREE OF ELLIPSIS AND SUBCATEGORIZATION OF SURRDUMDING PREDICATES. The problem of degree of ellipsis interacts crucially with another factor, the subcateqorization of surround- ing predicates, in a very interesting way. Consider , the following sets of sentences. 
(34) John knows more doctors than lawyers debate (35) John knows more doctors than lawyer s debate psychiatrists (36) John knows more doctors than lawyersrun (37) John knows more doctors than lawyers spoke to (38) John hired more doctors than lawyers debate (39) *John hired more doctorsthan lawyers debate psychiatrists (40) *John hired more doctors than lawyers run (41) John hired more doctors than lawyers spoke to (42) John thinks more doctors than lawyers debate (43) John thinks more doctors than lawyers debate psychiatrists (44) John thinks more doctors than lawyers run (45) *John thinks more doctors than lawyers spoke to These sentences contain different combinations of com- parative predicates with either transitive or intrans- itive verbs following them and preceding verbs which take: either complement or NP objects (34~-(37); NP objects only (38-41); and complement objects only (42- 45). The type and number of interpretations depends on the subcategorlzation of these verbs and the verbs fol- lowing the comparative predicate. The flrst sentence in each group contains a transitive verb, debate, with no overt object. The second sentence in eac~group contains debate with an overt object. This results in (39) in an ungrammatical sentence, as compared with (38), and in (35) in a sentence with only one possible interpretation as compared with (34), which has two possible interpre- tations. The third sentence in each group contains an intransitive verb, run. This also results in an ungram- matical sentence for--T40) in the second group and in a sentence with only one interpretation, (36) in the first group. The last sentence in each group contains another transitive verb, spoke to, without an overt object. The difference between this~erb and debate is that debate is a so-called 'object deletable've-~'eF~-while spo]E~"~o- is not. Mote that in (45) this results in an ungra~at- lcal sentence (compare to 42) while in (37) the sentence is grammatical. However, in (37) the structure of the phrase more doctors than lawyers differs from its struc- ture in (35) and (36), in which more doctors than ~e tS the subject of the third verb. That is not in (37), where only la~ers is the subject of the third verb. It can be seen from this that the sub- categorization of the preceding the following predicates Is very Inq~ortant to the structure of the comparative predication. In addltlo~as the first two sentences in each group show, the degree of ellipsis also affects the structure. In all cases, the structure of the phrase more doctors than lawyers shifts in structure. The most important aspect of this data is the type of arguments which the comparative predicates must take. In these particular cases it is a change in the object of the comparative predicate which corresponds to a shift in the structure of the sentence. This is accounted for most directly by the rules in (lOp, (ll) and (13). For example, in (36) the OBJ(more) is lawyers and the co~q}lete predication Pr/more ~he S u r f run. This partial analysis is~wn in (46). (46) John knows more doqtors than lawxers r4n suBJ o~j T Pr/more(T) SUBJ 16 i In (38), the object of more is the sequence doctors.. lawyers..debate, a term according to (lOa). shown in the partial analysis in (47). (47) John hired more doctprs than lawyers debate T )OBJ SUBJ I" | Pr/debate(T) SqBJ ~qj Sentence (36) could not be analyzed as in (47) because run, the third verb in (36), is intransitive while de-e~ate, the third verb in (38), is transitive. 
Thus run cannot be included in any structure satisfying the Te~ Identification Principles (lO), while debate can be so analyze@. This means that run cannot be T~cluded as part of the OBJ(more). This is ~ranteed by the Comparative Object Restrlct-'---ion (13). Both of the analyses shown in (46) and (47) are possible for sentence (34) since knows may take predicational objects (in this case, more doctors than lawyers run) or it may take nonpredicatlonal objects such as the Complex comparative term in (47). Sentences (39) and (40) do not have possible analyses since hired cannot take predicational objects (such as that sho--o-wn-in (46)), and the presence of either an intransitive verb (run) or a transitive verb with an overt object (debate'-psychiatrists) after the compara- tive predicate, forces such a structure because of rules (lO) and (13). Sentence (41) would have a structure similar to (47). Sentences (42) - (44) v~uld all have structures similar to the partial analysis in (46). This is forced by the subcategorization of thinks, which takes only predica- tional objects. There--iT-no possible analysis for (45) since the subcateqorization of s o_~to, unlike debate, requires the presence of an overt object. But i a?-a-n-- object is assigned to spoke to, the result will ulti- mately be a structure Ti-Ee'-tlTat shown in (47). But the structure shown in (47) is a term and therefore nonpred- icational. This means it could not be analyzed as OBJ(thinks), while requires a predicational (complement) structure. Finally, it is precisely because a sentence with sooke to as the third verb must have a structure like (~TF-- TT.e. nonpredicational) that sentence (41) has a possible analysis in contrast to (45). That is, the structure of the string more doctors than lawyers spoke to in (49) has a nonpredicational (comparative term) structure. Since it is a term and not a predication, any verb tak- ing it as an argument must be subcategorized for nonpred- icational arguments. Think in (45) takes only predica- tional arguments in the---~ect relation, while hired in (41) takes only nonpredicational arguments in th-'-e-~'6ject relation. Thus, only the sentence with hired may take the comparative term as an argument. But sooke to does not allow the string more doctors than lawyers to simply be analyzed as its sub-ject, since no possible object would then be available for spoke to, However, if the string more doctors than lawyers is--not analyzed as SUBJ(spoke to), it will not be possible to analyze the string as a predication Pr/spoke to, thus blocking the analysis of the string as OBJ(think). SEC. 4.3.2 CONCLUSION The degree of ellipsis and the subcategorization of the surrounding predicates interact to affect the possible number and type of interpretations for each of the sen- tences in this section. That interaction can be most clearly seen in a comparison of (34) and (35) and (36). The verb know is subcategorized for either predicational or nonpred-i-E~tional arguments. This allows the string more doctors than lawyers debate to have two possible structures corresponding to the structures shown in (46) and (47). The.structure in (46) is a predicational structure while the structure in (47) is a nonpredica- tional structure. The subcategorization of knows allows either of those as possible interpretations of the OBJ (knows). 
Verbs subcategorized for only one type of ar- gument, say predicational, will allow only one of those possible structures of more doctors than lawyers .debate, in this case the predica'tional one shown in (46), to be analyzed as the object of that verb. This is one way in which the subcategorization of surrounding predicates affects the type and number of possible interpretations a sentence may have. The effect of the subcategorization of the following predicate parallels the effect of no ellipsis after than. Thus sentences (36) and (36) each have only one possib--bT~ interpretation and the relation of the string more doc- tors than lawyers is the same in each case; that is, it is the same as the predicational structure shown in (46), being the subject of the following predicate. Thus, the presence of an intransitive verb or the presence of a transitive verb plus an overt object to its right as in (35) and (36) forces a predicational structure of the type shown in (46). Since knows takes predicational objects, these sentences are still grammatical. If hired is substituted for knows . as in (39) and (40), the sentences are no longer grammatical, since the subcate- gorization of hired does not allow predication argument~ The last type of effect of the predicate following than is in some cases to force a nonpredicational structure like that shown in (47). The verb s~oke to is not an object deletable verb, while the verb debate does allow unspecified objects. For this reason,~erb sooke to cannot be part of a structure like that shown in-~6), .... since it would require the object of spoke to to be analyzed as "unspecified". Thus, the presence of a verb like spoke to after than forces the nonpredicational structure o?-the type--s-hown in (47), since in this struc- ture the object of ~ to would be overt. Since the presence of spoke to force's a nonpredicational structure for the string more--doctors than lawyers spoke to, it can only occur as part of an object of a verb which al- lows nonpredicational objects, like know or hired. It follows from this that if the string more doctors than lawyers spoke to occured after a verb which took predicationa'l arguments only, such as thinks, the result would be an ungrammatical sentence. This is in fact the case, as can be seen from sentence (45). SEC. 5 CONCLUSIONS The rules presented here provide an axiom system which allows only one possible analysis for each interpreta- tion of a sentence, and no possible analysis for sen- tences which are ungrammatical. The rules specifically proposed for comparatives have been shown to apply to a wide variety of construction types; for example, the Comparative Object Restriction and the Term Definitions figure crucially in the analysis of sentences in all the subsections of section 4. In addition, these rules are based on observations about characteristics of the sen- tences which are either directly observable in the string (e.g. left to right relative order) or which are a necessary ~art of any grammatical description (e.g. subclassification and subcategorization of verbs). Such a grammar can provide useful and accessible information for the problem of parsing as well as grammatical description. 17 REFERENCES I. Kac, Michael (1978) Corepr~sentation of Grammatical Structure. Hpls: Uni~rsity of Hlnnesota Press. 2. , (1980) "Corep~sentatlonal Grammar". In Syntax & Semantics 13, E. A. Moravcsik & J. R. Wirth (eds.). Academic Press. 3. Marcus, Mitchell (1980) A Theory of Syntactic Recognitio~ for Natural Languaqe. 
Cambridge, MA: MIT Press. 4. Rindflesch, Tom (1978) "The General Structure of Multi-Predicational Sentences in English" in Minnesota Papers 5, G. A. Sanders and M. B. Kac, eds. 5. Ryan, Karen L. (1981) A Surface Based Analysis of English Comparative Constructions. M.A. Thesis, University of Minnesota.
A TAXONOMY FOR ENGLISH NOUNS AND VERBS Robert A. Amsler Computer Sciences Department University of Texas, Austin. TX 78712 ABSTRACT: The definition texts of a machine-readable pocket dictionary were analyzed to determine the disambiguated word sense of the kernel terms of each word sense being defined. The resultant sets of word pairs of defined and defining words were then computaCionally connected into t~o taxonomic semi- lattices ("tangled hierarchies") representing some 24,000 noun nodes and 11,000 verb nodes. The study of the nature of the "topmost" nodes in these hierarchies. and the structure of the trees reveal information about the nature of the dictionary's organization of the language, the concept of semantic primitives and other aspects of lexical semantics. The data proves that the dictionary offers a fundamentally consistent description of word meaning and may provide the basis for future research and applications in computational linguistic systems. 1. INTRODUCTION In the late 1960"s, John 01ney et al. at System Development Corporation produced machine-readable copies of the Merriam-Webster New Pocke~ Dictionary and the Sevent~ Collegiate Dictionary. These massive data files have been widely distributed within the computational linguistic community, yet research upon the basic structure of the dictionary has been exceedingly slow and difficult due to the Significant computer resources required to process tens of thousands of definitions. The dictionary is a fascinating computational resource. It contains spelling, pronunciation, hyphenation, capitalization, usage notes for semantic domains, geographic regions, and propriety; etymological, syntactic and semantic information about the most basic units of the language. Accompanying definitions are example sentences which often use words in prototypical contexts. Thus the dictionary should be able to serve as a resource for a variety of computational linguistic needs. My primary concern within the dictionary has been the development of dictionary data for use in understanding systems. Thus I am concerned with what dictionary definitions tell us about the semantic and pragmatic structure of meaning. The hypothesis I am proposing is that definitions in the lexicon can be studied in the same manner as other large collections of objects such as plants, animals, and minerals are studied. Thus I am concerned with enunerating the classifications1 organization of the lexicon as it has been implicitly used by the dictionary's lexicographers. Each textual definition in the dictionary is syntactically a noun or verb phrase with one or more kernel terms. If one identifies these kernel terms of definitions, and then proceeds to disambiguate them relative to the senses offered in the same dictionary under their respective definitions, then one can arrive at a large collection of pairs of disambiguated words which can be assembled into a taxonomic semi-lattice. This task has been accomplished for all the definition texts of nouns and verbs in a comu~n pocket dictionary. This paper is an effort to reveal the results of a preliminary examination of the structure of these databases. The applications of this data are still in the future. What might these applications be? First, the data shoul'd provide information on the contents of semantic domains. One should be able to determine from a lexical taxonomy what domains one might be in given one has encountered the word "periscope", or "petiole", or "petroleum". 
Second, dictionary data should be of use in resolving semantic ambiguity in text. Words in definitions appear in the company of their prototypical associates. Third, dictionary data can provide the basis for creating case gr-,-~-r descriptions of verbs, and noun argument descriptions of nouns. Semantic templates of meaning are far richer when one considers the taxonomic inheritance of elements of the lexicon. Fourth. the dictionary should offer a classification which anthropological linguists and psycholinguists can use as an objective reference in comparison with other cultures or human memory observations. This isn't to say that the dictionary's classification is the same as the culture's or the human mind's, only that it is an objective datum from which comparisons can be made. Fifth. knowledge of how the dictionary is structured can be used by lexicographers to build better dictionaries. And finally, the dictionary if converted into a computer tool can become more readily accessible to all the disciplines seeking Co use the current paper-based versions. Education. historical linguistics, sociology. English composition, etc. can all make steps foxward given that they can assume access to a dictionary is immediately available via computer. I do not know what all these applications will be and the task at hand is simply an elucidation of the dictionary's structure as it currently exists. 2. "TANGLED" HIERARCHIES OF NOVN S AND VERBS The grant. MCS77-01315, '~)evelopment of a Computational Methodology for Deriving Natural Language Semantic Structures via Analysis of Machine-Readable Dictionaries". created a taxonomy for the nouns and verbs of the Merriam-Webster Pocket Dictionary (MPD), based upon the hand-disambiguated kernel words in their definitions. This taxonomy confirmed the anticipated structure of the lexicon to be that of a "tangled hierarchy" [8,9] of unprecedented size (24,000 noun senses. 11.000 verb senses). This data base is believed to be the first Co be assembled which is representative of the structure of the entire English lexicon. (A somewhat similar study of the Italian lexicon has been done [2.11] ). The content categories agree substantially with the semantic structure of the lexicon proposed by Nida [I5], and the verb taxonomy confirms the primitives proposed by the San Diego LNR group [16]. This "tangled hierarchy" may be described as a formal data structure whose bottom is a set of terminal disambiguated words that are not used as kernel defining terms; these are the most specific elexents in the structure. The tops of the structure are senses of words such as "cause", "thing", '*class", "being", etc. These are the most general elements in the tangled hierarchy. If all the top terms are considered to be 133 members of the metaclass "<word-sense>", the tangled forest becomes a tangled tree. The terminal nodes of such trees are in general each connected to the Cop in a lattice. An individual lattice can be resolved into a seC of "traces", each of which describes an alternate paCh from terminal to cop. In a crate, each element implies the terms above iC, and further specifies the sense of the elements below it. The collection of lattices forms a transitive acyclic digraph (or perhaps more clearly, a "semi-lattice", that is, a lattice with a greatest upper bound, <word-sense>, but no least lower bound). If we specify all the traces composing such a structure, spanning all paths from top to bottom, we have topologically specified the semi-lattice. 
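As a concrete rendering of this trace idea, the following sketch (my own notation, not the project's code) reconstructs a tangled hierarchy from the trace list of Figure 1 below and reports, for each node, the number of nodes transitively reachable from it and its maximum depth; the counting conventions here (a node is included in its own "size", depth counts links) are assumptions of the sketch and need not agree exactly with the figures quoted in Table 1.

```python
# Specify a semi-lattice by its traces: each trace is a path from the top node
# down to a terminal, and the union of the parent-child links in all traces
# reconstructs the tangled hierarchy.  The traces are those of Figure 1.

traces = [
    "a b c e f", "a b c g k", "a b d g k", "a b c g l", "a b d g l",
    "a b c g m", "a b d g m", "a b d i",
]

children = {}
for trace in traces:
    nodes = trace.split()
    for parent, child in zip(nodes, nodes[1:]):
        children.setdefault(parent, set()).add(child)

def descendants(node, seen=None):
    seen = set() if seen is None else seen
    for child in children.get(node, ()):
        if child not in seen:
            seen.add(child)
            descendants(child, seen)
    return seen

def depth(node):
    kids = children.get(node, ())
    return 0 if not kids else 1 + max(depth(k) for k in kids)

for node in ["a", "b", "c", "d", "g", "e"]:
    print(node, "size", len(descendants(node)) + 1, "depth", depth(node))
```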
Thus the list on the left in Figure I topologically specifies the tangled hierarchy on its right. (a b c e f) a (a b c gk) I (a b d g k) I (a b c g I) b (a bd gl) / \ (abc gin) / \ (a b d g m) c d (a b d i) II I \ / J / • I / [ f / I I/ f g /I\ /I\ / I \ k 1 m \ i Figure I. The Trace of a Tangled Hierarchy 2.1 TOPMOST SEMANTIC NODES OF THE TANGLED HIERARCHIES Turning from the abstract description of the forest of tangled hierarchies Co the actual data, the first question which was answered was, 'What are the largest tangled hierarchies in the dictionary?". The size of a tangled hierarchy is based upon two numbers, the maximum depth below the "root" and the total number of nodes transitively reachable from the root. Thus the tangled hierarchy of Figure 1 has a depth of 5 and conCains a total of 11 nodes (including the "root" node, "a"). However, since each non-terminal in Che tangled hierarchy was also enumerated, it is also possible Co describe the "sizes" of che other nodes reachable from "a". Their number of elemenCs and depChs given in Table 1. Table 1. Enumeration of Tree Sizes and Depths of Tangled Hierarchy Nodes of Figure 2 Tree Maximum Rooc Size Depth Node ii 5 a 10 4 b 6 3 c 6 2 d 4 l g 2 I e These examples are being given co demonstrate the inherenC consequences of dealing wich tree sizes based upon these measurements. For example, "g" has the most single-level descendants, 3, yet it is neither at the Cop of the Cangled hierarchy, nor does iC have the highest total number of descendants° The root node "a" is at the top of the hierarchy, yet it only has I single-level descendant. For nodes ¢o he considered of major importance in a tangled hierarchy it is chus necessary to consider not only Cheir total number of descendants, buc whether Chese descendants are all accually immediately under some ocher node Co which this higher node is attached. As we shall see, che nodes which have the most single-level descendants are actually more pivoral concepts in some cases. Turning to the actual forest of Cangled hierarchies, Table 2 gives the frequencies of the size and depth of the largest noun hierarchies and Table 3 gives the sizes alone for verb hierarchies (depths were noc oompuced for these, unfortunately). Table 2. Frequencies and Maxim,-. Depths of MPD Tangled Noun Hierarchies 3379 I00NE-2.1A 1068 13 MEASUREMENT-I.2A 2121 12 BULK-I.IA 1068 ** DIMENSION-.IA 1907 10 PARTS-I.1A/! 1061 ** LENGTH-.IB 1888 10 SECTIONS-.2A/! 1061 ** DISTANCE-I.IA 1887 9 DIVISION-.2A 1061 14 DIMENSIONS-.IA 1832 9 PORTION-I.4A 1060 11 SZZE-I.0A 1832 8 PART-I.IA 1060 13 MEASURE-I.2A 1486 14 SERIES-.0A 1060 I0 EXTENT-.IA 1482 18 SUM-I.IA I060 14 CAPACITY-.2A 1461 ** AMOUNT-2.2A 869 7 HOUSE-I.1A/+ 1459 8 ACT-I.1B 836 7 SUBSTANCE-.2B 1414 ** TOTAL-2.0A 836 8 MATTER-I.4A 1408 15 NUMBER-I.IA 741 8 NENS-.2A/+ 1379 14 AMOUNT-2.1A 740 6 PIECE-I.2B 1337 80NE-2.2A 740 7 ITEM-.2A 1204 5 PERSON-.IA 686 7 ELZMENTS-.IA 1201 14 OPERATIONS-.IA/÷ 684 6 MATERIAL-2.1A 1190 ~r* PROCESS-I.4A 647 9 THING-.4A 1190 14 ACTIONS-.2A/+ 642 8 ACT-I.IA 1123 6 GROUP-I.OA/! 535 6 THINGS-.SA/! ii01 12 FOEM-I.13A 533 6 MEMBER-.2A 1089 12 VAEIETY-.4A 503 I0 PLANE-4.1A 1083 Ii MODE-.IA 495 6 STRUCTURE-.2A 1076 I0 STATE-I.IA 494 I0 RANK-2.4A 1076 9 CONDITION-I.3A 493 9 STEP-I°3A *~ = ouC of range due to dace error Table 3. Frequencies of Topmost MPD Tangled Verb Hierarchies 4175 RZMAIN-. 4A 365 GAIN-2.1A 417 5 CONTINUE-. IA 334 DRIVE- I. 
IA/+ 4087 MAINTAIN- .3A 333 PUSH-I .IA 4072 STAND-1.6A 328 PRESS-2 olB 4071 HAVE-1.3A 308 CHANGE- I .IA 4020 BE- .IB 289 MAKE- 1.10A 3500 EQUAL-3.0A 282 COME- .IA 3498 BE- .IA 288 CHANGE-I .IA 3476 CAUSE-2.0A 283 EFFECT- 2 .IA 1316 APPEAR- .3A/C 282 ATTAIN-. 2B 1285 EXIST-. IA/C 281 FORCE-2.3A 1280 OCCUR- .2A/C 273 PUT- .IA 1279 MAKE-I .IA 246 IMPRESS-3.2A 567 GO-1 .iB 245 URGE- 1.4A 439 BRING- .2A 244 DRIVE-I .IA 401 MOVE- I .IA 244 IMPEL- .0A 366 GET-I .IA 244 THRUST- I .IA While the verb tangled hierarchy appears co have a series of nodes above CAUSE-2.0A which have large numbers of descendants, the actual structure more closely resembles chat of Figure 2. 134 remain-.&a <--> continue-.la <-- maintain-.3a I stand-l.6a have-1.3a t be-.lb equal-3.0a 7 be-.la cause-2.0a t ? 8o-l.la < > make-l.la make-l.la Figure 2. Relations between Topmost Tangled Verb Hierarchy Nodes The list appears in terms of descending frequency. The topmost nodes don't have many descendants at one level below, but they each have one BIG descendant, the next node in the chain. CAUSE-2.0A has approximately 240 direct descendants, and MAKE-I.IA has 480 direct descendants making these t~o the topmost nodes in terms of number of direct descendants, though they are ranked 9th and 13th in terms of total descendants (under words such as EDL%IN-.4A, CONTINUE-.1A, etc.). This points out in practice what the abstract tree of Figure I showed as possible in theory, and explains the seeming contradiction in having a basic verb such as "CAUSE-2.0A" defined in terms of a lesser verb such as '~EMAIN-.4a". The difficulty is explainable given two facts. First. the lexicographers HAD to define CAUSE-2.0A using some other verb, etc. This is inherent in the lexicon being used to define itself. Second, once one reaches the Cop of a tengled hierarchy one cannot go any higher -- and consequently forcing further definitions for basic verbs such as "be" and "cause" invariably leads CO using more specific verbs, rather than more general ones. The situation is neither erroneous, nor inconsistent in the context of a self-defined closed system and will be discussed further in the section on noun primitives. 2.2 NOUN PRIMITIVES One phenomenon which was anticipated in computationally grown trees was the existence of loops. Loops are caused by having sequences of interrelated definitions whose kernels form a ring-like array [5.20]. However. what was not anticipated was how important such clusters of nodes would be both co the underlying basis for the Caxonomies and as primitives of the language. Such circularity is sometimes evidence of a truly primitive concept, such as the set containing the words CLASS, GROUP, TYPE, KIND, SET. DIVISION, CATEGORY. SPECIES, INDIVIDUAL, GROUPING, PART and SECTION. To understand this, consider the subset of interrelated senses these words share (Figure 3) and then the graphic representation of these in Figure 4. 
GROUP 1.0A - a number of individuals related by a common factor (as physical association, community of interests, or blood) CLASS 1,1A - a KrouD of the same general status or nature TYPE 1.4A - a c~ass, k~nd, or 2rouo set apart by com~on characteristics KIND Io2A - a 2rouv united by common traits or interests KIND 1.2B - CATEGORY ,CATEGORY .0A - a division used in classification ; CATEGORY .0B - CLASS, GROUP, KIND DIVISION .2A one of the Darts, sections, or =rouDinas into which a whole is divided *GROUPING <-" W7 - a set of objects combined in a group SET 3.5A - a zrouv of persons or things of the same kind or having a common characteristic usu. classed together SORT 1.1A - a 2tour of persons or things that have similar characteristics SORT 1.1B - C~%SS SPECIES .IA - ~ORT, KInD SPECIES .IB - a taxonemic group comprising closely related organisms potentially able co breed with one another Key: * The definition of an MPD run-on, taken from Webster's SevenE~ Colle2iate Dictionary to supplement the set. Figure 3. Noun Primitive Concept Definitions SET 3.5A t / GROUPINGS* one of the PARTS* SECTIONS* l / / DIVISION . 2A ? / / / / CATEGORY .0A % \ \ \ KIND 1.2B I I\ I SPECIES . IA .... \ \ number of INDIVIDUALS \ 7 \ / \ / ¼ / CROUP 1.0A < ......... 7 t t % / / \ \ / I I \ / I I \ / CLASS KIND \ / 1 .IA 1.2A I I tt% t I CATEGORY .0S I TYPE 1.4A I tl I I I I I I I I I I I I SORT I.IB I SORT 1.1A / / SPECIES .IB Figure 4. "GROUP" Concept Primitive from Dictionary Definitions * Note: SECTIONS, PARTS, and GROUPINGS have additional connections not shown which lead to a related primitive cluster dealing with the PART/WHOLE concept. This complex interrelated set of definitions comprise a primitive concept, essentially equivalent to the notion of SET in mathematics. The primitiveness of the set is evident when one attempts to define any one of the above words without using another of them in that definition. 135 This essential property, the inability to write a definition explaining a word's meaning without using another member of some small set of near synonymous words, is the basis for describing such a set as a PRIMITIVE. It is based upon the notion of definition given by Wilder [21], which in turn was based upon a presentation of the ideas of Padoa, a turn-of-the-century logician. The definitions are given, the disambiguation of their kernel's senses leads to a cyclic structure which cannot be resolved by attributing erroneous judgements to either the lexicographer or the disambiguator; therefore the structure is taken as representative of an undefinable pyimitive concept, and the words whose definitions participate in this complex structure are found Co be undefinable without reference to the other members of the set of undefined terms. The question of what to do with such primitives is not really a problem, as Winograd notes [22], once one realizes that they must exist at some level, just as mathematical primitives must exist. In tree construction the solution is to form a single node whose English surface representation may be selected from any of the words in the primitive set. There probably are connotative differences between the members of the set. but the ordinary pocket dictionary does not treat these in its definitions with any detail. The Merriam-Webster CollemfaCe Dictionary does include so-called "synonym paragraphs" which seem to discuss the connotative differences between words sharing a "ring". 
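The property just noted — that no member of such a ring can be defined without using another member — is, in graph terms, mutual reachability through definition kernels, and can be found mechanically. The sketch below is my own illustration over a simplified, word-level fragment of the GROUP cluster of Figures 3 and 4; the edge list is assumed for the example and is not the project's actual sense-level data.

```python
# Treat each "defined sense -> kernel sense" pair as a directed edge and look
# for sets of senses that are mutually reachable; each such set is a candidate
# primitive cluster.

edges = {
    "kind":     {"group", "category"},
    "category": {"division", "class", "group", "kind"},
    "class":    {"group"},
    "sort":     {"group", "class"},
    "species":  {"sort", "kind", "group"},
    "set":      {"group"},
    "group":    {"individual"},
}

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Two senses belong to the same primitive cluster if each is reachable
# from the other through definition kernels.
words = sorted(edges)
clusters = {w: {v for v in words if w in reachable(v) and v in reachable(w)} | {w}
            for w in words if w in reachable(w)}
print(clusters)   # e.g. {'kind': {'kind', 'category'}, 'category': {'kind', 'category'}}
```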
While numerous studies of lexical domains such as the verbs of motion [1,12,13] and possession [10] have been carried out by ocher researchers, it is worth noting that recourse to using ordinary dictionary definitions as a source of material has received little attention. Yet the "primitives" selected by Donald A. Norman, David E. Romelhart, and the LNR Research Group for knowledge representation in their system bear a remarkable similarity to those verbs used must often as kernels in The Merriam-Webster Pocket Dictionary and Donald Sherman has shown (Table 4) these topmost verbs to be among the most common verbs in the Collegiate Dictionary as well [19]. The most frequent verbs of the MPD are, in descending order. MAKE, BE, BECOME, CAUSE, GIVE, MOVE, TAKE, PUT, FORM, BEING, HAVE. and GO. The similarity of these verbs to those selected by the LNH group for their semantic representations, i.e., BECOME, CAUSE, CHANGE, DO, MOVE. POSS ("have"), T~SF ("give","take"), etc., [10.14.18] is striking. This similarity is indicative of an underlying "rightness" of dictionary definitions and supports the proposition that the lexical information extractable frca study of the dictionary will prove to be the same knowledge needed for computational linguistics. The enumeration of the primitives for nouns and verbs by analysis of the tangled hierarchies of the noun and verb forests grown from the MPD definitions is a considerable undertaking and one which goes beyond the scope of this paper. To see an example of how this technique works in practice, consider the discovery of the primitive group starting from PLACE-1.3A. place-l.3a - a building or locality used for a special purpose The kernels of this definition are "building" and "locality". Lookiog these up in turn we have: building-.la a usu. roofed and wailed structure (as a house) for permanent use locality-.0a a particular ShOt, situation, or location 136 Table 4. 50 Most Frequent Verb Infinitive Forms of W7 Verb Definitions (from [19]). 1878 MAKE 157 FURNISH 908 CAUSE 154 TURN 815 BECOME 150 GET 599 GIVE 150 TREAT 569 BE 147 SUBJECT 496 MOVE 141 HOLD 485 TAKE 137 UNDERGO 444 PUT 132 CHANGE 366 BRING 132 USE 311 HAVE 129 KEEP 281 FoRM 127 ENGAGE 259 GO 127 PERFORM 240 SET 118 BREAK 224 COME 118 REDUCE 221 REMOVE 112 EXPRESS 210 ACT 107 ARRANGE 204 UTTER 107 MARK 190 PASS 106 SEFARATE 188 PLACE 105 DRIVE 178 COVER 104 CARRY 173 CUT I01 THR02 169 PROVIDE 100 SERVE 166 DRAW 100 SPEAK 163 STRIKE 100 WORK This gives US four OeW terms, "structure", "SpOt", "situation", and "location". Looking these up we find the circularity forming the primitive group. structure-.2a - ~ built (as a house or a dam) spot-l.3a - LOCATION, SITE location-.2a - SITUATION, PLA~ situatiou-.la - location, site And finally, the only new term we encounter is "site" which yields, site-.Oa - location <~ of a building> <battle *> The primitive cluster thus appears as in Figure 5. something (built) , I I site-l.3a .. > site-.0a J T T I I I / I ] J situation-.l a J structure-.2a I ~ ~\ I I l \\ I I I building-.la T locality-.Oa ~ > locatio~-.2a T I I I I I place-1.3a <, Fisure 5. Diagram of Primitive Bet Containing PLACE. LOCALITY, SPOT, SITE, SITUATION, and LOCATION 2.3 NOUNS TERMINATING IN RELATIONS TO oTHER NOUNS OR VERBS In addition to terminating in "dictionary circles" or "loops", nouns also terminate in definitions which are actually text descriptions of case arguments of verbs or relationships to other nouns. 
"Vehicle" is a fine example of the former, being as it were the canonical instrumental case argument of one sense of the verb "carry" or "transport". vehicle - a means of carrying or transporting something '~eaf" is an example of the letter, being defined as a part of a plant, leaf - a usu. flat and green outgrowth of a plant stem that is a unit of foliage and functions esp. in photosynthesis. interaction of the PART-OF and ISA hierarchies. Historically even Raphael [17] used a PART-OF relationship together with the ISA hierarchy of gig's deduction system. What however is new is that I am not stating "leaf" is a part of a plant because of some need use this fact within a particular system's operation. but "discovering" this in a published reference source and noting that such information results naturally from an effort to assemble the complete lexical structure of the dictionary. 2.4 PARTITIVES AND COLLECTIVES Thus "leaf" isn't a type of anything. Even though under a strictly genus/differentia interpretation one would analyze "leaf" as being in an ISA relationship with "outgrowth", "outgrowth" hasn't a suitable homogeneous set of members and a better interpretation for modeling this definition would be to consider the "outgrowth of" phrase to signify a part/whole relationship between "leaf" and "plant". Hence we may consider the dictionary to have at least two taxonomic relationships (i.e. ISA and ISPART) as well as additional relations explaining noun terminals as verb arguments. One can also readily see that there will be taxonomic interactions among nodes connected across these relationship "bridges". While the parts of a plant will include the "leaves", "stem", "roots", etc., the corresponding parts of any TYPE of plant may have further specifications added to their descriptions. Thus "plant" specifies a functional form which can be further elaborated by descent down its ISA chain. For example, a "frond" is a type of "leaf", frond - a usu. large divided leaf (as of a fern) We knew from "leaf" that it was a normal outgrowth of a "plant", but now we see that "leaf" can be specialized, provided we get confirmation from the dictionary that a "fern" is a "plant". (Such confirmation is only needed if we grant "leaf" more than one sense meaning, but words in the Pocket Dictionary do typically average 2-3 sense meanings). The definition of "fern" gives us the needed linkage, offering, fern - any of a group of flowerless seedless vascular green plants Thus we have a specialized name for the "leaf" appendage of a "plant" if that plant is a "fern". This can be represented as in Figure 6. ISPART leaf ------=='''''> plant /\ /\ II II II II II II ISA II II ISA II il II II II II II ISPART [[ frond =====~=~==="==''> fern Figure 6. LEAF:PLANT::FHOND:FERN This conclusion that there are two major transitive taxonomies and that they are related is not of course new. Evens etal. [6,7] have dealt with the PART-OF relationship as second only to the ISA relationship in importance, and Fahlmen [8,9] has also discussed the As mentioned in Section 2.3, the use of "outgrowth" in the definition of "leaf" causes problems in the taxonomy if we treat "outgrowth" as the true genus term of that definition. This word is but one ~*-mple of a broad range of noun terminals which may be described as "partitives". A "partitive" may be defined as a noun which serves as a general term for a PART of another large and often very non-homogeneous set of concepts. Additionally. 
at the opposite end of the partitive scale, there is the class of "collectives". Collectives are words which serve as a general term for a COLLECTION of other concepts. The disambiguators often faced decisions as to whether some words were indeed the true semantic kernels of definitions, and often found additional words in the definitions which were more semantically appropriate to serve as the kernel -- albeit they did not appear syntactically in the correct position. Many of these terms were partitives and collectives. Figure 7 shows a set of partitives and collectives which were extracted and classified by Gretchen Hazard and John White during the dictionary project. The terms under "group names", "whole units", and "system units" are collectives. Those under "individuators". "piece units". "space shapes", "existential units", "locus units", and "event units" are partitives. These terms usually appeared in the syntactic frame "An of" and this additionally served to indicate their functional role. I QUANTIFIERS 3 EXISTENTIAL UNITS I.i GROUP NAMES 3.1 VARIANT pair.collection.group version.form, sense cluster,bunch. band (of people) 3.2 STATE state,condition 1.2 INDIVIDUATORS member.unit,item. 4 REFERENCE UNITS article,strand, branch 4.1 LOCUS UNITS (of science, etc.) place.end,ground, point 2 SHAPE UNITS 4.2 PROCESS UNITS 2.1 PIECE UNITS cause,source,means. sample,bit,piece, way.manner tinge,tint 5 SYSTEM UNITS 2.2 WHOLE UNITS system, course,chain. mass,stock,body, succession.period quantity.wad 6 EVENT UNITS 2.3 SPACE SHAPES act,discharge, bed,layer.strip,belt, instance crest,fringe,knot. knob,tuft 7 EXCEPTIONS growth.study Figure 7. Examples of Partitives and Collectives [3] 137 ACKNOWLEDGEMENTS This research on the machine-readable dictionary could not have been accomplished without the permission of the G. & C. Merriam Co., the publishers of the Merriam- Webster New Pocket Dictiouar7 and the Merriam-Webster Seventh C911e~iate Dictionary as well as the funding support of the National Science Foundation. Thanks should also go to Dr. John S. White. currently of Siemens Corp., Boca Eaton, Florida; Gretchen Hazard; and Drs. Robert F. Si,--~ns and Winfred P. Lehmann of the University of Texas at Austin. REFERENCES I. Abrahameon, Adele A, "Experimantal Analysis of the Semantics of Movement." in Explorations in Cognition, Donald A. Norman and David E. Rumelhart. ed., W. H. Freeman, San Francisco, 1975, pp. 248-276. 2. Alinei, Matin, La struttura del lessico, II Mulino, Bologna. 1974. 3. Amsler. Robert A. and John S. White. "Final Report for NSF Project MCS77-01315, Development of a Computational Methodology for Deriving Natural Language Semantic Structures via Analysis of Machine-Readable Dictionaries," Tech. report. Linguistics Research Center, University of Texas at 4. Austin, 1979. Amsler. Robert A., The Structure of the Merriam-Webster Pocket D~ctionarv. PhD dissertation, The University of Texas at Austin, December 1980. 5. Calzolari. N., "An Empirical Approach to Circularity in Dictionary Definitions," Cahiers de Lexicolo~ie, Vol. 31. No. 2, 1977. pp. 118-128. 6. Evens, Martha and Raoul Smith. "A Lexicon for a Computer Quest ion-Answering System," Tech. report 77-14, lllinois Inst. of Technology, Dept. of Computer Science, 1977. 7. Evens, Martha. Bonnie Litowitz. Judith Markowitz, Raoul Smith and Oswald Werner. L~x~c~l-Semantic Relations: A__ Comp§rativ~ Su%-vqy. Linguistic Research. Carbondale, 1980. 8. 
Fahlman, Scott E., "Thesis progress report: A system for representing and using real-world knowledge," Al-Memo 331, M.I.T. Artificial Intelligence Lab., 1975. 9. Fahlman, Scott E., _A System for ReDresentin~ and Usin~ Rqq~-World Know led2e. PhD dissertation, M.I.T., 1977. 10. Gentner. Dedre, "Evidence for the Psychological Reality of Semantic Components: The Verbs of Possession," in Explorations in Cognition. Donald A. Norman and David E. Rumelhart. ed., W. R. Freeman, San Francisco, 1975, pp. 211-246. 11. Lee, Charmaine, "Review of L__%a struttura del lessico by Matin Alinei." Lan2ua~e, Vol. 53, No. 2, 1977, pp. 474-477. 12. Levelt, W. J. M., R. Schreuder. and E. Hoenkamp, "Structure and Use of Verbs of Motion." in Recent Advances in the Psvcholoev of Laneua~e. Robin Campbell and Philip T. Smith. ed., Plenum Press, New York, 1976, pp. 137-161. 13. Miller. G., "English verbs of motion: A case study in semantic and lexical memory." in Codine Processes in Human Memory. A.W. Melton and E. Martins, ed., Winston. Washington. D.C., 1972. 14. Munro. Allen. '~Linguistic Theory and the LNR Structural Representation." in Exml orations in Coenition , Donald A. Norman and David E. Runelhart, ed., W. H. Freeman. San Francisco. 1975, pp. 88-113. 15. Nida. Eugene A., Exnlorin2 S~autic Structures. Wilhelm Fink Verlag. Munich. 1975. 15. Norman, Donald A., and David E. Rumelhart. Exnlorations in C~nition. W.H.Freeman. San Francisco, 1975. 17. Raphael. Bertram, ~IR: A Comnuter Pro2raln for Semantic Information Retrieval, PhD dissertation. M.I.T., i%8. 18. Runelhart, David E. and James A. Lenin. "A Language Comprehension System." in Exolor ations in Co2nition, Donald A. Norman and David E. Rumelhart. ed., W. H. Freo--n, San Francisco. 1975, pp. 179-208. 19. Sherman, Donald, "A Semantic Index to Verb Definitions in Webster's Seventh New Colle~iate Dictionary." Research Report. Computer Archive of Language Materials, Linguistics Dept., Stanford University. 1979. 20. Sparck Jones, Karen. '*Dictionary Circles," SDC document TM-3304, System Development Corp., January 1%7. 21. Wilder. Raymond L., Introduction to the Foundations of ~ , John Wiley & Sons, Inc., New York, I%5. 22. Winograd, Terry, "On Primitives, prototypes, and other semantic anomalies," Proceedin2s of the Workshoo on Theoretical Issues in Natural Laneuaee Processin2. June 10-13, 1975~ ~ . ~qls., Schank, Roger C., and B.L. Nash-Webber. ed., Assoc. for Comp. Ling., Arlington, 1978, pp. 25-32. 138
INTERPRETING NATURAL LANGUAGE DATABASE UPDATES

S. Jerrold Kaplan
Jim Davidson
Computer Science Dept.
Stanford University
Stanford, Ca. 94305

1. Introduction

Although the problem of querying a database in natural language has been studied extensively, there has been relatively little work on processing database updates expressed in natural language. To interpret update requests, several linguistic issues must be addressed that do not typically pose difficulties when dealing exclusively with queries. This paper briefly examines some of the linguistic problems encountered, and describes an implemented system that performs simple natural language database updates.

The primary difficulty with interpreting natural language updates is that there may be several ways in which a particular update can be performed in the underlying database. Many of these options, while literally correct and semantically meaningful, may correspond to bizarre interpretations of the request. While human speakers would intuitively reject these unusual readings, a computer program may be unable to distinguish them from more appropriate ones. If carried out, they often have undesirable side effects on the database. For example, a simple request to "Change the teacher of CS345 from Smith to Jones" might be carried out by altering the number of a course that Jones already teaches to be CS345, by changing Smith's name to be Jones, or by modifying a "teaches" link in the database. While all of these may literally carry out the update, they may implicitly cause unanticipated changes, such as altering Jones' salary to be Smith's.

Our approach to this problem is to generate a limited set of "candidate" updates, rank them according to a set of domain-independent heuristics that reflect general properties of "reasonable" updates, and either perform the update or present the highest ranked options to the user for selection. This process may be guided by various linguistic considerations, such as the difference between "transparent" and "opaque" readings of the user's request, and the interpretation of counterfactual conditionals. Our goal is a system that will process natural language updates, explaining problems or options to the user in terms that s/he can understand, and effecting the changes to the underlying database with the minimal disruption of other views. At this time, a pilot implementation is complete.

2. Generating Candidate Updates

Before an appropriate change can be made to a database in response to a natural language request, it is useful to generate a set of "candidate" updates that can then be evaluated for plausibility. In most cases, an infinite number of changes to the database are possible that would literally carry out the request (mainly by creating and inserting "dummy" values and links). However, this process can be simplified by generating only candidate updates that can be directly derived from the user's phrasing of the request. This limitation is justified by observing that most reasonable updates correspond to different readings of expressions in referentially opaque contexts.

A referentially opaque context is one in which two expressions that refer to the same real world concept cannot be interchanged in the context without changing the meaning of the utterance [Quine, 1971]. Natural language database updates often contain opaque contexts. For example, consider that a particular individual (in a suitable database) may be referred to as "Dr. Smith", "the instructor of CS100", "the youngest assistant professor", or "the occupant of Rm. 424". While each of these expressions may identify the same database record (i.e. they have the same extension), they suggest different methods for locating that record (their intensions differ). In the context of a database query, where the goal is to unambiguously specify the response set (extension), the method by which they are accessed (the intension) does not normally affect the response (for a counterexample, however, see [Nash-Webber, 1976]). Updates, on the other hand, are often sensitive to the substitution of extensionally equivalent referring expressions. "Change the instructor of CS100 to Dr. Jones." may not be equivalent to "Change the youngest assistant professor to Dr. Jones." or "Change Dr. Smith to Dr. Jones." Each of these may imply different updates to the underlying database.
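To make this point concrete, the toy sketch below returns to the CS345 example from the introduction. The schema, the records, and the three candidate procedures are hypothetical illustrations assumed only for this example; they are not the authors' system, which operates on a SODA relational representation.

```python
# Extensionally equivalent readings of "Change the teacher of CS345 from Smith
# to Jones" map onto different database changes, with different side effects.
import copy

db = {
    "courses":  [{"course": "CS345", "teacher": "Smith"},
                 {"course": "CS999", "teacher": "Jones"}],
    "teachers": [{"name": "Smith", "salary": 20000},
                 {"name": "Jones", "salary": 30000}],
}

def modify_teaches_link(state):      # retarget the 'teaches' link for CS345
    for row in state["courses"]:
        if row["course"] == "CS345":
            row["teacher"] = "Jones"

def rename_smith_to_jones(state):    # now "Jones" appears to earn Smith's salary
    for row in state["teachers"]:
        if row["name"] == "Smith":
            row["name"] = "Jones"

def renumber_jones_course(state):    # a course Jones already teaches becomes CS345
    for row in state["courses"]:
        if row["teacher"] == "Jones":
            row["course"] = "CS345"

# Each candidate literally satisfies the request, but with different side effects.
for candidate in (modify_teaches_link, rename_smith_to_jones, renumber_jones_course):
    trial = copy.deepcopy(db)
    candidate(trial)
    print(candidate.__name__, trial)
```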
For example, consider that a particular individual (in a suitable database) may be referred to as "Dr. Smith", "the instructor of CS100", "the youngest assistant professor", or "the occupant of Rm. 424". While each of these expressions may identify the same database record (i.e. they have the same extension), they suggest different methods for locating that record (their intensions differ). In the context of a database query, where the goal is to unambiguously specify the response set (extension), the method by which they are accessed (the intension) does not normally affect the response (for a counterexample, however, see [Nash-Webber, 1976]). Updates, on the other hand, are often sensitive to the substitution of extensionally equivalent referring expressions. "Change the instructor of CS100 to Dr. Jones." may not be equivalent to "Change the youngest assistant professor to Dr. Jones." or "Change Dr. Smith to Dr. Jones." Each of these may imply different updates to the underlying database.

This characteristic of natural language updates suggests that the generation of candidate updates can be performed as a language driven inference [Kaplan, 1978] without severely limiting the class of updates to be examined. "Language driven inference" is a style of natural language processing where the inferencing process is driven (and hence limited) by the phrasing of the user's request. Two specific characteristics of language driven inference are applied here to control the generation process. First, it is assumed that the underlying database update must be a series of transactions of the same type indicated in the request. That is, if the update requests a deletion, this can only be mapped into a series of deletions in the database. Second, the only kinds of database records that can be changed are those that have been mentioned in some form in the actual request, or occur on paths linking such records. In observing these restrictions, the program will generate mainly updates that correspond to different readings of potentially opaque references in the original request.
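To make the two restrictions concrete, the following minimal sketch (in Python; the pilot system itself is written in Interlisp, and the relation names, record format, and function below are hypothetical illustrations rather than the actual SODA operations) generates only "replace" candidates, and only over records that the request mentions:

    # Hypothetical sketch of candidate generation for
    # "Change the teacher of CS345 from Smith to Jones".
    database = {
        "COURSE":  [{"course": "CS345", "teacher": "SMITH"}],
        "TEACHER": [{"name": "SMITH"}, {"name": "JONES"}],
    }

    def candidate_replacements(old, new, mentioned_relations):
        """Generate only 'replace' candidates (the operation type of the request)
        over relations actually mentioned in, or linked by, the request."""
        candidates = []
        for rel in mentioned_relations:
            for tuple_ in database[rel]:
                for attr, value in tuple_.items():
                    if value == old:
                        candidates.append(
                            {"op": "replace", "relation": rel,
                             "tuple": dict(tuple_), "attr": attr, "new": new})
        return candidates

    # Several literal readings are produced; ranking (next section) chooses among them.
    print(candidate_replacements("SMITH", "JONES", ["COURSE", "TEACHER"]))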
3. Selecting Appropriate Updates

At first examination, it would seem to be necessary to incorporate a semantic model of the domain to select an appropriate update from the candidate updates. While this approach would surely be effective, the overhead required to encode, store, and process this knowledge for each individual database may be prohibitive in practical applications. What is needed is a general set of heuristics that will select an appropriate update in a reasonable majority of cases, without specific knowledge of the domain.

The heuristics that are applied to rank the candidate updates are based on the idea that the most appropriate one is likely to cause the minimum number of side effects to the user's conception of the database. This concept is developed formally in the work of Lewis, presented in his book on Counterfactuals [Lewis, 1973]. In this work, Lewis examines the meaning and formal representation of such statements as "If kangaroos had no tails, they would topple over." (p. 8) He argues that to evaluate the correctness of this statement (and similar counterfactual conditionals) it is necessary to construct in one's mind the possible world minimally different from the real world that could potentially contain the conditional (the "nearest" consistent world). He points out that this hypothetical world does not differ only in that kangaroos don't have tails, but also reflects other changes required to make that world plausible. Thus he rejects the idea that in the hypothetical world kangaroos might use crutches (as not being minimally different), or that they might leave the same tracks in the sand (as being inconsistent).

The application of this work to processing natural language database updates is to regard each transaction as presenting a "counterfactual" state of the world, and request that the "nearest" reasonable world in which the counterfactual is true be brought about. (For example, the request "Change the teacher of CS345 from Smith to Jones." might correspond to the counterfactual "If Jones taught CS345 instead of Smith, how would the database be different?" along with a speech act requesting that the database be put in this new state.) To select this nearest world, the number and type of side effects are evaluated for each candidate update, and they are ranked accordingly. Side effects that disrupt the user's view--taken to be the subset of the database that has been accessed in previous transactions--are considered more "severe" than changes to portions of the database not in that view. In data processing terms, the update with the fewest side effects on the user's data sub-model is selected as the most appropriate.

Updates that violate syntactic or semantic constraints implicit in the database structure and content can be eliminated as inconsistent. Functional dependencies, where one attribute uniquely determines another, are useful semantic filters (as in the formal update work of [Dayal, 1979]). When richer semantic data models are available, such as the Structural Model of [Wiederhold and El-Masri, 1979], more sophisticated constraints can be applied. (The current implementation does not make use of any such constraints.) While this approach can certainly fail in cases where complex domain semantics rule out the "simplest" change--the one with the fewest side effects to the user's view--in the majority of cases it is sufficient to select a reasonable update from among the various possibilities.
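Continuing the hypothetical sketch above, the ranking step can be pictured as simulating each candidate, recomputing the user's view, and preferring the candidate whose view changes least; the function names and the simple count-based score are assumptions, not the system's actual scoring:

    def side_effects_on_view(view_before, view_after):
        """Rows of the user's view that an update would delete or insert."""
        deletions  = [row for row in view_before if row not in view_after]
        insertions = [row for row in view_after if row not in view_before]
        return deletions + insertions

    def rank_candidates(candidates, view_before, apply_update, compute_view):
        """Score each candidate by the number of view-level side effects it causes."""
        scored = []
        for cand in candidates:
            new_db = apply_update(cand)                  # simulate the update
            effects = side_effects_on_view(view_before, compute_view(new_db))
            scored.append((len(effects), cand, effects))
        scored.sort(key=lambda item: item[0])            # fewest side effects first
        return scored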
4. An Example

The following simple example of this technique illustrates the usefulness of the proposed approach in practical databases. It is drawn from the current pilot implementation. The program is written in Interlisp [Teitelman, 1978], and runs on a DEC KL-10 under Tenex. An update expressed in a simple natural language subset is parsed by a semantic grammar using the LIFER system [Hendrix, 1977]. Its output is a special version of the SODA relational language [Moore, 1979] that has been modified by Jim Davidson to include the standard database update operations "delete", "insert" and "replace". The parsed request is then passed to a routine that generates the candidate updates, subject to the constraints outlined above. This list is then evaluated and ranked as described in the previous section. If no updates are possible, the user is alerted to this fact. If one alternative is superior, it is carried out. If several updates remain which cannot be compared, they are presented for selection in terms of the effects they will have on the user's view of the database. If the update ultimately performed has unanticipated effects on the user's view (i.e. if the answer to a previous query is now altered), the user is informed.

The example below concerns a small database of information about employees, managers, and departments. It is assumed that the user view of the world contains employees and managers, but that s/he does not necessarily know about departments. In the database, managers manage employees "transitively", by managing the departments in which the employees work. For purposes of presentation, intermediate results are displayed here to illustrate the program's actions. Normally, such information would not be printed. Commentary is enclosed in brackets ("[ ]").

[Here is a tabular display of the database.]

    TABLE DM
    DEPT     MGR
    INVNTRY  FISHER
    MKTING   BAKER
    SALES    JONES

    TABLE ED
    EMP    DEPT
    ADAMS  SALES
    WHITE  MKTING
    BROWN  SALES
    SMITH  INVNTRY

[First the user enters the following query, from which the program infers the user's view of the world.]

    Enter next command:
    (LIST THE EMPLOYEES AND THEIR MANAGERS)

    EMP    MGR
    ADAMS  JONES
    WHITE  BAKER
    BROWN  JONES
    SMITH  FISHER

[Next the user enters a natural language update request.]

    Enter next command:
    (CHANGE BROWN'S MANAGER FROM JONES TO BAKER)

[The program now generates the candidate updates. One of these corresponds to moving Brown from the Sales department to the Marketing department. The other would make Baker the manager of the Sales department.]

    The possible ways of performing the update:

    1. In the relation ED change the DEPT attr of the tuple
         EMP    DEPT
         BROWN  SALES
       to the value MKTING

    2. In the relation DM change the MGR attr of the tuple
         DEPT   MGR
         SALES  JONES
       to the value BAKER

[The side effects of each on the user's view are computed.]

    These translations have the following side effects on the view:

    1. Side effects are:
       Deletions: NIL
       Insertions: NIL
       Replacements: NIL

    2. Side effects are:
       Deletions: NIL
       Insertions: NIL
       Replacements: (ADAMS JONES) -> (ADAMS BAKER)

[The program concludes that update (1) is superior to (2), since (2) has the additional side effect of changing Adams' manager to Baker as well.]

    Desired translation is: 1.

    Revised view is:

    EMP    MGR
    ADAMS  JONES
    WHITE  BAKER
    BROWN  BAKER
    SMITH  FISHER
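The side-effect computation for candidate (2) can be reproduced with a small sketch over the example data; the join-based view function and the names below are illustrative assumptions (the program itself derives the view from the user's earlier query rather than from a fixed function):

    # Reproducing the side-effect check for candidate (2) on the example database.
    ED = {"ADAMS": "SALES", "WHITE": "MKTING", "BROWN": "SALES", "SMITH": "INVNTRY"}
    DM = {"SALES": "JONES", "MKTING": "BAKER", "INVNTRY": "FISHER"}

    def view(ed, dm):
        """The user's view: each employee paired with his or her (transitive) manager."""
        return {emp: dm[dept] for emp, dept in ed.items()}

    before = view(ED, DM)

    # Candidate 2: in relation DM, change the MGR of the SALES tuple to BAKER.
    DM2 = dict(DM, SALES="BAKER")
    after = view(ED, DM2)

    changes = [(emp, before[emp], after[emp])
               for emp in before if before[emp] != after[emp]]
    # [('ADAMS', 'JONES', 'BAKER'), ('BROWN', 'JONES', 'BAKER')]
    # BROWN's change is the requested update; the ADAMS change is the unwanted
    # side effect that makes candidate (2) inferior to candidate (1).
    print(changes)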
5. Conclusions

Carrying out a database update request expressed in natural language requires that an intelligent decision be made as to how the update should be accomplished. Correctly identifying "reasonable" resultant states of the database, and selecting a best one among these, may involve world knowledge, domain knowledge, the user's goals and view of the database, and the previous discourse. In short, it is a typical problem in computational linguistics.

Most of the complications derive from the fact that the user has a view of the database that may be a simplification, subset, or transformation of the actual database structure and content. Consequently, there may be multiple ways of carrying out the update on the underlying database (or no ways at all), which are transparent to the user. While most or all of these changes to the underlying database may literally fulfill the user's request, they may have unanticipated or undesirable side effects on the database or the user's view.

We have developed an approach to this problem that uses domain-independent heuristics to rank a set of candidate updates generated from the original request. A reasonable course of action can then be selected and carried out. This may involve informing the user that the update is ill-advised (if it cannot be carried out), presenting incomparable alternatives to the user for selection, or simply performing one of the possible updates. Our technique is motivated by linguistic observations about the nature of update requests. Specifically, the use of referential opacity, and the interpretation of counterfactual conditionals, play a role in our design.

A primary advantage of our approach is that it does not require special knowledge about the domain, except that which is implicit in the structure and content of the database. A simple but adequate model of the user's view of the database is derived by tracking the previous dialog, and the heuristics are based on general principles about the nature of possible worlds, and so can be applied to any domain. Consequently, the approach is practical in the sense that it can be transported to new databases without modification.

In part because of its generality, there is a definite risk that the technique will take inappropriate actions or fail to notice preferable options. A more knowledge-based approach would likely yield more accurate and sophisticated results. The process of responding appropriately to updates could be improved by taking advantage of domain-specific knowledge external to the database, using case-structure semantics, or tracking dialog focus, to name a few. In addition, better heuristics for ranking candidate updates would be likely to enhance performance.

At present, we are developing a formal characterization of the process of performing updates to views. We hope that this will provide us with a tool to improve our understanding of both the problem and the approach we have taken. While the heuristics used in the process are motivated by intuition, there is no obvious reason to assume that they are either optimal or complete. A more formal analysis of the problem may provide a basis for relating the various heuristics and suggest additional ranking criteria.

6. Bibliography

Dayal, U.: Mapping Problems in Database Systems, TR-11-79, Center for Research in Computing Technology, Harvard University, 1979.

Hendrix, G.: Human Engineering for Applied Natural Language Processing, Proceedings of the Fifth International Joint Conference on Artificial Intelligence, 1977, 183-191.

Kaplan, S. J.: Indirect Responses to Loaded Questions, Proceedings of the Second Workshop on Theoretical Issues in Natural Language Processing, Urbana-Champaign, IL, July 1978.

Lewis, D.: Counterfactuals, Harvard University Press, Cambridge, MA, 1973.

Moore, R.: Handling Complex Queries in a Distributed Data Base, TN-170, AI Center, SRI International, October 1979.

Nash-Webber, B.: Semantic Interpretation Revisited, BBN Report #3335, Bolt, Beranek, and Newman, Cambridge, MA, 1976.

Quine, W. V. O.: Reference and Modality, in Reference and Modality, Leonard Linsky, Ed., Oxford University Press, Oxford, 1971.

Teitelman, W.: Interlisp Reference Manual, Xerox PARC, Palo Alto, 1978.

Wiederhold, G. and R. El-Masri: The Structural Model for Database Design, Proceedings of the International Conference on Entity-Relationship Approach to Systems Analysis and Design, North Holland Press, 1979, pp. 247-267.
Dynamic Strategy Selection in Flexible Parsing

Jaime G. Carbonell and Philip J. Hayes
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

Robust natural language interpretation requires strong semantic domain models, "fail-soft" recovery heuristics, and very flexible control structures. Although single-strategy parsers have met with a measure of success, a multi-strategy approach is shown to provide a much higher degree of flexibility, redundancy, and ability to bring task-specific domain knowledge (in addition to general linguistic knowledge) to bear on both grammatical and ungrammatical input. A parsing algorithm is presented that integrates several different parsing strategies, with case-frame instantiation dominating. Each of these parsing strategies exploits different types of knowledge, and their combination provides a strong framework in which to process conjunctions, fragmentary input, and ungrammatical structures, as well as less exotic, grammatically correct input. Several specific heuristics for handling ungrammatical input are presented within this multi-strategy framework.

1. Introduction

When people use language spontaneously, they often do not respect grammatical niceties. Instead of producing sequences of grammatically well-formed and complete sentences, they often miss out or repeat words or phrases, break off what they are saying and rephrase or replace it, speak in fragments, or use otherwise incorrect grammar. While other people generally have little trouble comprehending ungrammatical utterances, most natural language computer systems are unable to process errorful input at all. Such inflexibility in parsing is a serious impediment to the use of natural language in interactive computer systems. Accordingly, we [6] and other researchers, including Weischedel and Black [14], and Kwasny and Sondheimer [9], have attempted to produce flexible parsers, i.e. parsers that can accept ungrammatical input, correcting the errors when possible, and generating several alternative interpretations if appropriate.

While different in many ways, all these approaches to flexible parsing operate by applying a uniform parsing process to a uniformly represented grammar. Because of the linguistic performance problems involved, this uniform procedure cannot be as simple and elegant as the procedures followed by parsers based on a pure linguistic competence model, such as Parsifal [10]. Indeed, their parsing procedures may involve several strategies that are applied in a predetermined order when the input deviates from the grammar, but the choice of strategy never depends on the specific type of construction being parsed.

In light of experience with our own flexible parser, we have come to believe that such uniformity is not conducive to good flexible parsing. Rather, the strategies used should be dynamically selected according to the type of construction being parsed. For instance, partial linear pattern matching may be well suited to the flexible parsing of idiomatic phrases, or specialized noun phrases such as names, dates, or addresses (see also [5]), but case constructions, such as noun phrases with trailing prepositional phrases, or imperative phrases, require case-oriented parsing strategies. The underlying principle is simple: the appropriate knowledge must be brought to bear at the right time -- and it must not interfere at other times.
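As a purely illustrative sketch of this principle (the strategy names and interface below are hypothetical, not part of the parser described in this paper), dynamic strategy selection amounts to dispatching each expected constituent to a routine specialized for its construction type:

    # Hypothetical sketch: dynamic selection of a parsing strategy by expected
    # construction type.  The strategy functions are placeholders.
    def parse_idiom(tokens):        ...
    def parse_proper_name(tokens):  ...
    def parse_date(tokens):         ...
    def parse_case_frame(tokens):   ...

    STRATEGY_BY_CONSTRUCTION = {
        "idiom":       parse_idiom,
        "proper-name": parse_proper_name,
        "date":        parse_date,
        "case-frame":  parse_case_frame,   # e.g. imperatives, postnominal cases
    }

    def parse_constituent(expected_construction, tokens):
        # Bring only the relevant knowledge to bear; other strategies never interfere.
        strategy = STRATEGY_BY_CONSTRUCTION[expected_construction]
        return strategy(tokens)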
Though the initial motivation for this approach sprang from the needs of flexible parsing, such construction-specific techniques can provide important benefits even when no grammatical deviations are encountered, as we will show. This observation may be related to the current absence of any single universal parsing strategy capable of exploiting all knowledge sources (although ELI [12] and its offspring [2] are efforts in this direction). Our objective here is not to create the ultimate parser, but to build a very flexible and robust task-oriented parser capable of exploiting all relevant domain knowledge as well as more general syntax and semantics.

The initial application domain for the parser is the central component of an interface to various computer subsystems (or tools). This interface, and therefore the parser, should be adaptable to new tools by substituting domain-specific data bases (called "tool descriptions") that govern the behavior of the interface, including the invocation of parsing strategies, dictionaries and concepts, rather than requiring any domain adaptations by the interface system itself.

With these goals in mind, we proceed to give details of the kinds of difficulties that a uniform parsing strategy can lead to, and show how dynamically-selected construction-specific techniques can help. We list a number of such specific strategies; then we focus on our initial implementation of two of these strategies and the mechanism that dynamically selects between them while parsing task-oriented natural language imperative constructions. Imperatives were chosen largely because commands and queries given to a task-oriented natural language front end often take that form [6].

2. Problems with a Uniform Parsing Strategy

Our present flexible parser, which we call FlexP, is intended to parse correctly input that corresponds to a fixed grammar, and also to deal with input that deviates from that grammar by erring along certain classes of common ungrammaticalities. Because of these goals, the parser is based on the combination of two uniform parsing strategies: bottom-up parsing and pattern matching. The choice of a bottom-up rather than a top-down strategy was based on our need to recognize isolated sentence fragments, rather than complete sentences, and to detect restarts and continuations after interjections. However, since completely bottom-up strategies lead to the consideration of an unnecessary number of alternatives in correct input, the algorithm used allowed some of the economies of top-down parsing for non-deviant input. Technically speaking, this made the parser left-corner rather than bottom-up. We chose to use a grammar of linear patterns rather than, say, a transition network because pattern matching meshes well with bottom-up parsing by allowing lookup of a pattern from the presence in the input of any of its constituents; because pattern matching facilitates recognition of utterances with omissions and substitutions when patterns are recognized on the basis of partial matches; and because pattern matching is necessary for the recognition of idiomatic phrases. More details of the justifications for these choices can be found in [6].

1 This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under contract F33615-78-C-1551, and in part by the Air Force Office of Scientific Research under Contract F49620-79-C-0143.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA, the Air Force Office of Scientific Research, or the US government.

FlexP has been tested extensively in conjunction with a gracefully interacting interface to an electronic mail system [1]. "Gracefully interacting" means that the interface appears friendly, supportive, and robust to its user. In particular, graceful interaction requires the system to tolerate minor input errors and typos, so a flexible parser is an important component of such an interface. While FlexP performed this task adequately, the experience turned up some problems related to the major theme of this paper. These problems are all derived from the incompatibility between the uniform nature of the grammar representation and the kinds of flexible parsing strategies required to deal with the inherently non-uniform nature of some language constructions. In particular:

• Different elements in the pattern of a single grammar rule can serve radically different functions and/or exhibit different ease of recognition. Hence, an efficient parsing strategy should react to their apparent absence, for instance, in quite different ways.

• The representation of a single unified construction at the language level may require several linear patterns at the grammar level, making it impossible to treat that construction with the integrity required for adequate flexible parsing.

The second problem is directly related to the use of a pattern-matching grammar, but the first would arise with any uniformly represented grammar applied by a uniform parsing strategy. For our application, these problems manifested themselves most markedly by the presence of case constructions in the input language. Thus, our examples and solution methods will be in terms of integrating case-frame instantiation with other parsing strategies.

Consider, for example, the following noun phrase with a typical postnominal case frame: "the messages from Smith about ADA pragmas dated later than Saturday". The phrase has three cases marked by "from", "about", and "dated later than". This type of phrase is actually used in FlexP's current grammar, and the basic pattern used to recognize descriptions of messages is:

    <?determiner *MessageAdj MessageHead *MessageCase>

which says that a message description is an optional (?) determiner, followed by an arbitrary number (*) of message adjectives, followed by a message head word (i.e. a word meaning "message"), followed by an arbitrary number of message cases. In the example, "the" is the determiner, there are no message adjectives, "messages" is the message head word, and there are three message cases: "from Smith", "about ADA pragmas", and "dated later than Saturday". Because each case has more than one component, each must be recognized by a separate pattern:

    <%from Person>
    <%about Subject>
    <%since Date>

Here % means anything in the same word class; "dated later than", for instance, is equivalent to "since" for this purpose. These patterns for message descriptions illustrate the two problems mentioned above: the elements of the case patterns have radically different functions -- the first elements are case markers, and the second elements are the actual subconcepts for the case.
Since case indicators are typically much more restricted in expression, and therefore much easier to recognize, than their corresponding subconcepts, a plausible strategy for a parser that "knows" about case constructions is to scan input for the case indicators, and then parse the associated subconcepts top-down. This strategy is particularly valuable if one of the subconcepts is malformed or of uncertain form, such as the subject case in our example. Neither "ADA" nor "pragmas" is likely to be in the vocabulary of our system, so the only way the end of the subject field can be detected is by the presence of the case indicator which follows it. However, the present parser cannot distinguish case indicators from case fillers -- both are just elements in a pattern with exactly the same computational status -- and hence it cannot use this strategy.

The next section describes an algorithm for flexibly parsing case constructions. At the moment, the algorithm works only on a mixture of case constructions and linear patterns, but eventually we envisage a number of specific parsing algorithms, one for each of a number of construction types, all working together to provide a more complete flexible parser. Below, we list a number of the parsing strategies that we envisage might be used. Most of these strategies exploit the constrained task-oriented nature of the input language:

• Case-Frame Instantiation is necessary to parse general imperative constructs and noun phrases with postnominal modifiers. This method has been applied before with some success to linguistic or conceptual cases [12] in more general parsing tasks. However, it becomes much more powerful and robust if domain-dependent constraints among the cases can be exploited. For instance, in a file-management system, the command "Transfer UPDATE.FOR to the accounts directory" can be easily parsed if the information in the unmarked case of transfer ("update.for" in our example) is parsed by a file-name expert, and the destination case (flagged by "to") is parsed not as a physical location, but as a logical entity inside a machine. The latter constraint enables one to interpret "directory" not as a phonebook or bureaucratic agency, but as a reasonable destination for a file in a computer.

• Semantic Grammars [8] prove useful when there are ways of hierarchically clustering domain concepts into functionally useful categories for user interaction. Semantic grammars, like case systems, can bring domain knowledge to bear in disambiguating word meanings. However, the central problem of semantic grammars is non-transferability to other domains, stemming from the specificity of the semantic categorization hierarchy built into the grammar rules. This problem is somewhat ameliorated if this technique is applied only to parsing selected individual phrases [13], rather than being responsible for the entire parse. Individual constituents, such as those recognizing the initial segment of factual queries, apply in many domains, whereas a constituent recognizing a clause about file transfer is totally domain specific. Of course, this restriction calls for a different parsing strategy at the clause and sentence level.

• (Partial) Pattern Matching on strings, using non-terminal semantic-grammar constituents in the patterns, proves to be an interesting generalization of semantic grammars. This method is particularly useful when the patterns and semantic grammar non-terminal nodes interleave in a hierarchical fashion.

• Transformations to Canonical Form prove useful both for domain-dependent and domain-independent constructs. For instance, the following rule transforms possessives into "of" phrases, which we chose as canonical:

    [<ATTRIBUTE> in possessive form, <VALUE> legitimate for attribute]
      -> [<VALUE> "OF" <ATTRIBUTE> in simple form]

Hence, the parser need only consider "of" constructions ("file's destination" => "destination of file"). These transforms simplify the pattern matcher and the semantic grammar application process, especially when transformed constructions occur in many different contexts. A rudimentary form of string transformation was present in PARRY [11].

• Target-specific methods may be invoked to parse portions of sentences not easily handled by the more general methods. For instance, if a case grammar determines that the case just signaled is a proper name, a special name-expert strategy may be called.
e Transformations to Canonical Form prove useful both for domain-dependent and domain.independent constructs. For instance, the following rule transforms possessives into "of" phrases, which we chose as canonical: ['<ATTRZBUTE> tn possessive form. <VALUE> lagltfmate for attribute] -> [<VALUE> "OF" <ATTRZBUTE> In stipple forll] Hence, the parser need only consider "of" constructions ("file's destination" => "destinaUon of file"). These transforms simplify the pattern matcher and semantic grammar application process, especially when transformed constructions occur in many different contextS. A rudimentary form of string transformation was present in PARRY [11 ]. e Target-specific methods may be invoked to portions of sentences not easdy handlecl by The more general methods. For instance, if a case-grammar determines that the case just s=gnaled is a proper name, a special name- expert strategy may be called. This expe~ knows that nantes 144 can contain unknown words (e.g., Mr. Joe Gallen D'Aguila is obviously a name with D'Aguila as the surname) but subject to ordering constraints and morphological preferences. When unknown words are encountered in other positions in a sentence, the parser may try morphological decomposition, spelling correction, querying the user, or more complex processes to induce the probable meaning of unknown words, such as the project-and-integrate technique described in [3]. Clearly these unknown.word strategies ought to be suppressed in parsing person names. 3. A Case-Oriented Parsing Strategy As part of our investigations in tosk-oriented parsing, we have implemented (in edditio,n to FlexP) a pure case-frame parser exploiting domain-specific case constraints stored in a declarative data structure, and a combination pattern-match, semantic grammar, canonical- transform parser, All three parsers have exhibited a measure of success, but more interestingly, the strengths of one method appear to overlap with the weaknesses of a different method. Hence, we are working towards a single parser that dynamically selects its parsing strategy to suit the task demands. Our new parser is designed primarily for task domains where the prevalent forms of user input are commands and queries, both expressed in imperative or pseudo-imperative constructs. Since in imperative constructs the initial word (or phrase), establishes the case.frame for the entire utterance, we chose the case-frame parsing strategy as priman/. In order to recognize an imperative command, and to instantiate each case, other parsing strategies are invoked. Since the parser knows what can fill.a particular case, it can choosethe parsing strategy best suited for linguistic constructions expressing that type of information. Moreover, it can pass any global constraints from the case frame or from other instantiated cases to the subsidiary parsers . thus reducing potential ambiguity, speeding the parse, and enhancing robustness. Consider our multi-strategy parsing algorithm as described below. Input is assumed to be in the imperative form: 1. Apply string PATTERN-MATCH to the initial segment of the input using only the patterns previously indexed as corresponding to command words/phrases in imperative constructions. Patterns contain both optional constituents and non.terminal symbols that expand according to a semantic grammar. (E.g., "copy" and "do a file transfer" are synonyms for the same command in a file management system.) 2. 
Access the CASE.FRAME associated with the command just recognized, and push it onto the context stack. In the above example, the case.frame is indexed under the token <COPY),, which was output by the pattern matcller, The case frame consists of list of pairs ([case.marker] [case-filler. information[, ...). 3. Match the input with the case rharkers using the PATTERN- MATCH system descriOecl above." If no match occurs, assume the input corresponds to the unmarked case (or the first unmarked case, if more than one is present), and proceed to the next step. 4. Apply the Darsin(7 strategy indicated by the type of construct expected as a case filler. Pass any available case constraints to the suO-f~arser. A partial list of parsing strategies indicated by expected fillers is: • Sub-imperative -- Case.frame parser, starting with the command-identification pattern match above. • Structured-object (e.g., a concept with subattributes) .- Case-frame parser, starting with the pattern-marcher invoked on the list of patterns corresponding to the names (or compound names) of the semantically permissible structured objects, followed by case-frame parsing of any present subattributes. • Simple Object .- Apply the pattern matcher, using only the patterns indexed as relevant in the case-filler- information field. Special Object -- Apply the .parsing strategy applicable to that type of special object (e.g., proper names, dates, quoted strings, stylized technical jargon, etc...) None of the above -- (Errorful input or parser deficiency) Apply the graceful recovery techniques discussed below. 5. If an embedded case frame is. activated, push it onto the context stack. 6. When a case filler is instantiated, remove the <case.marker), <case-filler-information> pair from the list of active cases in the appropriate case frame, proceed to the next case- marker, and repeat the process above until the input terminates. 7, ff all the cases in a case frame have been instantiated, pop the context stack until that case frame is no longer in it. (Completed frames typically re~de at the top of the stack.) 8. If there is more than One case frame on the stack when trying to parse additional inpuL apply the following procedure: • If the input only matches a case marker in one frame, proceed to instantiste the corresponding case-filler as outlined above. Also, if the matched c8~e marker is not on the most embedded case frame (i.e., at the top of the context stack), pop the stack until the frame whose case marker was matched appears at the top of the stack. • If no case markers are matched, attempt to parse unmarked cases, starting with the most deeoly embedded case frame (the top of the context stack) and proceeding outwards. If one is matched, pop the context stack until the corresponding case frame is at the top. Then, instantiats the case filler, remove the case from the active case frame, and proceed tO parse additional input. If more then one unmarked case matches the input, choose the most embedded one (i.e., the most recent context) and save the stats of the parse on the global history stack. (This soggeat '= an ambiguity that cannot be resolved with the information at hand.) • If the input matches more than one case marker in the context stack, try to parse the case filler via the indexed parsing strategy for each filler.information slot corresponding to a matched case marker. 
If more than one case filler parses (this is a somewhat rare situation, indicating underconstrained case frames or truly ambiguous input), save the state in the global history stack and pursue the parse assuming the most deeply embedded constituent. [Our case-frame attachment heuristic favors the most local attachment permitted by semantic case constraints.]

9. If a conjunction or disjunction occurs in the input, cycle through the context stack trying to parse the right-hand side of the conjunction as filling the same case as the left-hand side. If no such parse is feasible, interpret the conjunction as top-level, e.g., as two instances of the same imperative, or two different imperatives. If more than one parse results, interact with the user to disambiguate. To illustrate this simple process, consider:

    "Transfer the programs written by Smith and Jones to ..."
    "Transfer the programs written in Fortran and the census data files to ..."
    "Transfer the programs written in Fortran and delete ..."

The scope of the first conjunction is the "author" subattribute of program, whereas the scope of the second conjunction is the unmarked "object" case of the transfer action. Domain knowledge in the case-filler information of the "object" case in the "transfer" imperative inhibits "Jones" from matching a potential object for electronic file transfer. Similarly, "census data files" are inhibited from matching the "author" subattribute of a program. Thus conjunctions in the two syntactically comparable examples are scoped differently by our semantic-scoping rule relying on domain-specific case information. "Delete" matches no active case filler, and hence it is parsed as the initial segment of a second conjoined utterance. Since "delete" is a known imperative, this parse succeeds.

10. If the parser fails to parse additional input, pop the global history stack and pursue an alternate parse. If the stack is empty, invoke the graceful recovery heuristics. Here the DELTA-MIN method [4] can be applied to improve upon depth-first unwinding of the stack in the backtracking process.

11. If the end of the input is reached, and the global history stack is not empty, pursue the alternate parses. If any survive to the end of the input (this should not be the case unless true ambiguity exists), interact with the user to select the appropriate parse (see [7]).

The need for embedded case structures and ambiguity resolution based on domain-dependent semantic expectations of the case fillers is illustrated by the following pair of sentences:

    "Edit the programs in Fortran"
    "Edit the programs in Teco"

"Fortran" fills the language attribute of "program", but cannot fill either the location or instrument case of Edit (both of which can be signaled by "in"). In the second sentence, however, "Teco" fills the instrument case of the verb "edit" and none of the attributes of "program". This disambiguation is significant because in the first example the user specified which programs (s)he wants to edit, whereas in the second example (s)he specified how (s)he wants to edit them.

The algorithm presented is sufficient to parse grammatical input. In addition, since it operates in a manner specifically tailored to case constructions, it is easy to add modifications dealing with deviant input. Currently, the algorithm includes the following steps that deal with ungrammaticality:
12. If step 4 fails, i.e. a filler of appropriate type cannot be parsed at that position in the input, then repeat step 3 at successive points in the input until it produces a match, and continue the regular algorithm from there. Save all words not matched on a SKIPPED list. This step takes advantage of the fact that case markers are often much easier to recognize than case fillers to realign the parser if it gets out of step with the input (because of unexpected interjections, or other spurious or missing words).

13. If words are on SKIPPED at the end of the parse, and cases remain unfilled in the case frames that were on the context stack at the time the words were skipped, then try to parse each of the case fillers against successive positions of the skipped sequences. This step picks up cases for which the marker was incorrect or garbled.

14. If words are still on SKIPPED, attempt the same matches, but relax the pattern matching procedures involved.

15. If this still does not account for all the input, interact with the user by asking questions focused on the uninterpreted part of the input. The same focused interaction technique (discussed in [7]) is used to resolve semantic ambiguities in the input.

16. If user interaction proves impractical, apply the project-and-integrate method [3] to narrow down the meanings of unknown words by exploiting syntactic, semantic and contextual cues.

These flexible parsing steps rely on the construction-specific aspects of the basic algorithm, and would not be easy to emulate in either a syntactic ATN parser or one based on a pure semantic grammar. A further advantage of our mixed-strategy approach is that the top-level case structure, in essence, partitions the semantic world dynamically into categories according to the semantic constraints on the active case fillers. Thus, when a pattern matcher is invoked to parse the recipient case of a file-transfer case frame, it need only consider patterns (and semantic-grammar constructs) that correspond to logical locations inside a computer. This form of expectation-driven parsing in restricted domains adds a two-fold effect to its robustness:

• Many spurious parses are never generated (because patterns yielding potentially spurious matches are never tried in inappropriate contexts).

• Additional knowledge (such as additional semantic grammar rules, etc.) can be added without a corresponding linear increase in parse time, since the case frames focus only upon the relevant subset of patterns and rules. Thus the efficiency of the system may actually increase with the addition of more domain knowledge (in effect enabling the case frames to further restrict context). This behavior makes it possible to incrementally build the grammar without the ever-present fear that a new extension may make the entire parser fail due to an unexpected application of that extension in the wrong context.
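To make the preceding control structure concrete, here is a rough sketch of the marker-driven loop of steps 1-7 in Python (the actual parser is not implemented this way; the frame format and all names are illustrative assumptions, and embedded case frames, the history stack, and the recovery steps 12-16 are omitted):

    # Illustrative sketch of the case-frame loop.  A case frame lists
    # (case-name, marker, expected-filler-type); the filler type selects the
    # subsidiary parsing strategy.
    CASE_FRAMES = {
        "<COPY>": [("object", None, "simple-object"),       # unmarked case
                   ("destination", "to", "simple-object")],
    }

    def parse_imperative(tokens, match_command, match_marker, filler_parsers):
        command, tokens = match_command(tokens)              # step 1
        frame = list(CASE_FRAMES[command])                   # step 2 (one frame only)
        filled = {}
        while tokens and frame:
            case, tokens = match_marker(tokens, frame)       # step 3
            if case is None:                                 # no marker: unmarked case
                case = next(c for c in frame if c[1] is None)
            name, _marker, filler_type = case
            parse_filler = filler_parsers[filler_type]       # step 4: strategy dispatch
            filled[name], tokens = parse_filler(tokens, filled)  # constraints passed on
            frame.remove(case)                               # step 6
        return command, filled                               # step 7: frame exhausted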
In closing, we note that the algorithm described above does not mention interaction with morphological decomposition or spelling correction. Lexical processing is particularly important for robust parsing; indeed, based on our limited experience, lexical-level errors are a significant source of deviant input. The recognition and handling of lexical-deviation phenomena, such as abbreviations and misspellings, must be integrated with the more usual morphological analysis. Some of these topics are discussed independently in [6]. However, integrating resilient morphological analysis with the algorithm we have outlined is a problem we consider very important and urgent if we are to construct a practical flexible parser.

4. Conclusion

To summarize, uniform parsing procedures applied to uniform grammars are less than adequate for parsing ungrammatical input. As our experience with such an approach shows, the uniform methods are unable to take full advantage of domain knowledge, differing structural roles (e.g., case markers and case fillers), and relative ease of identification among the various constituents in different types of constructions. Instead, we advocate integrating a number of different parsing strategies tailored to each type of construction as dictated by the application domain. The parser should dynamically select parsing strategies according to what type of construction it expects in the course of the parse. We described a simple algorithm designed along these lines that makes dynamic choices between two parsing strategies, one designed for case constructions and the other for linear patterns. While this dynamic selection approach was suggested by the needs of flexible parsing, it also seemed to give our trial implementation significant efficiency advantages over single-strategy approaches for grammatical input.

5. References

1. Ball, J. E. and Hayes, P. J. Representation of Task-Independent Knowledge in a Gracefully Interacting User Interface. Proc. 1st Annual Meeting of the American Association for Artificial Intelligence, Stanford University, August 1980, pp. 116-120.

2. Birnbaum, L. and Selfridge, M. Conceptual Analysis in Natural Language. In Inside Computer Understanding, R. Schank and C. Riesbeck, Eds., Erlbaum Assoc., New Jersey, 1980, pp. 318-353.

3. Carbonell, J. G. Towards a Self-Extending Parser. Proceedings of the 17th Meeting of the Association for Computational Linguistics, ACL-79, 1979, pp. 3-7.

4. Carbonell, J. G. DELTA-MIN: A Search-Control Method for Information-Gathering Problems. Proceedings of the First AAAI Conference, AAAI-80, August 1980.

5. Gershman, A. V. Knowledge-Based Parsing. Ph.D. Thesis, Yale University, April 1979. Computer Science Dept. report #156.

6. Hayes, P. J. and Mouradian, G. V. Flexible Parsing. Proc. of 18th Annual Meeting of the Assoc. for Comput. Ling., Philadelphia, June 1980, pp. 97-103.

7. Hayes, P. J. Focused Interaction in Flexible Parsing. Carnegie-Mellon University Computer Science Department, 1981.

8. Hendrix, G. G., Sacerdoti, E. D. and Slocum, J. Developing a Natural Language Interface to Complex Data. Tech. Rept., Artificial Intelligence Center, SRI International, 1976.

9. Kwasny, S. C. and Sondheimer, N. K. Ungrammaticality and Extra-Grammaticality in Natural Language Understanding Systems. Proc. of 17th Annual Meeting of the Assoc. for Comput. Ling., La Jolla, Ca., August 1979, pp. 19-23.

10. Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, Mass., 1980.

11. Parkison, R. C., Colby, K. M., and Faught, W. S. "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Artificial Intelligence 9 (1977), 111-134.

12. Riesbeck, C. and Schank, R. C. Comprehension by Computer: Expectation-Based Analysis of Sentences in Context. Tech. Rept. 78, Computer Science Department, Yale University, 1976.
13. Waltz, D. L. and Goodman, A. B. Writing a Natural Language Data Base System. Proc. IJCAI-77, 1977, pp. 144-150.

14. Weischedel, R. M. and Black, J. Responding to Potentially Unparseable Sentences. Tech. Rept. 79/3, Dept. of Computer and Information Sciences, University of Delaware, 1979.
A Construction-Specific Approach to Focused Interaction in Flexible Parsing

Philip J. Hayes
Carnegie-Mellon University
Pittsburgh, PA 15213

Abstract

A flexible parser can deal with input that deviates from its grammar, in addition to input that conforms to it. Ideally, such a parser will correct the deviant input: sometimes, it will be unable to correct it at all; at other times, correction will be possible, but only to within a range of ambiguous possibilities. This paper is concerned with such ambiguous situations, and with making it as easy as possible for the ambiguity to be resolved through consultation with the user of the parser -- we presume interactive use. We show the importance of asking the user for clarification in as focused a way as possible. Focused interaction of this kind is facilitated by a construction-specific approach to flexible parsing, with specialized parsing techniques for each type of construction, and specialized ambiguity representations for each type of ambiguity that a particular construction can give rise to. A construction-specific approach also aids in task-specific language development by allowing a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition.

1. Introduction

There has been considerable interest recently in the topic of flexible parsing, i.e. the parsing of input that deviates to a greater or lesser extent from the grammar expected by the parsing system. This interest springs from very practical concerns with the increasing use of natural language in computer interfaces. When people attempt to use such interfaces, they cannot be expected always to conform strictly to the interface's grammar, no matter how loose and accommodating that grammar may be. Whenever people spontaneously use a language, whether natural or artificial, it is inevitable that they will make errors of performance. Accordingly, we [3] and other researchers, including Weischedel and Black [6], and Kwasny and Sondheimer [5], have constructed flexible parsers which accept ungrammatical input, correcting the errors whenever possible, generating several alternative interpretations if more than one correction is plausible, and in cases where the input cannot be massaged into full grammaticality, producing as complete a partial parse as possible.

If a flexible parser being used as part of an interactive system cannot correct ungrammatical input with total certainty, then the system user must be involved in the resolution of the difficulty or the confirmation of the parser's correction. The approach taken by Weischedel and Black [6] in such situations is to inform the user about the nature of the difficulty, in the expectation that he will be able to use this information to produce a more acceptable input next time, but this can involve the user in substantial retyping. A related technique, adopted by the COOP system [4], is to paraphrase back to the user the one or more parses that the system has produced from the user's input, and to allow the user to confirm the parse or select one of the ambiguous alternatives. This approach still means a certain amount of work for the user.
He must check the paraphrase to see if the system has interpreted what he said correctly and without omission, and in the case of ambiguity, he must compare the several paraphrases to see which most ClOsely corresponds 1This i'e~earch ~ =k~oneoreO by the Air Force Office Of Scientific ReseMch url~" Contract F49620-79.C-0143, The views anO conclusions contained in this document thOSe Of the author and sttould not be interpreted a.s representing [he olficial policies, eJther exl~'e~e¢l or =mDlieO. ol the Air Force Ollice of Scicmlifi¢ Researcll or the US Government to what he meant, a non-trivial task if the input is lengthy and the differences small. Experience with our own flexible parser suggests that the way requests for clarification in such situations are phrased makes a big difference to the ease and accuracy with which the user can correct his errors, and that the user is most helped by a request which focuses as tightly as possible on the exact source and nature of the difficulty. Accordingly, we have adopted the following simple principle for the new flexible parser we are presently constructing: when the parser cannot uniquely resolve a problem in its input, it should as/( the user for a correction in as direct and focused a manner as l~ossible. Furthermore, this request for clarification should not prejudice the processing of the rest of the input, either before or after the problem occurs, in other words, if the system cannot parse one segment of the input, it should be able to bypass it, parse the remainder, and then ask the user to restate that and only that segment of the input. Or again, if a small part of the input' is missing or garbled and there are a limited number of possibilities for what ought to be there, the parser should be able to indicate the list of possibilities together with the context from which the information is missing rather than making the user compare several complete paraphrases of the input that differ only slightly. In what follows, we examine some of the implications of these ideas. We restrict our attention to cases in which a flexible parser can correct an input error or ungrammaticaUty, but only to within a constrained set of alternatives. We consider how to produce a focused ambiguity resolution request for the user to distinguish between such a set of corrections. We conclude that: • the problem must be tackled on a construction.specific basis, • and special representations must be devised for all the structural ambiguities that each construction type can give rise to. We illustrate these arguments with examples involving case constructions. There are additional independent reasons for adopting a construction,specific approach to flexible parsing, including increased efficiency and accuracy in correcting ungrammaticality, increased efficiency in parsing grammatical input, and ease of task.specific language definition. The first two of these are discussed in [2], and this paper gives details of the third. 2. Construction-Specific Ambiguity Representations In this section we report on experience with our earlier flexible parser, RexP [3], and show why it is ill.suited to the generation of focused requests to its user for the resolution of input ambiguities. We propose solutions to the problems with FlexP. We have already incorporated these improvements into an initial version of a new flexible parser [2]. 
The following input is typical for an electronic mail system interface [1] with which FlexP was extensively used:

    the messages from Fred Smith that arrived after Jon 5

The fact that this is not a complete sentence in FlexP's grammar causes no problem. The only real difficulty comes from "Jon", which should presumably be either "Jun" or "Jan". FlexP's spelling corrector can come to the same conclusion, so the output contains two complete parses which are passed on to the next stage of the mail system interface. The first of these parses looks like:

    [DescriptionOf: Message
     Sender: [DescriptionOf: Person
              FirstName: fred
              Surname: smith ]
     AfterDate: [DescriptionOf: Date
                 Month: january
                 DayOfMonth: 5 ] ]

This schematized property list style of representation should be interpreted in the obvious way. FlexP operates by bottom-up pattern matching of a semantic grammar of rewrite rules which allows it to parse directly into this form of representation, which is the form required by the next phase of the interface.

If the next stage has access to other contextual information which allows it to conclude that one or other of these parses was what was intended, then it can proceed to fulfill the user's request. Otherwise it has little choice but to ask a question involving paraphrases of each of the ambiguous interpretations, such as:

    Do you mean:
    1. the messages from Fred Smith that arrived after January 5
    2. the messages from Fred Smith that arrived after June 5

Because it is not focused on the source of the error, this question gives the user very little help in seeing where the problem with his input actually lies. Furthermore, the system's representation of the ambiguity as several complete parses gives it very little help in understanding a response of "June" from the user, a very natural and likely one in the circumstances. In essence, the parser has thrown away the information on the specific source of the ambiguity that it once had, and would need that information again to deal adequately with that response from the user. The recovery of this lost information would require a complicated (if done in a general manner) comparison between the two complete parses.

One straightforward solution to the problem is to augment the output language with a special ambiguity representation. The output from our example might look like:

    [DescriptionOf: Message
     Sender: [DescriptionOf: Person
              FirstName: fred
              Surname: smith ]
     AfterDate: [DescriptionOf: Date
                 Month: [DescriptionOf: AmbiguitySet
                         Choices: (january june) ]
                 DayOfMonth: 5 ] ]

This representation is exactly like the one above except that the Month slot is filled by an AmbiguitySet record. This record allows the ambiguity between january and june to be confined to the Month slot where it belongs, rather than expanding to an ambiguity of the entire input as in the first approach we discussed. By expressing the ambiguity set as a disjunction, it would be straightforward to generate from this representation a much more focused request for clarification such as:

    Do you mean the messages from Fred Smith that arrived after January or June 5?

A reply of "June" would also be much easier to deal with. However, this approach only works if the ambiguity corresponds to an entire slot filler.
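A minimal sketch of how such a focused question could be produced from the AmbiguitySet representation (in Python, with an assumed dictionary encoding and a fixed question template; FlexP itself paraphrases from the parse rather than filling a template):

    # Illustrative sketch: confining an ambiguity to the slot where it arises,
    # and generating a focused clarification question from it.
    parse = {
        "DescriptionOf": "Message",
        "Sender": {"DescriptionOf": "Person", "FirstName": "fred", "Surname": "smith"},
        "AfterDate": {"DescriptionOf": "Date",
                      "Month": {"DescriptionOf": "AmbiguitySet",
                                "Choices": ["January", "June"]},
                      "DayOfMonth": 5},
    }

    def phrase(value):
        """Render a slot filler, turning an AmbiguitySet into a disjunction."""
        if isinstance(value, dict) and value.get("DescriptionOf") == "AmbiguitySet":
            return " or ".join(value["Choices"])
        return str(value)

    date = parse["AfterDate"]
    question = ("Do you mean the messages from Fred Smith that arrived after "
                f"{phrase(date['Month'])} {date['DayOfMonth']}?")
    print(question)   # ... arrived after January or June 5?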
Suppose, for example, that instead of mistyping the month, the user omitted or so completely garbled the preposition "from" that the parser effectively saw:

    the messages Fred Smith that arrived after Jan 5

In the grammar used by FlexP for this particular application, the connexion between Fred Smith and the message could have been expressed (to within synonyms) only by "from", "to", or "copied to". FlexP can deal with this input, and correct it to within this three-way ambiguity. To represent the ambiguity, it generates three complete parses isomorphic to the first output example above, except that Sender is replaced by Recipient and CC in the second and third parses respectively. Again, this form of representation does not allow the system to ask a focused question about the source of the ambiguity or interpret naturally elliptical replies to a request to distinguish between the three alternatives. The previous solution is not applicable because the ambiguity lies in the structure of the parser output rather than at one of its terminal nodes. Using a case notation, it is not permissible to put an "AmbiguitySet" in place of one of the deep case markers.2

To localize such ambiguities and avoid duplicate representation of unambiguous parts of the input, it is necessary to employ a representation like the one used by our new flexible parser:

    [DescriptionOf: Message
     AmbiguousSlots: (
       [PossibleSlots: (Sender Recipient CC)
        SlotFiller: [DescriptionOf: Person
                     FirstName: fred
                     Surname: smith ] ] )
     AfterDate: [DescriptionOf: Date
                 Month: january
                 DayOfMonth: 5 ] ]

This example parser output is similar to the two given previously, but instead of having a Sender slot, it has an AmbiguousSlots slot. The filler of this slot is a list of records, each of which specifies a SlotFiller and a list of PossibleSlots.
We have chosen to achieve this by designing a number of different parsing strategies, one for each type of construction that will be encountered, and making the parser switch between these strategies dynamically. Each such construction-specific parsing strategy encodes detailed information about the types of structural ambiguity possible with that construction and incorporates the specific information necessary to detect and represent these ambiguities.

(Footnote 2: Nor is this problem merely an artifact of case notation; it would arise in exactly the same way for a standard syntactic parse of a sentence such as the well known "I saw the Grand Canyon flying to New York." The difficulty arises because the ambiguity is structural, and structural ambiguities can occur no matter what form of structure is chosen.)

3. Other Reasons for a Construction-Specific Approach

There are additional independent reasons for adopting a construction-specific approach to flexible parsing. Our initially motivating reason was that dynamically selected construction-specific parsing strategies can make corrections to erroneous input more accurately and efficiently than a uniform parsing procedure, but it also turned out that such an approach provided significant advantages in the parsing of correct input as well. These points are covered in detail in [2]. A further advantage is related to language definition. Since our initial flexible parser, FlexP, applied its uniform parsing strategy to a uniform grammar of pattern-matching rewrite rules, it was not possible to cover constructions like the one used in the examples above in a single grammar rule. A postnominal case frame such as the one that covers the message descriptions used as examples above must be spread over several rewrite rules. The patterns actually used in FlexP look like:

<?determiner *MessageAdj MessageHead *MessageCase>
<%from Person>
<%since Date>

The first, top-level pattern says that a message description is an optional (?) determiner, followed by an arbitrary number (*) of message adjectives, followed by a message head word (one meaning "message"), followed by an arbitrary number of message cases. Because each case has more than one component, each must be recognized by a separate pattern like the second and third above. Here % means anything in the same word class; "that arrived after", for instance, is equivalent to "since" for this purpose. The point here is not the details of the pattern notation, but the fact that this is a very unnatural way of representing a postnominal case construction. Not only does it cause problems for a flexible parser, as explained in [2], but it is also quite inconvenient to create in the first place. Essentially, one has to know the specific trick of creating intermediate, and from the language point of view superfluous, categories like MessageCase in the example above. Since we designed FlexP as a tool for use in natural language interfaces, we considered it unreasonable to expect the designer of such a system to have the specialized knowledge to create such obscure rules. Accordingly, we designed a language definition formalism that enabled a grammar to be specified in terms much more natural to the system being interfaced to.
The above construction for the description of a message, for instance, could be defined as a single unified construction without specifying any artificial intermediate constituents, as follows: [ StructureType: Object ObjectName: Message Schema: [ Sender: [FillerType: &Person] Recipient: [FillerType: &Person Number: OneOrMore] Date: [FJllerType: &Oats] After: [FJllerType: &Date UseRestrict ion: OescrJpt ionOnly] ] Syntax: [ SynType: NounPhrase Head: (message note <?piece ?of mail>) Case : ( <%from tSender> <~to ~Recipient> <%dated toots> <%since ~After> .- ) ] ] In addition to the syntax of a message description, this piece of formalism also describes the internal structure of a message, and is intended for use with a larger interface system [1] of which FlexP is a part. The larger system provides an interface to a functional subsystem or tool, and is tool-independent in the sense that it is driven by a declarative data base in which the objects and operations of the tool currently being interfaced to are defined in the formalism shown. The example is, in fact, an abbreviated version of the definition of a message from the declarative tool description for an electronic mail system tool with which, the interface was actually used. In the example, the Syntax slot defines the input syntax for a message; it is used to generate rules for RexP, which ere in turn used to parse input descriptions of messages from a user. FlexP's grammar to parse input for the mail system tool is the onion of all the rules compiled in this way from the Syntax fields of ell the objects and operations in the tool description. The SyntaX field of the example says that the syntax for a message is that of a noun phrase, i.e. any of the given head nouns (angle brackets indicate Oatterns of words), followed by any of the given postnominal Cases, preceded by any adjectives - none are given here, which can in turn be preceded by a determiner. The up.arrows in the Case patterns refer beck to slots of a message, as specified in the Scheme slOt of the example - the information in the Schema sl0t is aJso used by other parts of the interface. The actual grammar rules needed by FlexP are generated by first filling in a pre-stored skeleton pattern for NounPhrase, resulting in: <?determiner ,NesssgeAdJ MesssgeHead ,NessegeCass~; and then generating patterns for each of the Cases, substituting the appropriate FillerTypes for the slot names that appear in the patterns used to define the Cases, thus generating the subpatterns: <~[from Person> <%to Person> <Zdated Data> <Zslnce Date> The slot names are not discarded but used in the results of the subrules to ensure that the objects which match the substituted FillerTypes and up in the correct slot of the result produced by the top-level message rule. This compilation procedure must be performed in its entirety before any input parsing can be undertaken. While this approach to language definition was successful in freeing the language designer from having to know details of the parser essentially irrelevant tO him, it also made the process of language development very much slower. Every time the designer wished to make the smallest change to the grammar, it was necessary to go through the time-consuming compilation procedure. Since the development of a task.specific language typically involves many small changes, this has proved a significant impediment to the usefulness of FlexP. 151 The construction-specific approach offers a way round this problem. 
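To make the cost of that step concrete, the sketch below suggests, in schematic Python rather than the actual compiler, roughly what the compilation described above amounts to; the function and field names are invented, but the output patterns correspond to those shown earlier.

```python
def compile_noun_phrase(obj_name, schema, cases):
    """Expand a NounPhrase Syntax description into FlexP-style patterns."""
    top = f"<?determiner *{obj_name}Adj {obj_name}Head *{obj_name}Case>"
    subpatterns = [f"<%{marker} {schema[slot]}>" for marker, slot in cases]
    return top, subpatterns

schema = {"Sender": "Person", "Recipient": "Person", "Date": "Date", "After": "Date"}
cases = [("from", "Sender"), ("to", "Recipient"), ("dated", "Date"), ("since", "After")]

top, subs = compile_noun_phrase("Message", schema, cases)
print(top)        # <?determiner *MessageAdj MessageHead *MessageCase>
for p in subs:
    print(p)      # <%from Person>  <%to Person>  <%dated Date>  <%since Date>
```

The whole grammar must be regenerated in this way after any change to the language definition, which is exactly the overhead that direct interpretation of the definition avoids.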
Since the parsing strategies and amOiguity representations are specific to particular constructions, it is possible to represent each different type of construction differently - there is no need to translate the language into a uniformly represented grammar. In addition, the constructions in terms of which it iS natural to define a language are exactly those for which there will be specific parsing strategies, and grammar representations. It therefore becomes possible to dispense with the coml~ilation step reauired for FlexP, and instead interpret the language definition directly. This drastically cuts the time needed to make changes to the grammar, and so makes the parsing system much more useful. For example, the Syntax slot of the previous example formalism might become: Syntax: [ SynType: NounPhrase Head: (message note (?piece ?of mail>) Cases : ( [Nerker: %from Slot: Sender] [Harker: 5;to Slot: Recipient,] [Ranker: %elated Slot.: Date] [Harket*: ~since Slot.: After] ) ] This grammar representation, equally convenient from a user's point of view, should be directly interpretable by a .parser specific to the NounPhrase case type of construction. All the information needed by such a parser, including a list of all the case markers, and the type of oblect that fills each case slot is directly enough accessible from this representation that an intermediate compilation phase should not be required, with all the ensuing benefits mentioned above for language development. 2. Carbonell, J. G. and Hayes, P. J. Dynamic strategy Selection in Flexible Parsing. Carnegie.Mellon University Computer Science Department, 1981. 3. Hayes. P. J. and Mouradian, G. V. Flexible Parsing. Proc. of 18th Annual Meeting of the Assoc. for Comput. Ling., Philadelphia, June, 1980, pp. 97-103. 4. Kaplan, S. J. Cooperative Responses from a Porfab/e Natural Language Data Base Quory System. Ph.D. Th., Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, 1979. 5. Kwasny, S. C. and Sondheimerl N. K, Ungrammaticalily and Extra. Grammaticality in Natural Language Understanding Systems. Proc. of 17th Annual Meeting of the Assoc. for Comput. Ling, La Jolla., Ca., August, 1979, pp. 19-23. 6. Weischedel, R. M. and Black, J. Responding to Potentially Unpareeable Sentences. Tech. Regt. 79/3, Dept. of Computer and Information Sciences, University of Delaware, 1979. 4. Conclusion There will be many occasions, even for a flexible parser, when complete, unambiguous parsing of the input tO an interactive system is impossible. In such circumstances, the parser should interact with the system user to resolve the problem. Moreover, to make things as easy as possible for the user, the system should phrase its request for clarafication in terms that fOCUS as tightly as possible on the real source and nature of the difficulty. In the case of ambiguity resolution, this means that the parser must produce a representation of the ambiguity that does not duplicate unambiguous material, This implies specific ambiguity rel~resentations for each b/De of construction recognized by the parser, and corresponding specific parSthg strategies to generate such representations. There are other advantages to a construction- specific approach including more accurate and efficient correction of ungrammaticality, more efficient parsing of grammatical input, and easier task.specific language development. 
This final benefit arises because a construction.specific approach allows a language definition that is natural in terms of the task domain to be interpreted directly without compilation into a uniform grammar formalism, thus greatly speeding the testing of changes to the language definition. Acknowledgement Jaime Carbonell provided valuable comments on earlier drafts of this paper. References 1. Ball. J. E. and Hayes, P.J. Representation of Task.Independent Knowledge in a Gracefully Interacting User Interface. Proc. 1st Annual Meeting of the American Association for Artificiat Intelligence, American Assoc. for Artificial Intelligence, Stanford University, August, 1980, pp. 116-120. ].52
CONTROLLED TRANSFORMATIONAL SENTENCE GENERATION Madeleine Bates Bolt Beranek and Newman, Inc. Robert Ingria Department of Linguistics, MIT I. INTRODUCTION This paper describes a sentence generator that was built primarily to focus on syntactic form and syntactic relationships. Our main goal was to produce a tutorial system for the English language; the intended users of the system are people with language delaying handicaps such as deafness, and people learning English as a foreign language. For these populations, extensive exposure to standard English constructions (negatives, questions, relatlvization, etc.) and their interactions is necessary. • The purpose of the generator was to serve as a powerful resource for tutorial programs that need examples of particular constructions and/or related sentences to embed in exercises or examples for the student. The focus of the generator is thus not so much on what to express as on how to express it in acceptable English. This is quite different from the focus of most other language generation systems. Nonetheless, our system could be interfaced to a more goal-directed semantic component. The mechanism of transformational grammar was chosen because it offered both a way to exercise tight control over the surface syntactic form of a sentence and a good model for the production of groups of sentences that are syntactically related (e.g. the active and passive forms of a transitive sentence). By controlling (at a very high level) the rules that are applied and by examining the detailed syntactic relationships in the tree structures at each end of the derivation, the tutorial part of the system accesses a great deal of information about the syntax of the sentences that are produced by the generator; this knowledge is used to give explanations and hints to the user in the context of the particular exercise that the student is attempting. The transformational generator is composed of three magor parts: a base component that produces base trees, a transformer that applies transformational rules to the trees to derive a surface tree, and a set of mechanisms to control the operation of the first two components. We will discuss each of the components of this system separately. 2. THE BASE COMPONENT The base component is a set of functions that implicitly embody context free rules for creating a tree structure (phrase marker) in the X-bar framework (as discussed by Chomsky (1970), Jackendoff (1974), Bresnan (1975) and others.) In this system, the major syntactic categories (N(oun), V(erb), A(djective) and P(reposltion)) are treated as complex symbols which are decomposable into the features [~N] and [~V]. This yields the following cross- classification of these categories: This work was sponsored by BEH grant ~G007904514. V ÷I Figure i. Features in the X-bar System The feature "N" marks a given category as "nounlike" (and thus corresponds to the traditional grammatical notion of "substantive") while "V" marks a category as "verblike." Nouns and Adjectives are [÷N] because they share certain properties (e.g. Adjectives can be used in nominal contexts; in highly inflected languages, Adjectives and Nouns typically share the same inflectlonal paradigms, etc.) Adjectives and Verbs are [+V] because they share (among other things) various morphological traits (e.g. certain verbal forms, such as participles, have adjectival properties). Verbs and Prepositions are I-N] because they display common complement selection attributes (e.g. 
they both regularly take Nominal complements that bear Accusative Case.) (For further discussion of the issue of feature decomposition, and for some alternative proposals, see Jackendoff (1978) and George (1980a, Section 2; 1980b, Section 2).) In addition, each syntactic category contains a specification of its rank (given in terms of number of bars, hence the term "X-bar" system). For instance, a Noun (N) is of rank 0 and is marked with no bars whereas the Noun Phrase which it heads is of the same category but different (higher) rank. Intermediate structures are also permitted; for instance, V * (read "V bar") is that portion of the Verb Phrase which consists of a Verb and its complements (e.g. direct and indirect objects, clausal complements, prepositional phrases, etc.) while V ~ (read "V double bar") includes V ~ as well as Auxiliary elements. For our purposes, we have adopted a uniform two-level structure across categories~ that is, each category X is taken to have X ~* as its highest rank, so that Noun Phrase (NP) in our system is N ~, Verb Phrase is V ~', etc. Minor categories (such as DET(erminer), AUX(ilfary), NEG(ative), etc.) stand outside this system, as do S(entence) and S ~ (a sort of super sentence, which contains S and clause introducing elements (or "subordinating conjunctions") such as that). These categories are not decomposable into the features [÷N] and [+V], and, except for S and S" , they-do not ~ave different ranks. (It should be noted that the adoption of a uniform two-level hypothesis and the placlng of S and S ~ outside of the normal X-bar system are not uncontroversial--see e.g. Jackendoff (1978) and George (1980a, Section 2; 1980b, Section 2). However, these assumptions are found in many variants of the X-bar framework and are adequate for our purposes.) 153 An example of the internal structure of the P'" corresponding to the phrase "to the sad boys" is given below: p'" [ -v -N ] P" [ -V -N ] P [ -V-N ] to N ~ [ ~N -V PER. 3 +DEF WU.PL +HUMAN GENDER.MALE ] DET [ +DEF ] the A ~" [ +N +V ] A ~ [ +N +V ] A [ +N +V ] sad N ~ [ +N -V PER. 3 +DEF NU.PL +HUMAN GENDER.MALE ] N [ +N -V PER. 3 +DEF NU.PL +HUMAN GENDER.MALE ] boy Figure 2. Part of A Sample Base Structure This system of cross-classification by features and by rank permits the creation of transformations which can refer to a specific rank or feature without referring to a specific major category. (See Bresnan (1975) for further discussion of this point.) For example, the transformation which fronts WH- words to form WH-Questions treats any X ~ category as its target and, hence, can be used to question any of the major categories (e.g. A'~--"how big is it?"; N''--"what did they do?" "which men left?"; P~'--"to whom did you give it?"). Similarly, the transformation which marks Accusative Case on pronouns applies only to those N~'s which follow a I-N] category; i.e. only to those N~s which are the objects of Verbs or Prepositions. This allows us to create extremely versatile transformations which apply in a variety of contexts, and frees us from the necessity of creating several transformations, each of which essentially replicates the Structural Description and Structural Change of the others, differing only in the category of the affected term. A set of constraints (discussed further below) is the input to the base component and determines the type of base structure which is produced. 
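As a rough illustration only (not the generator's own representation), syntactic categories of this kind can be modelled as a head feature bundle plus a bar rank; the class and field names in the sketch below are assumptions made for exposition.

```python
from dataclasses import dataclass, field

# the [+/-N, +/-V] decomposition of the four major categories
FEATURES = {"N": {"N": True,  "V": False},
            "V": {"N": False, "V": True},
            "A": {"N": True,  "V": True},
            "P": {"N": False, "V": False}}

@dataclass
class Category:
    head: str                  # "N", "V", "A" or "P"
    bars: int = 0              # 0 = X, 1 = X', 2 = X'' (so NP is N'')
    feats: dict = field(default_factory=dict)

    def matches(self, **required):
        """True if the node carries the required features; e.g. matches(N=False)
        picks out the [-N] categories (Verbs and Prepositions)."""
        combined = {**FEATURES[self.head], **self.feats}
        return all(combined.get(k) == v for k, v in required.items())

np = Category("N", bars=2, feats={"PER": 3, "NU": "PL"})
print(np.matches(N=True, V=False))   # True: a nominal category
print(np.matches(N=False))           # False: not [-N]
```

A transformation stated over a feature value or a rank, rather than over a named category, can then apply to any major category that satisfies the specification, which is the versatility described above.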
A base structure has both the usual features on the nodes (category features such as [+N] and [-v], and selectional features such as [+PROPER]) and some additional diacritic features (such as [-C], for case marking) which are used to ,govern the application of certain transformations. Lexical insertion is an integral part of the construction of the tree by the base component. It is not essential that words be chosen for the sentence at this time, but it is convenient because additional features in the structure (such as [+HUMAN], [+MALE]) are needed to guide some transformations (for instance, the insertion of the correct form of oronouns.) In our current system, the choice of words to be inserted in the base structure is controlled by a dictionary and a semantic networM which embodies a limited number of semantic class relationships and case restrictions to orohibit the production of utterances like "The answer saw the angry cookie." The network nodes are chosen at random for each sentence that is generated, but a more powerful semantic component could be used to convey particular "messages," provided only that it could find lexical items to be inserted in the small number of positions required by the base constraints. 3. THE TRANSFORMATIONAL COMPONENT Each transformational rule has a Structural Description, Structural Change, and (optional) Condition; however rules are not marked as optional or obligatory, as they were in traditional transformational theory (e.g. Chomsky (1955)). Obligatory transformations whose structural descriptions were met would apply necessarily; optional transformations would apply at random. Moreover, various versions of transformational grammar have employed transformations as "filters" on possible derivations. In older work (e.g. the so-called "Standard Theory" (ST) of Chomsk7 (1965)) derivations in which a transformation required in a given syntactic configuration failed to apply would block, causing the result to be ruled out as ungrammatical (op. clt., p. 138). In more recent theories (e.g. the "Extended Standard Theory" (EST) of Chomsky (1977) and Chomsky and Lasnik (1977)) all transformations are optional, freely ordered and may apply at random. Those derivations in which a transformation misapplies are ruled out by independent conditions on the structures produced by the operation of the transformational component (Chomsky (1977, p. 76)). These frameworks adopt a "generate and test" approach, wherein the misapplication of transformations during the course of a derivation (e.g. the failure of a required transformation to apply (ST, EST) or the application of a transformation in a prohibited syntactic configuration (EST)) will result in a rejection of this possible derivation. The application of different optional transformations results in the production of a variety of surface forms. There are two reasons why we do not use this generate and test approach. The first is that it is computationally inefficient to allow the transformations to apply at random and to check the result to make sure that it is grammatical. More importantly, we view the transformations as tools to be used by a process outside the sentence generator . itself. That is, an external process determines what the surface syntactic form of a given base structure should be; the transformations are not independent entities which make this decision on their own. 
For example, a focus mechanism should be able to select or prohibit passive sentences, a dialogue mechanism should be able to cause agent-deletion, and so on. In our application, tutorial programs select the characteristics of the sentences to be produced on the basis of the syntactic rule or rules being exercised in the particular tutorial. The Structural Change of each transformation consists of one or more functions, analogous to the transformational elementaries of traditional transformational theory (Chomsky (1955, pp. 402-407, Section 93.1)). We have not adopted the restriction on the Structural Change of transformations proposed by more recent work in generative grammar (e.g. Chomsky (1980, p. 4)) which prohibits "compounding of elementaries"; i.e. which limits the Structural Change of a transformation to a single operation. This would require breaking up many transformations into several transformations, each of which would have to apply in the derivation of a particular syntactic construction, rather than having one transformation that performs the required operations. Inasmuch as we are interested in utilizing the generative capacity of transformational grammar to produce specific constructions, this break up of more general, overarching transformations into smaller, more specific operations is undesirable. The operations that are performed by the rules are a combination of classic transformational operations (substitution, adjunction, deletion, insertion of non-lexical elements such as "there" and "do") and operations that linguists sometimes relegate to the base or post-transformational processes (insertion of pronouns, morphing of inflected forms). By making these operations rule-specific, many related forms can be produced from the same base tree and the control mechanisms outside the generator itself can specify which forms are to be produced. (Figure 3 shows some of the transformations currently in the system.)

SUBJECT-AUX-INVERSION
  SD: (S' (FEATS (TRANS . 1)) COMP (FEATS (WH . +))
        1                     2
       (S N'' TNS (OPT NODE (FEATS (M . +)))))
          3   4    5    6
  SC: (DeleteNode 6) (DeleteNode 5) (LChomsky 2 6) (LChomsky 2 5)
  Condition: [NOT (EQ (QUOTE +) (FeatureValue (QUOTE WH) (RootFeats 4]

RELATIVE-PRONOUN-SPELL-OUT [REPEATABLE]
  SD: (S' XX (N'' N'' (S' (COMP X (N''
        1  2  3   4    5         6
       (FEATS (WH . +)) WH)))))
                        7
  SC: (DeleteSons 6)
      (LSon 6 (if (EQ '+ (GetFeat 6 'HUMAN)) then 'who else 'which))

Figure 3. Sample Transformations

Those transformations which affect the syntactic form of sentences are applied cyclically (see Chomsky (1965, p. 143) for more details). Thus transformations apply from the "bottom up" during the course of a derivation, applying first in the most embedded clause and then working upwards until the matrix clause is reached. Within each cycle the transformations are strictly (and extrinsically) ordered. In addition to the cyclic syntactic transformations there exists a set of post-cyclic transformations, which apply after all the cyclic syntactic transformations have applied. These post-cyclic transformations, whose domain of operation ranges over the entire syntactic tree, supply the correct morphological forms of all lexical and grammatical items. This includes giving the correct plural forms of nouns, the inflected forms of verbs, the proper forms of pronouns (e.g. "he," "she" and "they" in subject position and "him," "her," and "them" in object position), etc. While it has been relatively rare in recent transformational analyses to utilize transformations to effect this type of morphological "spell-out," this mechanism was first proposed in the earliest work in generative grammar (Chomsky (1955)).
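The following schematic sketch suggests how such a post-cyclic spell-out rule might look; the tree encoding and function names are invented for illustration and are not the generator's Interlisp implementation, although the who/which choice follows the RELATIVE-PRONOUN-SPELL-OUT rule of Figure 3.

```python
def relative_pronoun_spell_out(node):
    """Spell out a WH relative pronoun as 'who' or 'which' from a HUMAN feature."""
    if node.get("cat") == "WH":
        node["word"] = "who" if node.get("HUMAN") == "+" else "which"
    for child in node.get("children", []):
        relative_pronoun_spell_out(child)
    return node

tree = {"cat": "S'", "children": [
    {"cat": "N''", "HUMAN": "+",
     "children": [{"cat": "WH", "HUMAN": "+"}]}]}

relative_pronoun_spell_out(tree)
print(tree["children"][0]["children"][0]["word"])   # who
```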
Moreover, recent work by George (1980a; 1980b) and Ingria (in preparation) suggests that this is indeed the correct way of handling such morphological processes. The transformations as a whole are divided up into "families" of related transformations. For example, there is a family of transformations which apply in the derivation of questions (all beginning with the prefix WH-); there is a family of morphing transformations (similarly beginning with the flagged mnemonic prefix MORPH-). These "families" of transformations provide detailed control over the generation process. For example, all transformations of the WH- family will apply to a single syntactic position that may be questioned (e.g. subject, direct object, object of preposition, etc.), resulting in questions of the form "Who died" and "To whom did she speak." This familial characterization of transformations is similar to the classical transformational approach (Chomsky (1955, p. 381, Section 90.1)) wherein families of transformations were first postulated, because of the condition imposed within that framework that each transformation must be a single-valued mapping. Our current sentence generator produces declarative sentences, passive sentences (with and without agent deletion), dative movement sentences, yes-no questions and wh-questions (including multiple-wh questions such as "Who gave what to whom?"), there-insertion sentences, negated sentences (including both contracted and emphatic forms), relative clauses, finite and infinitival complements (e.g., "The teacher wanted Kathy to hurry."), imperative sentences, various complex auxiliaries (progressive, perfective, and modals), predicate adjectives, and predicate nominals. Although not all of these constructions are handled in complete generality, the generator produces a very large and natural subset of English. It is important to note that the interactions among all these transformations have been taken into account, so that any meaningful combination of them will produce a meaningful, grammatical sentence. (Appendix A lists some of the sentences which have been produced by the interaction of various transformations.) In our application, there is a need to generate ungrammatical utterances occasionally (for example, in a tutorial exercising the student's
The first of these is a set of constraints that directs the operation of the base comoonent and indicates which transformations to try. A transformational constraint merely turns a particular transformation on or off. The fact that a transformation is turned on does not guarantee that it will apply; it merely indicates that the Structural Description and Condition of that transformation are to be tried. Base constraints can have either atomic indicators or a list of constraints as their values. For example, the direct object constraint (DIROBJ (PER 3) (NU PL) ...) specifies all the base constraints necessary to produce the N'" subtree for the direct object position in the base structure. There are a number of dependencies which exist among constraints. For example, if the transformational constraint for the passive transformation is turned on, then the base component must be instructed to produce a direct object and to choose a main verb that may be passivized; if the base constraint for a direct object is turned off, then the base constraint for an indirect object must be turned off as well. A data base of implications controls the application of constraints so that whenever a constraint is set (or turned off), the base and/or transformational constraints that its value implies are also set. The notion of a particular syntactic construction transcends the distinction between base and transformational constraints. The "natural" specification of a syntactic construction such as passive or relative clause should be made without requirinq detailed knowledge of the constraints or "their implications. In addition, one might want to request, say, a relative clause on the subject, without specifying whether the target of relativization is to be the subject or object of the embedded clause. We have developed a data base of structures called synspecs (for "syntactic specifications") which embody, at a very high level, the notion of a syntactic construction. These constructions cannot be identified with a single constraint or its implied constraints. (Implications specify necessary dependencies; synspecs specify possible but not necessary choices on the part of the system designers about what combinations of constraints should be invoked under a general name.) A synspec can contain an element of choice. The choice can be made by any user-defined function, though in our practice most of the choices are made at random. One example of this is a synspec called wh-question which decides which of the synspecs that actually set up the constraints for a wh-question (question-on- subject, question-on-object, question-on- dative, etc.) should be used. The synspecs also provide convenient hooks on which to hang other information associated with a syntactic construction: sentences exemplifying the construction, a description of the construction for purposes of documentation, etc. Figure 4 snows how several of the synspecs look when printed for the user. wh-question Compute : (PickOne "(question-on-subject question-on-object question-on-dative)) Description : (This SynSpec will create any one of the questions with WH-words.) second-person-imperative BaseConstraints : ((IMPERATIVE . 2) (TNS)) TransConstraints : ((REQUEST-VOCATIVE-DELETION . +} (REQUEST-EXCLAMATION-INSERTION . +) (REQUEST-YOU-DELETION . +)) Examples : ('Open the door!") Figure 4. Sample SynSpecs Synspecs are invoked through a simple mechanism that is available to the tutorial component of the system. 
Each tutorial specifies the range of constructions relevant to its topic and chooses among them for each sentence that is to be generated. To produce related sentences, the generator is restarted at the transformational component (using the previous base tree) after the synspecs specifying the relationship have been processed.) Just as constraints have implications, so do synspecs. The relationships that hold among synspecs include exclusion (e.g. transitive- sentence excludes predicate-nominal-sentence), requirement (e.g. extraposed-relative requires relative-clause-on-subject or relatlve-clause- on-object), and permission (e.g. predicate- adverb-sentence allows there-insertion). A mechanism similar to the implications for constraints refines a set of candidate synspecs so that the user (or the tutorlals) can make choices which are consistent. Thus the user does not have to know, understand, or remember which combinations of choices are allowed. 156 Once some constraints have been set (either directly or through synspecs), a command can be given to generate a sentence. The generator first assigns values to the constraints that the user did not specify7 the values chosen are guaranteed to be compatible with the previous choices, and the implications of these choices ensure that contradictory specifications cannot be made. Once all constraints have been set, a base tree is generated and saved before the transformations are applied. Because the base structure has been saved, the transformational constraints can be reset and the generator called to start at the transformational component, producing a different surface sentence from the same base tree. As many sentences as are wanted can be produced in this way. 5. DEVELOPMENT TOOLS As one side effect of the development of the generative system, we have built a debugging environment called the syntactic playground in which a user can develop and test various components of the generator. This environment has become more important than the tutorials in testing syntactic hypotheses and exploring the power of the language generator. In it, dictionary entries, transformations, implications and synspecs can be created, edited, and saved using interactive routines that ensure the correct format of those data types. It is also possible here to give commands to activate synspecs; this operation uses exactly the same interface as programs (e.g. tutorials) that use the generator. Commands exist in the playground to set base constraints to specific values and to turn individual transformations on and off without activating the implications of those operations. This allows the system programmer or linguist to have complete control over all aspects of the generation process. Because the full power of the Interlisp system is available to the playground user, the base tree can be edited directly, as can any version of the tree during the derivation process. Transformations can also be "broken" like functions, so that when a transformation is about to be tried the generator goes into a "break" and conducts an interactive dialogue with the user who can control the matching of the Structural Description, examine the result of the match, allow (or not) the application of the Structural Change, edit the transformation and try it again, and perform many of the operations that are available in the general playground. 
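A schematic sketch of the "break" facility just described might look as follows; the hooks and names are invented for illustration, since the actual playground is an Interlisp environment rather than this Python code.

```python
BROKEN = set()        # add a rule name here to "break" it interactively
TRACE = True

def apply_transformation(rule, tree):
    """Try one transformation, with optional trace output and break point."""
    bindings = rule["sd"](tree)                      # match Structural Description
    if TRACE:
        print(f"{rule['name']}: {'applies' if bindings else 'fails'}")
    if bindings and rule["name"] in BROKEN:
        if input(f"Apply {rule['name']}? (y/n) ") != "y":
            return tree                              # user suppressed the rule
    return rule["sc"](tree) if bindings else tree    # apply Structural Change

rule = {"name": "SUBJECT-AUX-INVERSION",
        "sd": lambda t: t.get("question", False),
        "sc": lambda t: {**t, "aux-fronted": True}}

print(apply_transformation(rule, {"question": True}))
```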
In addition to the transformational break package there is a trace option which, if used, prints the constraints selected by the system, the words, and the transformations that are tried as they apply or fail. The playground has proved to be a powerful tool for exploring the interaction of various rules and the efficacy of the whole generation package. 6. CONCLUSION This is the most syntactically powerful generator that we know of. It produces sets of related sentences maintaining detailed knowledge of the choices that have been made and the structure(s) that have been produced. Because the notion of "syntactic construction" is embodied in an appropriately high level of syntactic specification, the generator can be externally controlled. It is fast, efficient, and very easy to modify and maintain; it has been implemented in both Interlisp on a DECSystem-20 and UCSD Pascal on the Cromemco and Apple computers. It forms the core of a set of tutorial programs for English now being used by deaf children in a classroom setting, and thus is one of the first applications of computational linguistics to be used in an actual educational environment. References Bresnan, Joan (1975) "Transformations and Categories in Syntax," in R. Butts and J. Hintikka, eds. Proceedings of the Fifth International Congress of Lo@ic-~- Me--~od~ and Philosophy of Sc~-ence, University of W-~tern Ontario, Lo-ndon,~io. Chapman, Robin S. (1974) The Interpretation of Deviant Sentences ~ ~ : A ~rmational Approach~- Janus Linguarum~ Series Minor, Volume 189, Mouton, The Hague. Chomsky, Noam (1955) The Logical Structure of Linguistic Theory, unpublished manuscript", microfilmed, MIT Libraries, partially published by Plenum Press, New York, 1975. Chomsky, Noam (1965) ~ of the Theory of S~ntax, MIT Press, Cambrldge, Ma'ssa---6~usetts. -- Chomsky, Noam (1970) "Remarks on Nominalization", in R.A. Jacobs and P.S. Rosenbaum, eds., Readings in Transformational Grammar, Ginn--and Co., Waltham, Mass. Chomsky, Noam (1973) "Conditions on Transformations", in S.A. Anderson and P. Kiparsky, eds., A Festschrlft for Morris Halle, Holt, Rinehart--and Winston, New~Yor-~. Chomsky, Noam (1977) "On WR-Movement", in P. Culicover, T. Wasow and A'~'AkmaJian, eds. Formal S~ntax, Academic Press, Inc., New York. Chomsky, Noam (1980) "On Binding," Linguistic Inquiry ll. Chomsky, Noam and Howard Lasnik (1977) "Filters and Control", Linguistic Inquiry 8. Fromkin, Victoria A. (1973) Speech Errors as Linguistic Evidence, Janua Ln~u~, ~-eri~ major, Volume 77, Mouton, The Hague. George, Leland M. (1980a) Analogical Generalization in Natural Langua_qe Syntax, unpublished Doct6~'al Dlsser'~aton,~. George, Leland M. (1980b) Analogical Generalizations of Natural Language Syntax, unpublished manus6"Fip6"7-~. Ingria, Robert (in preparation) Sentential Complementation in Modern Greek, Doctoral Dissertation, MIT. Jackendoff, Ray S. (1974) "Introduction to the X" Convention", distributed by Indiana University Linguistics Club, Bloomington. Jackendoff, Ray S. (1978) X" ~ S ntax: --A Study_ of Phrase Structure, Linguistic Inqulry Monograp-~ 157 ~ MIT Press, Cambridge, Mass. A~endix A: Sample Sentences 6. Superlative Sentences i. Transitive Sentences i. The bullies chased the girl. 2. What did the bullies do to the girl? 3. They chased her. 4. Who chased the girl? 5. The bullies chased her. 6. Who did they chase? 7. Whom did they chase? 8. They chased the girl. 9. How many bullies chased the girl? 10. Eight bullies chased the girl. Ii. How many bullies chased her? 12. 
Eight bullies chased her. 13. Who got chased? 14. The girl got chased. 15. She was chased by the bullies. 16. The girl was being chased by the bullies. 2. Intransitive Sentences i. What did the girl do? 2. She cried. 3. Who cried? 4. The girl cried. 3. Indirect Discourse i. Dan said that the girl is sad. 2. Dan said that she is sad. 3. Who said that the girl is sad? 4. Transitive Sentence with Indirect Object i. The generous boy gave a doll to the girl. 2. The generous boy gave the girl a doll. 3. The girl was given a doll. 4. A doll was given to the girl. 5. Who gave the girl a doll? 6. Who gave what to whom? 7. What did the generous boy give the girl? 8. He gave her a doll. 9. What did the generous boy give to the girl? i0. He gave a doll to her. ii. Who gave a doll to the girl? 12. Who gave the girl a doll? 13. Which boy gave the girl a doll? 14. The generous boy gave her a doll. 15. Which boy gave a doll to the girl? 16. The generous boy gave it to he-. 17. How many dolls did the generous boy give the girl? 18. He gave her one doll. 5. Comparative Sentences !. The soldier was better. 2. The gentleman will be more unhappy. 3. Alicia is hungrier than Jake. 4. The children were angrier than Andy. 158 I. A policeman caught the nicest butterflies. 2. A sheepdog was the sickest pet. 3. The fire chief looks most generous. 4. The smartest man swore. 5. The oldest bulldog broke the dolls. 7. Sentences with Infinitives I. The teacher wanted Kathy to hurry. 2. The gentleman promised the lady to close the door. 3. The girls were hard to ridicule. 8. Relative Clauses I. Whoever embraced the kids will embrace the ladies. 2. The girl who was intelligent cheated the adults. 3. The woman who greased the tricycle mumbled. 4. The teacher who lost the bulldogs swears. 9. Negative Sentences i. Kim won't help. 2. Claire didn't help. 3. The children won't shout. 4. Do not slap the ~oodles. 5. Do not cry. i0. Varieties of Quantlfiers i. No toy breaks. 2. Some excited boys kissed the women. 3. Some hungry people eat. 4. Two men cried. 5. Every new toy broke. 6. Not every man slips. 7. The boy won't give the dogs any oranges. 8. The girl doesn't see any cats. 9. The old men didn't tell the boys any thing. i0. The girl didn't love any body. ii. Varieties of Pronouns i. Bette is the sad one. 2. Gloria is the happy one. 3. Kevin is the saddest. 4. Kathy is the most cheerful. 5. Varda liked the sweet apple. 6. Varda liked the sweet one. 12. T~u~RE Sentences i. There were some toys in the dirt. 2. There were no toys in the dirt. 3. There weren't any toys in the dirt.
TRANSPORTABLE NATURAL-LANGUAGE INTERFACES TO DATABASES by Gary G. Hendrlx and William H. Lewis SRI International 333 Ravenewood Avenue Menlo Park, California 94025 I INTRODUCTION Over the last few years a number of application systems have been constructed that allow users to access databases by posing questions in natural languages, such as English. When used in the restricted domains for which they have been especially designed, these systems have achieved reasonably high levels of performance. Such systems as LADDER [2], PLANES [10], ROBOT [1], and REL [9] require the encoding of knowledge about the domain of application in such constructs as database schemata, lexlcons, pragnmtic grammars, and the llke. The creation of these data structures typically requires considerable effort on the part of a computer professional who has had special training in computational linguistics and the use of databases. Thus, the utility of these systems is severely limited by the high cost involved in developing an interface to any particular database. This paper describes initial work on a methodology for creating natural-language processing capabilities for new domains without the need for intervention by specially trained experts. Our approach is to acquire logical schemata and lexical information through simple interactive dialogues with someone who is familiar with the form and content of the database, but unfamiliar with the technology of natural-language interfaces. To test our approach in an actual computer environment, we have developed a prototype system called TED (Transportable English Datamanager). As a result of our experience with TED. the NL group at SRI is now undertaking the develop=ant of a ~ch more ambitious system based on the sane philosophy [4]. II RESEARCH PROBLEMS Given the demonstrated feasibility of language-access systems, such as LADDER, major research issues to be dealt with in achieving transportable database interfaces include the following: * Information used by transportable systems must be cleanly divided into database- independent and database-dependent portions. * Knowledge representations must be established for the database-dependent part in such a way that their form is fixed and applicable to all databases and their content readily acquirable. * Mechanisms must be developed to enable the system to acquire information about a particular applicationfrom nonlinguists. III THE TED PROTOTYPE We have developed our prototype system (TED) to explore one possible approach to chase problems. In essence, TED is a LADDER-like natural-language processing system for accessing databases, combined with an "automated interface expert" that interviews users to learn the language and logical structure associated with a particular database and that automatically tailors the system for use with the particular application. TED allows users to create, populate, and edit ~heir own new local databases, to describe existing local databases, or even to describe and subsequently access heterogeneous (as in [5]) distributed databases. Most of TED is based on and built from components of LADDER. In particular, TED uses the LIFER parser and its associated support packages [3], the SODA data access planner [5], and the FAM file access manager [6]. All of these support packages are independent of the particular database used. In LADDER, the data structures used by these components ~re hand-generated for s particular database by computer scientists. 
In TED, however, they are created by TED's automated interface expert. Like LADDER, TED uses a pragmatic granmar; but TED's pragmatic gramemr does not make any asstmptlons about the particular database being accessed. It assumes only that interactions with the system will concern data access or update, and that information regarding the particular database will be encoded in data structures of a prescribed form, which are created by the automated interface expert. The executive level of TED accepts three kinds of input: questions stated in English about the data in files that have been previously described to the system; questions posed in the SODA query language; single-~ord commands that ~nltlaCe dialogues with the automated interface expert. zv THE *.Ta~A~ I~r~FAC~ )X~RT A. Philosoph 7 TED's mechanism for acquiring inforaatlon about a particular database application Is to conduct interviews wlth users. For such Intervlews to be successful, The work reported herein was supported by the Advanced Research Projects Agency of the Department of Defense under contracts N00039-79-C-0118 and NOOO39-80-C-O6A5 wlth the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the authors and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency of the U.S. Government. 159 * There must be a range of readily understood questions that elicit all the information needed about a new database. * The questions must be both brief and easy to understand. * The system must appear coherent, ellciting required information in an order comfortable to the user. * The system must provide substantial assistance, when needed, to enable a user to understand the kinds of responses that are expected. All these points cannot be covered herein, but the sample transcript shown at the end of this papert in conjunction with the following discussion, suggests the manner of our approach. B. Strategy A key strateSy of TED is to first acquire information about the structure of files. Because the semantics of files is relatively well understoodt the system thereby lays the foundation for subsequently acquiring information about the linguistic constructions likely to be used in questions about the data contained in the file. One of the single-word co----nds accepted by the TED executive system is the command NEW, which initiates a dialogue prompting the user to supply information about the structure of a new data file. The NEW dialogue allows the user to think of the file as a table of information and asks relatively simple questions about each of the fields (columns) in the file (table). For example, TED asks for the heading names of the columns, for possible synonyms for the heading names, and for information about the types of values (numeric, Boolean, or symbolic) that each column can contain. The heading names generally act like relational nouns, while the information about the type of values in each column provides a clue to the column's semantics. The heading name of a symbolic column tends to he the generic name for the class of objects referred to by the values of that column. Heading names for Boolean columns tend co be the names of properties that database objects can possess. T.f a column contains numbers, thls suggests that there may be some scale wlth associated adjectives of degree. 
To allow the system to answer questions requiring the integration of information from multiple files, the user is also asked about the interconnections between the file currently being defined and other files described previously. C. Examples from a Transcript In the sample transcript at the end of this paper, the user initiates a NEW dialogue at Point A. The automated interface expert then takes the initiative in the conversation, asking first for the name of the new file, then for the names of the file's fields. The file name wlll be used to dlstlngulsh the new file from others during the acquisition process. The field names are entered into the lexicon as the names of attributes and are put on an agenda so that further questions about the fields may be asked subsequently of the user. At this point, TED still does not know what type of objects the data in the new file concern. Thus, as its next task, TED asks for words that might be used as generic names for the subjects of the file. Then, at Point E, TED acquires Information about how to identify one of these subjects co the user and, at Point F, determines what kinds of pronouns might be used to refer to one of the subjects. (As regards ships, TED is fooled, because ships may be referred to by "she.") TED is progra-,~ed wlch the knowledge that the identifier of an object must be some kind of name, rather than a numeric quantity or Boolean value. Thus, TED can assume a priori that the NAME field given in Interaction E is symbolic in nature. At Point G, TED acquires possible synonyms for NAME. TED then cycles through all the other fields, acquiring information about their individual semantics. At Point H, TED asks about the CLASS field, but the user doesn't understand the question. By typing a question eu'rk, the user causes TED to give a more detailed explanation of what it needs. Every question TED asks has at least two levels of explanation that a user may call upon for clarification. For example, the user again has trouble at J, whereupon he receives an extended explanation with an example. See T also. Depending upon whether a field is symbolic, arithnetic or Boolean, TED makes different forms of entries in its lexicon and seeks to acquire different types of information about the field. For example, as at Points J, K and ¥, TED asks whether symbolic field values can be used as modifiers (usually in noun-~oun combinations). For arithmetic fields, TED looks for adjectives associated with scales, as is illustrated by the sequence 0PQR. Once TED has a word such as OLD, it assumes MORE OLD, OLDER and OLDEST may also be used. (GOOD-BETTER-BEST requires special intervention. ) Note the aggressive use of previously acquired information in formulating new questions to the user (as in the use of AGE, and SHIP at Point P). We have found that this aids considerably in keeping the user focused on the current items of interest co the system and helps to keep interactions brief. Once TED has acquired local information about a new file, it seeks to relate it to all known files, including the new file itself. At Points Z through B+, TED discovers chat the *SHIP* file may be Joined with itself. That is, one of the attrlbutes of a ship is yet another ship (the escorted shlp)j which may itself be described in the same file. The need for this information is illustrated by the query the user poses at Point G+. TO better illustrate linkages between files, the transcript includes the acquisition of a second file about ship classes, beginnlng at Point J+. 
Much of thls dialogue is omitted but, aC L÷s TED learns there is a link between the *SHIP* and *CLASS* files. At /4+ it learns the direction of 160 this link; at N+ and O+ it learns the fields upon which the Join must be made; at P+ it learns the attributes inherited through the llnk. This information Is used, for example, In answering the query at S+. TED converts the user's question "What Is the speed of the hoel?" into '~hat is the speed of the class whose CN~ is equal to the CLASS of the hoel?." Of course, the whole purpose of the NEW dialogues is to make it possible for users to ask questions of their databases in English. Examples of English inputs accepted by TED are shown at Points E+ through I+, and S+ and T+ In the transcript. Note the use of noun-noun combinations, superlatives and arithmetic. Although not illustrated, TED also supports all the available LADDER facilities of ellipsis, spelling correction, run-time gram,~r extension end introspection. V THE PRACHATIC GRAMMAR The pragmatic grammar used by TED includes special syntactic/semantic categories that are acquired by the NEW dialogues. In our actual implementation, these have rather awkward names, but they correspond approx/macely to the following: * <GENERIC> is the category for the generic names of the objects in files. Lexlcal properties for this category include the name of the relevant file(s) and the names of the fields that can be used Co identify one of the objects to the user. See transcript Points D and E. * <ID.VALUE> is the category for the identifiers of subjects of individual records (i.e., key-field values). For example, for the *SHIP* file, it contains the values of the NAME field. See transcript Point E. * <MOD.VALUE> is the category for the values of database fields that can serve as modifiers. See Points J and K. * <NUM.ATTP.>, <SYM.ATTR>, and <BOOL.ATTP.> are n,--eric, symbolic and Boolean attributes, respectively. They include the names of all database fields and their synonyms. * <+NUM.ADJ> is the category for adjectives (e.g. OLD) associated with numeric fields. Lexlcal properties include the name of the associated field and flies, as veil as information regarding whether the adjective is associated with greater (as In OLD) or lesser (as in YOUNG) values in the field. See Points P, Q and R. * <COMP.ADJ> and <SUPERLATIVE> are derived fro= <+NUM.ADJ>. Shown below are some illustrative pragmatic production rules for nonlexlcal categories. As in the foregoing examples, these are not exactly the rules used by TED, but they do convey the unCure of the approach. <S> -> <PRESENT> THE <ATTP.> OF <ITEM> what is the age of the reeves HOW <+NUM.ADJ> <BE> <ITEM> how old is the youngest ship <WHDET> <ITEM> <HAVE> <FEATURE> what leahy ships have a doctor <WHDET> <ITEM> <BE> <COMPLEMENT> which ships are older then reeves <PRESENT> -> WHAT <BE> PRINT <ATrR> -> <NUM.ATTR> <SYM.ATTR> <BOOL.ATTK> <ITEM> -> <GENERIC> ships <ID.VALUE> reeves THE <ITEM> the oldest shlp <MOD.VALUE> <ITEM> leahy ships <SUPERLATIVE> <ITEM> fastest ship with • doctor <ITEM> <WITH> <FEATURE> ship with a speed greater than 12 <FEATURE> -> <BOOL.ATTR> doctor / poisonous <NUN.ATTE> <NUM.COMP> <NUMBER> age of 15 <NUM.ATTR.> <NUM.COMP> <ITEM> age greater than reeves <NUM.COMP> -> <COMP.ADJ> THAN OF (GREATER> THAN <COMPLEMENT> -> <COMP.A/kJ> THAN <ITEM> <COMP.ADJ> THAN <NUMBER> These pragmatic Era-mar rules are very much like the ones used in LADDER [2], but they differ from those of LADDER in two critical ways. 
(1) They capture the pragmatics of accessing databases without forcibly £ncludin8 information about the praSmatics of any one particular set of data. (2) They use s~tsct4~/semantic categories that support the processes of accessln8 databases, but that are domsin- independent and easily acquirable. It is worth noting that, even when a psrClcular application requires the introduction of Special- purpose rules, the basic pragmatlc grmamar used by TED provides a starting point from whlch domain- specific features can be added. VI DIRECTIONS FOR FURTHER WORK The TED system represents a first step toward truly portable natural-language interfaces to database systems. TED is only a prototype, however, and --,ch additional work will be required 161 to provide adequate syntactic and conceptual coverage, as well as to increase the ease with which systems may be adapted to new databases. A severe limitation of the current TED system is its restricted range of syntactic coverage. For example, TED deals only with the verbs BE and HAVE, and does not know about units (e.g., the Waddel's age is 15.5, not 15.5 YEARS). To remove this limitation, the SRI NL group is currently adapting Jane Robinson's extensive DIAGRAM grammar {7] for use in a successor Co TED. In preparation for the latter, we are experimenting with verb acquisition dialogues such as the following: > VERB Please conjugate the verb (e.g. fly flew flown) > EARN EARNED EARNED EARN is: 1 intransitive (John dines) 2 transitive (John eats dinner) 3 dicransitive (John cooks Mary dinner) (Choose the most general pattern) > 2 who or what is EARNED? > A SALARY who or what EARNS A SALARY? > AN EMPLOYEE can A SALARY be EARNED by AN EMPLOYEE? > YES can A SALARY EARN? > NO can AN ~dPLOYEE EARN? > NO Ok:, an EMPLOYEE can EARN a SALARY What database field identifies an EMPLOYEE? > NAME What database field identifies a SALARY? > SALARY extensive conceptual and symtacclc coverage continues to pose a challenge to research, a polished version of the TED prototype, even with its limited coverage, would appear to have high potential as a useful tool for data access. KEFER£NCES 1. L.R. Harris, "User Oriented Data Base Query with the ROBOT Natural Language Query System," Proc. Third International Conference o.~n Vet [ Large Data Bases; Tokyo (October 1977). 2. G.G. Hendrix, E. D. Secerdoti, D. Sagalowicz, and J. Slocum, "Developing a Natural Language Interface to Complex Data," ACH Transactions on Database Systems , Vol. 3,--~. 2 (June 1978). 3. G.G. Hendrix, "Human Engineering for Applied Natural Language Processing," Proc. 5th International Joint Conference on Artificial 4. 5. The greatest challenge to extending systems like TED is to increase their conceptual coverage. As pointed out by Tennant [8], umers who are accorded natural-language access co a database 6. expect not only to retrieve information directly stored there, but also co compute "reasonable" derivative information. For example, if a database has the location of two ships, users will expect the system to be able to provide the distance between them--an item of information not directly 7. recorded in the database, but easily computed from the existing data. In general, any system that is tO be widely accepted by users must not only provide access to primary information, but uast also enhance the latter with procedures that 8. calculate secondary attributes from the data actually stored. 
Data enhancement procedures are currently provided by LADDER and a few other hand- built systems, but work is needed now to devise means for allowing system users to specify their own database enhancement functions and to couple 9. these wlth the natural-language component. A second issue associated with conceptual coverage is the ability to access information extrinsic to the database per se, such as where the data are stored and how the fields are defined, as 10. well as information about the status of the query system itself. In summary, systems such as LADDER are of limited utility unless they can be transported to new databases by people with no significant formal training in computer science. Although the development of user-specifiable systems with Intelligence, Cambridge, Massachusetts (August 1977). G. G. Nendrix, D. Sagalowlcz and E. D. Sacerdoti, "Research on Transportable English- Access Hedia to Distributed and Local Data Bases," Proposal ECU 79-I03, Artificial Intelligence Center, SRI International, Menlo Park, California (November 1979). R. C. Moore, "Kandling Complex Queries in a Distributed Data Ease," Technical Note 170, Artificial Intelligence Center, SRI International Menlo Park, California (October 1979). P. Morris and V. Sagalowicz, '~lanaging Network Access to a Distributed Data Base," Proc. Second Serkele~ Workshop on Distributed Data Hana6e~enc and Computer Networks, gerkeley, California ~ y ~ J. J. Robinson, "DIAGRAH: A Gra~aar for Dialogues," Technical Note 205, Artificial Intelligence Center, SRI Intsrnatlonal Menlo Park, California (February 1980). H. Tennant, '~xperience with the Evaluation of Natural Language Question Answerers," Proc% Sixth International Joint Conference on Artificial Intelligence, Tokyo, Japan (August 1979)o F. g. Thompson and B. H. Thompson, "Practical Natural Language Processing: The REL System as Prototype," pp. 109-168, M. Rublnoff and M. C. ¥ovlts, ads., Advances In.Computers 13 (Academic Press, New ¥o~, 1975). D. Waltz, "Natural Language Access to a Large Data Base: An Engineering Approach," Proc. 4th. International Joint Conference on Artificial Intelligence, Tbilisi, USSR, pp. 868-872 (September 1975). 162 e-° *,.4 m ~^ z " ® ~ ~ ~ w-~ ¢: • m *" o . ~ .~ ,~ ..~ .,-*V , .~ ~ ~ ' ; ~ ~ ~.~ ,~'~ ~ ~.~ ~ ~ ~ ~. ~ ~ .----_----__ ------------ ~ ~,~A ~ ~,~^ z t~ Z "~ ~.~ ~,~1 I~ ~ TM : ~ ~ ~ ~^ :~ o s., ~ w v~d ...~ ~ ~ 163 mU = =~ <.= = F- :3 m: = ~0~ ,-, ~ ^L u~a - = ~" < < =~ ~ • J ~. A ° =~ aN °~ u~ 0 0 C "-" o = : ~ ~ =: ,m o" " ! " ~ = ~ ~, ÷ + =~ ~ _= Z='~. =o 164 "~w ZZ ~ • 0 41 ~ ~p a :=~ o- F-, " 8 I ~SX ~ ~ ~ g~ -.. ., m,~ ~ ~,,-I IU u,~ .,c m k ~=. k.. m 4~ = ~o ~ 2 Z X: 4c ,.I Z CM ~ E~ ~J • ° . ~4t ,-44~ G Ic L: ~4t t~ *a .,=4,-4 0 0~*~ 0 ..~.5~ ~ Z=~ g .- ~ 4¢ 41 4c 4c 4t 41 4e 41 4c 4~ 4t aL 41 ~ ~ ~ u~ ® .o=a,,,~ .~5 "Z o ÷ ÷ +, ~ ÷ ÷ 165
Chart Parsing and Rule Schemata in PSG

Henry Thompson
Dept. of Artificial Intelligence, Univ. of Edinburgh, Hope Park Square, Meadow Lane, Edinburgh, EH8 9NW

INTRODUCTION

MCHART is a flexible, modular chart parsing framework I have been developing (in Lisp) at Edinburgh, whose initial design characteristics were largely determined by pedagogical needs. PSG is a grammatical theory developed by Gerald Gazdar at Sussex, in collaboration with others in both the US and Britain, most notably Ivan Sag, Geoff Pullum, and Ewan Klein. It is a notationally rich context free phrase structure grammar, incorporating meta-rules and rule schemata to capture generalisations. (Gazdar 1980a, 1980b, 1981; Gazdar & Sag 1980; Gazdar, Sag, Pullum & Klein to appear)

In this paper I want to describe how I have used MCHART in beginning to construct a parser for grammars expressed in PSG, and how aspects of the chart parsing approach in general and MCHART in particular have made it easy to accommodate two significant aspects of PSG: rule schemata involving variables over categories; and compound category symbols ("slash" categories). To do this I will briefly introduce the basic ideas of chart parsing; describe the salient aspects of MCHART; give an overview of PSG; and finally present the interesting aspects of the parser I am building for PSG using MCHART. Limitations of space, time, and will mean that all of these sections will be brief and sketchy - I hope to produce a much expanded version at a later date.

I. Chart Parsing

The chart parsing idea was originally conceived of by Martin Kay, and subsequently developed and refined by him and Ron Kaplan (Kay 1973, 1977, 1980; Kaplan 1972, 1973a, 1973b). The basic idea builds on the device known as a well formed substring table, and transforms it from a passive repository of achieved results into an active parsing agent.

A well formed substring table can be considered as a directed graph, with each edge representing a node in the analysis of a string. Before any parsing has occurred, all the nodes are (pre)terminal, as in Figure 1.

Figure 1. [The string "Kim saw the child with the glasses", with one preterminal edge per word: N V D N P D N.]

Non-terminal nodes discovered in the course of parsing, by whatever method, are recorded in the WFST by the addition of edges to the graph. For example in Figure 2 we see the edges which might have been added in a parsing of the sentence given in Figure 1.

Figure 2. [The same string, with the non-terminal edges of one complete parse added, including an S edge spanning the whole sentence.]

The advantage of the WFST comes out if we suppose the grammar involved recognises the structural ambiguity of this sentence. If the parsing continued in order to produce the other structure, with the PP attached at the VP level, considerable effort would be saved by the WFST. The subject NP and the PP itself would not need to be reparsed, as they are already in the graph.

What the chart adds to the WFST is the idea of active edges. Where the inactive edges of the WFST (and the chart) represent complete constituents, active edges represent incomplete constituents. Where inactive edges indicate the presence of such and such a constituent, with such and such sub-structure, extending from here to there, active edges indicate a stage in the search for a constituent. As such they record the category of the constituent under construction, its sub-structure as found so far, and some specification of how it may be extended and/or completed.
The fundamental principle of chart parsing, from which all else follows, is keyed by the meeting of active with inactive edges:

The Fundamental Rule
********************
Whenever an active edge A and an inactive edge I meet for the first time, if I satisfies A's conditions for extension, then build a* new edge as follows:
  Its left end is the left end of A
  Its right end is the right end of I
  Its category is the category of A
  Its contents are a function (dependent on the grammatical formalism employed) of the contents of A and the category and contents of I
  It is inactive or active depending on whether this extension completes A or not

Note that neither A nor I is modified by the above process - a completely new edge is constructed, independent of either of them. In the case of A, this may seem surprising and wasteful of space, but in fact it is crucial to dealing properly with structural ambiguity. It guarantees that all parses will be found, independent of the order in which operations are performed. Whenever further inactive edges are added at this point, the continued presence of A, together with the fundamental rule, ensures that alternative extensions of A will be pursued as appropriate.

A short example should make the workings of this principle clear. For the sake of simplicity, the grammar I will use in this and subsequent examples is an unadorned set of context free phrase structure rules, and the structures produced are simple constituent structure trees. Nonetheless as should be clear from what follows, the chart is equally useful for a wide range of grammatical formalisms, including phrase structure rules with features and ATNs.

*In fact depending on formalism more than one new edge may be built - see below.

Figures 3a-3d show the parsing of "the man" by the rule "NP -> D N". In these figures, inactive edges are light lines below the row of vertices, and active edges are heavy lines above the row. Figure 3a simply shows the two inactive edges for the string with form-class information.

Figure 3a. [D[the] and N[man], one inactive edge per word.]

Figure 3b shows the addition of an empty active edge at the left hand end. We will discuss where it comes from in the next section. Its addition to the chart invokes the fundamental rule, with this edge being A and the edge for "the" being I.

Figure 3b. [As 3a, plus the empty active edge NP:D N[] at the left hand end.]

The notation here for the active edges is the category sought, in this case NP, followed by a colon, followed by a list of the categories needed for extension/completion, in this case D followed by N, followed by a bracketed list of sub-constituents, in this case empty. Since the first symbol of the extension specification of A matches the category of I, a new edge is created by the fundamental rule, as shown in Figure 3c.

Figure 3c. [As 3b, plus the new active edge NP:N[D] spanning "the".]

This edge represents a partially completed NP, still needing an N to complete, with a partial structure. Its addition to the chart invokes the fundamental rule again, this time with it as A and the "man" edge as I. Once again the extension condition is met, and a new edge is constructed. This one is inactive however, as nothing more is required to complete it.

Figure 3d. [As 3c, plus the inactive edge NP[D N] spanning "the man".]

The fundamental rule is invoked for the last time, back at the left hand end, because the empty NP edge (active) now meets the complete NP edge (inactive) for the first time, but nothing comes of this as D does not match NP, and so the process comes to a halt with the chart as shown in Figure 3d.
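The fundamental rule and the edge bookkeeping it relies on are easy to state in code. The sketch below is mine, in Python, not MCHART's Lisp; it assumes the unadorned context-free formalism of the example and records sub-structure simply as a tuple of category names:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Edge:
    start: int                 # left-hand vertex
    end: int                   # right-hand vertex
    category: str              # e.g. 'NP'
    needed: Tuple[str, ...]    # categories still required; empty => inactive
    found: Tuple[str, ...]     # sub-constituents gathered so far

    @property
    def inactive(self) -> bool:
        return not self.needed

def fundamental_rule(active: Edge, inactive: Edge) -> List[Edge]:
    """Combine an active and an inactive edge that meet at a vertex.
    Neither input edge is modified; for this simple formalism at most
    one new edge results."""
    if active.end != inactive.start or not inactive.inactive:
        return []
    if active.needed[0] != inactive.category:
        return []                                    # extension condition fails
    return [Edge(active.start, inactive.end, active.category,
                 active.needed[1:],                  # one requirement consumed
                 active.found + (inactive.category,))]

# The parse of "the man" by NP -> D N, as in Figures 3a-3d:
the = Edge(0, 1, 'D', (), ('the',))
man = Edge(1, 2, 'N', (), ('man',))
np_empty = Edge(0, 0, 'NP', ('D', 'N'), ())          # Figure 3b
partial = fundamental_rule(np_empty, the)[0]         # NP:N[D], Figure 3c
complete = fundamental_rule(partial, man)[0]         # NP[D N], Figure 3d
assert complete.inactive and (complete.start, complete.end) == (0, 2)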
The question of where the active edges come from is separate from the basic book-keeping of the fundamental principle. Different rule invocation strategies such as top-down, bottom-up, or left corner are reflected in different conditions for the introduction of empty active edges, different conditions for the introduction of empty active edges. For instance for a top-down invocation strategy, the following rule could be used: Top-down Strategy Rule Whenever an active edge is added to the chart, if the first symbol it needs to extend itself is a non- terminal, add an empty active edge at its right hand end for each rule in the gra-s~=r which expands the needed symbol. With this rule and the fundamental rule in operation, simply adding empty active edges for all rules expanding the distinguished symbol to the left hand end of the chart will provoke the parse. Successful parses are reflected in inactive edges of the correct category spanning the entire chart, once there is no more activity provoked by one or the other of the rules. Bottom-up invocation is equally straight-forward: Bottom-up Strategy Rule Whenever an inactive edge is added to the chart, for all the rules in the grammar whose expansion begins with the this edge's category, add an empty active edge at its left hand end. Note that while this rule is keyed off inactive edges the top-down rule was triggered by active edges being added. Bottom-up says "Who needs what just got built in order to get started", while top-down says "Who can help build what I need to carry on". Bottom-up is slightly simpler, as no additional action is needed to commence the parse beyond simply constructing the initial chart - the texically inspired inactive edges themselves get things moving. A%s~ note that if the grammars to be parsed are left- recursive, then both of these rules need redundancy checks of the form "and no such empty active edge is already in place" added to them. The question of search strategy is independent of the choice of rule invocation strategy. Whether the parse proceeds depth-first, breadth-first, or in some other manner is determined by the order in which edges are added to the chart, and the order in which active- inactive pairs are considered under the fundamental rule. A single action, such as the adding of an edge to the chart, may provoke multiple operations: a number of edge pairs to be processed by the fund=-,~ntal rule, and/ or a number of new edges to be added as a result of some rule invocation strategy. Last in first out processing of such multiple operations will give approximately depth-first behaviour, while first in first out will 8ire approximately breadth-first. More complex strat- egies, including semantically guided search, require more complicated queuing heuristics. The question of what gr~-~-tical formalism is employed is again largely independent of the questions of rule in- vocation and search strategy. St comes into play in two different ways. When the fundamental rule is invoked, it is the details of the particular gr=~-,tical formalism in use which determines the interpretation of the conditions for extension carried in the active edge. The result may be no new edges, if the conditions are not met; one new edge, if they are; or indeed more than one, if the inactive edge allows extension in more than 168 one way. 
This might be the case in an ATN style of grammar, where the active edge specifies its conditions for extension by way of reference to a particular state in the network, which may have more than one out-going arc which can be satisfied by the inactive edge concerned. The other point at which gra~natical formalism is involved is in rule invocation. Once a strategy is chosen, it still remains each time it is invoked to respond to the resulting queries, e.g. "Who needs what just got built in order to get started", in the case of a simple bottom-up strategy. Such a response clearly depends on the details of the gra--~t- ical formalism being employed. Underlying all this flexibility, and making it possible, is the fundamental rule, which ensures that no matter what formalism, search strategy, and rule invocation strategy* are used, every parse will eventually be found, and found only once. II. MCHART In the construction of MCHART, I was principly motivated by a desire to preserve what I see as the principal virtues of the chart parsing approach, namely the simplicity and power of its fundamental principle, and the clear separation it makes between issues of grammatical formalism, search strategy, and rule invocation strategy. This led to a carefully modularised program, whose structure reflects that separation. Where a choice has had to be made between clarity and efficiency, clarity has been preferred. This was done both in recognition of the system's expected role in teaching, and in the hopes that it can be easily adopted as the basis for many diverse investi- gations, with as few efficiency-motivated hidden biases as possible. The core of the system is quite small. It defines the data structures for edges and verteces, and organises the construction of the initial char~ and the printing of results. Three distinct interfaces are provided which the user manipulates to create the particular parser he wants: A signal table for determining rule invocation strategy; a functional interface for determining gr=-s, atical formalism; and a multi-level agenda for determining search strategy. The core system raises a signal whenever something happens to which a rule invocation strategy might be sensitive, namely the beginning and end of parsing, and the adding of active or inactive edges to the chart. To implement a particular strategy, the user specifies response to some or all of these. For example a bottom-up strategy would respond to the signal Adding ~nactiveEdge, but ignore the others; while a top-down strategy would need to respond to both AddingActiveEdge and StartParse. There is also a signal for each new active-inactive pair, to which the user may specify a response. Row- ever the system provides a default, which involves the afore-mentioned functional interface. To take advantage of this, the user must define two functions. The first, called ToExtend, when given an active edge and an inactive edge, must return a set of 'rules' which might be used to extend the one over the o~her. Taken together, an active edge, an inactive edge, and such a rule are called a configuration. The other function r.he user must define, called RunConfig, cakes a config- uration as argument and is responsible for implementing the fundamental principle, by building a new edge if the rule applies. For use here and in responses to signals, the system provides the function NewEdge, by which new edges may be handed over for addition to the chart. 
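To illustrate how small the user's contribution can be, here is a Python analogue of the ToExtend / RunConfig / NewEdge interface for the plain context-free formalism of the earlier example. MCHART itself is in Lisp, and the edge layout and signatures below are my assumptions, not the system's actual definitions:

from typing import Callable, List, NamedTuple, Tuple

class Rule(NamedTuple):
    lhs: str
    rhs: Tuple[str, ...]

class Edge(NamedTuple):
    start: int
    end: int
    rule: Rule
    found: Tuple[str, ...]     # constituents found so far; all found => inactive

class Config(NamedTuple):      # a configuration: active edge, inactive edge, rule
    active: Edge
    inactive: Edge
    rule: Rule

def to_extend(active: Edge, inactive: Edge) -> List[Rule]:
    """User-supplied: return the 'rules' that might extend `active` over `inactive`.
    For plain CF rules the only candidate is the rule the active edge itself carries."""
    needed = active.rule.rhs[len(active.found)]
    return [active.rule] if needed == inactive.rule.lhs else []

def run_config(cfg: Config, new_edge: Callable[[Edge], None]) -> None:
    """User-supplied: apply the fundamental principle to one configuration,
    handing any resulting edge back to the core system via new_edge (cf. NewEdge)."""
    a, i = cfg.active, cfg.inactive
    new_edge(Edge(a.start, i.end, a.rule, a.found + (i.rule.lhs,)))

# e.g., extending the empty NP edge over an inactive D edge for "the":
np_rule = Rule('NP', ('D', 'N'))
np0 = Edge(0, 0, np_rule, ())
the = Edge(0, 1, Rule('D', ('the',)), ('the',))
for r in to_extend(np0, the):
    run_config(Config(np0, the, r), new_edge=print)

A different grammatical formalism changes only these two functions (and what is stored on edges); the chart, the agenda, and the signal table are untouched.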
*Defective invocation strategies, which never invoke a needed rule, or invoke it more than once at the same place, can of course vitiate this guarantee. The system is embedded within a multi-level agenda mechanism. The adding of edges to the chart, the running of configurations, the raising of signals are all controllable by this mechanism. The user may specify what priority level each such action is to be queued at, and may also specify what ordering regime is to be applied to each queue. LIFO and FIFO are provided as default options by the system. Anything more complicated must be functionally specified by the user. More detailed specifications would be out of place in this context, but I hope enough has been said to give a good idea of how I have gone about implementing the chart in a clean and modular way. Hardcopy and/or machine-readable versions of the source code and a few illustrative examples of use are available atcost from me to those who are interested. The system is written in ELISP, a local superset of Rutgers Lisp which is very close to Interlisp. A strenuous effort has been made to produce a relatively dialect neutral, transparen~ implementation, end as the core system is only a few pages long, translation to other versions of Lisp should not be difficult. III. PSG Into the vacuum left by the degeneration into self- referential sterility of transformational-generative grau~ar have sprung a host of non-transformational gr*--,-tical theories. PSG, as developed by Gerald Gazdar and colleagues, is one of the most attractive of these. It combines a simplicity and elegance of formal apparatus with a demonstrably broad and arguably insightful coverage of English 8r---~tical phenomena (Gazdar 198Oa, 198Ob, forthcoming; Gazdar & Sag 1980; Gazdar, Pullum & Sag 1980; Gazdar, Klein, Pullum & Sag forthcoming). It starts with context-free phrase structure rules, with a two bar X-bar category system, under a node admissability interpretation. Four additional notational devices increase the expressive power of the formalism without changing its formal power - features, meta-rules, rule schemata, and cumpound categories. The addition of feature marking from a finite set to the category labels gives a large but finite inventory of node labels. Meta-rules are pattern-based rewrite rules which provide for the convenient expression of a class of syntactic regularities e.g. passive and subject-auxilliary inversion. They can be interpreted as inductive clauses in the definition of the grammar, saying effectively "For every rule in the grammar of such and such a form, add another of such and such a form". Provided it does not generate infinite sets of rules, such a device does not change the formal power of the system. Rule schemata are another notational convenience, which use variables over categories (and features) to express compactly a large (but finite) family of rules. For instance, the rule {S -> NP[PN x] VP[FN x]}*~ where PN is the person-number feature and x is a variable, is a compact expression of the requirement that subject and verb(-phrase) agree in person-number, and {x -> x and x} might be a simplified rule for handling conjunction. The final device in the system is a compounding of the category system, designed to capture facts about unbounded dependencies. This device augments the gr-,,~-r with a set of derived categories of the form x/y, for all categories x and y in the unaugmented graIEnar, together with a set of derived rules for expanding these 'slash' categories. 
Such a category can be interpreted as 'an x with a y **Here and subsequently I use old-style category labels as notational equivalents of their X-bar versions. 169 missing from it'. The expansions for such a category are all the expansions for x, with the '/y' applied to every element on the right hand sides thereof. Thus if {A -~ B C} & {A -> D}, then {A/C -> B/C C}, {A/C -> B C/C}, and {A/C -> D/C}. In addition x/x always expands, inter alia, to null. Given this addition to the gr=-,-=r, we can write rules like {NP -> NP. ~hat S/NP} for relative clauses. If we combine this device with variables over categories, we can write (over- simplified) rules like {S -> x S/x} for topicalization, and (x -> whatever x S/x} for free relatives. This approach to unbounded dependencies combines nicely with the rule schema given above for conjunction to account for the so-called 'across the board' deletion facts. This would claim that e.g. 'the man that Kim saw and Robin gave the book to' is OK because what is conjoined is two S/NPs, while e.g. 'the man that Kim saw and Robin gave the book to Leslie' is not OK because what is conjoined is an S/NP and an S, for which there is no rule. It is of course impossible to give a satisfactory s,,m-~ry of an entire formalism in such a short space, but I hope a sufficient impression will have been conveyed by the foregoing to make what follows intell- igible. The interested reader is referred to the references given above for a full description of PSG • by its author(s). IV. Parsing PSG using MCHAET What with rule schemata and mete-rules, a relatively small amount of linguistic work within the PSG frame- work can lead to a large number of rules. Mechanical assistance is clearly needed to help the linguist manage his gr~,~r, and to tell him what he's got at any given point. Al~hough I am not convinced there is any theoretical significance to the difference in formal complexity and power between context free gr=,--~rs and transbrmational gr=-~.=rs, the methodologic- al significance is clear and uncontestable. Computa- tional tools for manipulating context free gr=mm-rs are readily available and relatively well understood. On being introduced to PSG, and being impressed by its potential, it therefore seemed to me altogether appropriate to put the resources of computational linguistics at the service of the theoretical linguist. A Parser, and eventually a directed generator, for PSG would be of obvious use to the linguists working within its framework. Thus my goal in building a parser for PSG is to serve the linguist - ~o provide a tool which allows the expression and manipulation of the gr~mm~r in terms determined by the linguist for linguistic reasons. The goal is not an analogue or "functionally equivalent" system, but one which actually takes the linguists' rules and uses them to parse (and eventually generate). MCHART has proved to be an exceptionally effective basis for the construction of such a system. Its generality and flexibility have allowed me to implement the basic formal devices of PSG in a non ad-hoc way, which I hope will allow me to meet my goal of providing a system for linguists to use in' their day to day work, without requiring them ~o be wizard prograemers first. Of the four sspects of PSG discussed above, it is rule schemata and slash categories which are of most interest. I intend to handle mete-rules by simply closing the gr=mm=r under the meta-rules ahead of time. 
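The derived-rule construction just described is mechanical enough to sketch directly. The following Python fragment is my own illustration, not part of the PSG papers or of MCHART, and it ignores the restrictions on which categories may actually host a slash:

from typing import Dict, List, Tuple

Grammar = Dict[str, List[Tuple[str, ...]]]      # category -> list of right-hand sides

def derive_slash_rules(grammar: Grammar, categories: List[str]) -> Grammar:
    derived: Grammar = {}
    for y in categories:                         # y is the 'missing' category
        for x, expansions in grammar.items():
            for rhs in expansions:
                for i, elem in enumerate(rhs):
                    if elem in grammar:          # push the slash into a non-terminal
                        new_rhs = rhs[:i] + (elem + "/" + y,) + rhs[i + 1:]
                        derived.setdefault(x + "/" + y, []).append(new_rhs)
        derived.setdefault(y + "/" + y, []).append(())   # x/x may expand to null (a 'trace')
    return derived

base = {"A": [("B", "C"), ("D",)], "B": [("b",)], "C": [("c",)], "D": [("d",)]}
print(derive_slash_rules(base, ["C"])["A/C"])
# [('B/C', 'C'), ('B', 'C/C'), ('D/C',)]

The output reproduces the example above: from {A -> B C} and {A -> D} we obtain {A/C -> B/C C}, {A/C -> B C/C}, and {A/C -> D/C}, plus the null expansion for C/C.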
Feature checking is also straight-forward, and in what follows I will ignore features in the interests of simplicity of exposition. Let us first consider rule schemata. How are we to deal with a rule with a variable over categories? If we are following a ~op down rule invocation strategy, serious inefficiencies will result, whether the variable is on the left or right hand sides. A rule with a variable on the left hand side will be invoked by every active edge which needs a non-terminal to extend itself, and a variable on the right hand side of a rule will invoke every rule in the gr=---ar~ Fortunately, things are much better under a bottom up strategy. I had already determined to use a bottom up approach, because various characteristics of PSG strongly suggested that, with careful indexing of rules, this would mitigate somewhat the effect of having a very large number of rules.* Suppose that every rule schema begins with** at least one non-variable element, off which it which it can be indexed. Then at some point an active edge will be added to the chart, needing a variable category to be extended. If whenever the fundamental rule is applied to this edge and an inactive edge, this variable is instantiaEed throughout the rule as the category of that inactive edge, then the right thing will happen. The exact locus of implementation is the aforementioned function ToEx~end. To implement rule schemaEa, instead of simply extracting the rule from the active edge and returning it, it must first check to see if the right hand side of the rule begins with a variable. If so, it returns a copy of the rule, with the variable replaced by the category of the inactive edge throughout. In a bottom up context, ~his approach together with the fundamental rule means that all and only the plausible values for the variable will be tried. The following example should make this clear. Suppose we have a rule schema for english conjunction as follows: {x -> both x and x}#, and noun phrase rules including {NP -> Det N}, {NP -> Propn}, {Den -> NP 's}, where we assume that the possessive will get an edge of its own. Then this is a sketch of how "both Kim's and Robin's hats" would be parsed as an NP. Figure 4a shows a part of the chart, with the lexical edges, as well as three active edges. Fiaure ~a. *A very high proportion of PSG rules contain at least one (pre)terminal. The chart will support bi- directional processing, so running bottom up all such rules can be indexed off a preterminal, whether it is first on the right hand side or not. For example a rule like {NT -> NT pt NT} could be indexed off p~, first looking leftwards to find the first NT, then rightwards for ~he other. Preliminary results suggest that this approach will eliminate a great deal of wasted effort. **In fact given the hi-directional approach, as long as a non-variable is contained anywhere in the rule we are alright. If we assume that the root nature of ~opical- isation is reflected by the prese~in the schema given above of some beginning of sentence marker, ~hen this stipulation is true of all schemata proposed to date. #This rule is undoubtedly wrong. I am using it here and subsequently to have a rule which is indexed by its first element. The hi-directional approach obviates the necessity for this, but it would obscure the point I am trying to make to have to explain this in detail. 170 Edge 1 is completely empty, and was inserted because the conjunction rule was triggered bottom up off the word "both". 
Edge 2 follows from edge 1 by the fundamental rule. It is the crucial edge for what fo~lows, for ~he next thing it needs is a variable. Thus when it is added to the chart, and ToExtend is called on it and the Fropn edge, the rule returned is effectively {Fropn:Fropn and Propn [both]}, which is the result of substituting Propn for x throughout the rule in edge 2. This instantiated rule is immediately satisfied, leading to the addition of edge 3. No further progress will occur on this path, however, as edge 3 needs "and" to be extended. ,~o, \ /~/.ur~J ) 5 (,r.c.i Figure ~b. Figure 4B shows what happens when at some later point bottom up processing adds edge 4, since a Propn constit- utes an NP. Once again the fundamental rule will be invoked, and ToExtend will be called on edge 2 and this new NP edge. The resulting instantiated rule is {NP:NP and NP [both]}, which is immediately satisfied, resulting in edge 5. But this path is also futile, as again an "and" is required. oe-'W/rk~'~ 1 Figure 4c. Finally Figure 4c shows what happens when further bottom up invocation causes edge 6 to be built - a determiner composed of an NP and a possessive 's. Again the fundamental rule will call ToExtend, this time on edge 2 and this new Det edge. The resulting instantiated rule is {Det:Det and Det [both]}, which is immediately satisfied, resulting in edge 7. From this point it is clear sailing. The "and" will he consumed, and then "Robin's" as a determ/ner, with the end result being an inactive edge for a compound determiner spanning "both Kim's and Robin 's", which will in turn be incorporated into the con~plete NP. The way in which the fundamental rule, bottom up invoca- tion, and the generalised ToExtend interact to implement variables over categories is elegant and effective. Very little effort is wasted, in the example edges 3 and 5, but these might in fact be needed if the clause con- tinued in other ways. The particular value of this implementation is that it is not restricted to one part- icular rule schema. With this facility added, the grazmaar writer is free to add schemata to his gra"m~r, and the system will accommodate them without any addition- al effort. Slash categories are another matter. We could just treat them in the same way we do meta-rules. This would mean augmenting the grammar with all the rules formable under the principles described in the preceding section on PSG. Although this would probably work (there are some issues of ordering with respect to ordinary meta-rules which are not altogether clear to me), it would lead to considerable inefficiency given our bottom up assu~tion. The parsing of as simple a sentence as "Kim met Robin" would involve the useless invocation of many slash category expanding rules, and a ntanber of useless constituents would actually be found, including two S/NPs, and a VP/NP. What we would like to do is invoke these rules top down. After all, if there is a slash category in a parse, there must be a "linking" rule, such as the relative clause rule mention- ed above, which expands a non slash category in terms of inter alia a slash category. Once again we can assume that bottom up processing will eventually invoke this linking rule, and carry it forward until what is needed is the slash category. At this point we simply run top down on the slash category. MCHAKT allows us to implement this mixed initiative approach quite easily. 
In addition to responding to the AddinglnactiveEdge signal to implement the bottom up rule, we also field the AddingActiveEdge signal and act if and only if what is needed is a slash category. If it is we add active edges for just those rules generated by the slashing process for the particular slash category which is needed. In the particular case where what is needed is x/x for some category x, an e~ty inactive edge is built as well. For instance in parsing the NP "the song that Kim sang", once the relative dause rule gets to the point of needing an S/NP, various edges will be built, including one expanding S/NP as NP followed by VP/NP. This will consume "Ki~' as NP, and then be looking for VP/NP. This will in turn be handled top down, with an edge added looking for VP/NP as V followed by NP/NP among others. "sang" is the V, and NP/NP provokes top down action for the last time, this time simply building an e~ty inactive edge (aka trace). The nice thing about this approach is that it is simply additive. We take the system as it was, and without modifying anything already in place, simply add this extra capacity by responding to a previously ignored signal. Alas things aren't quite that simple. Our implementa- tions of rule schemata and slash categories each work fine independently. Unfortunately they do not combine effectively. NPs like "the song that both Robin wrote and Kim sang" will not be parsed. This is unfortunate indeed, as it was just to account for coordination facts with respect to slashed categories that these devices were incorporated into PSG in the form they have. The basic problem is that in our implementation of rule schemata, we made crucial use of the fact that everything ran bottom up, while in our implementation of slash categories we introduced some things which ran top down. The most straight-forward solution to the problem lies in the conditions for the top down invocation of rules expanding slash categories. We need to respond not just to overt slash categories, but also to variables. After all, somebody looking for x m/ght be looking for y/z, and so the slash category mechanism should respond to active edges needing variable categories a8 well as to those needing explicit slash categories. In that case all possible slash category expanding rules must be invoked. This is not wonderful, but it's not as bad as it might at first appear. Most variables in rule schemata are constrained to range over a limited set of categories. There are also constraints on what slash categories are actually possible. Thus relatively few schemata will actually invoke the full range of slash category rules, and the number of such rules will not be too great either. Although some effort will certainly he wasted, it will still be much less than would have been by the brute force method of simply including the 171 slash category rules in ~he Era,--~ar directly. One might hope to use the left context to further con- strain the expansion of variables to slash categories, but orderin E problems, as well as the fact that the linking rule may be arbitrarily far from the schema, as in e.g. "the son E that Rim wrote Leslie arranged Robin conducted and I sanE" limit the effectiveness of such an appro&ch. I trust this little exercise has illustrated well both the benefits and the drawbacks of a mixed initiative invocation strategy. 
It allows you to tailor the invocation of groups of rules in appropriate ways, but it does not guarantee that the result will not either under-parse, as in this case, or indeed over-parse. The solution in this case is a principled one, stemming as it does from an analysis of the mismatch of assumpt- ions between the bottom up and top down parts of the system. V. Conclusion So far I have been encouraged by the ease with which I have been able to implement the various PSG devices within the MCHART framework. Each such device has required a separate implementation, but taken together the result is fully general. Unless the PSG frame- work itself changes, no further progr=-ming is required. The linguist may now freely add, modify or remove rules, meta-rules, and schemata, and the system's behaviour will faithfully reflect these changes without further ado. And if details of the fra~aework do change, the effort involved to track them will be manageable, owin E to the modularity of the MCHAET implementation. I feel strongly that the use of a flexible and general base such as MCHART for the system, as opposed to custom building a PSG parser from scratch, has been very much worth while. The fact that the resulting system wears its structure on its sleeve, as it were, is easily explained and (I hope) understood, and easily adapted, more than offsets the possible loss of efficiency involved. The reinvention of the wheel is a sin whose denuncia- tions in this field are exceeded in number only by its instances. I am certainly no lees guilty than most in ~his regard. None the less I venture to hope that for many aspects of parsing, a certain amount of the work simply need not be redone any more. The basic concept ua~ framework of the chart parsing approach seems to me ideally suited as the startin E point for much of the discussion that goes on in the field. A wider recognition of this, and the wide adoption of, if not a particular program such as MCHART, which is too much to expect, then at least of the basic chart parsing approach, would improve co,~unications in the field tremendously, if nothing else. The direct comparison of results, the effective evaluation of claims about efficiency, degrees of (near) determinism, et~ would be so ,-,ch easier. The chart also provides to my mind a very useful tool in teaching, allowing as it does the exemplification of so many of the crucial issues within the same framework. Try it, you might like it. In the same polemical vein, I would also encourage more cooperation on projects of this sort between theoretical and computational linguists. Our technology can be of considerable assisiance in the enterprise of grammar development and evaluation. There are plenty of other non-transformational frameworks besides PSG which could use support similar to that which I am trying to provide. The benefit is not just to the linguist - • ith a little luck in a few years I should have the broadest coverage parser the world has yet seen, because all these l%nguists will ~ave been usiq8 my system to exten~ t~&pir ~r=-,-=r. Whether I will actually be able to make any use of the result is adm/ttedly less than clear, but after all, getting there is half the fun. VI. References Gazdar, G.J.M. (1980a) A cross-categorial semantics for coordination. Linguistics & Philosophy 3, 407-409. (1980b) Unbounded dependencies and co- ordinate structure. To appear in Linguistic In~uir~ ii. (1981) Phrase Structure Gr=mm-r. To appear in P. Jacobson and G.K. Pullum (eds.) 
The nature of syntactic representation.
Gazdar, G.J.M., G.K. Pullum, & I. Sag (1980) A Phrase Structure Grammar of the English Auxiliary System. To appear in F. Heny (ed.) Proceedings of the Fourth Groningen Round Table.
Gazdar, G.J.M., G.K. Pullum, I. Sag, & E.H. Klein (to appear) English Grammar.
Kaplan, R.M. (1972) Augmented transition networks as psychological models of sentence comprehension. Artificial Intelligence 3, 77-100.
Kaplan, R.M. (1973a) A General Syntactic Processor. In Rustin (ed.) Natural Language Processing. Algorithmics Press, N.Y.
Kaplan, R.M. (1973b) A multi-processing approach to natural language. In Proceedings of the First National Computer Conference. AFIPS Press, Montvale, N.J.
Kay, M. (1973) The MIND System. In Rustin (ed.) Natural Language Processing. Algorithmics Press, N.Y.
Kay, M. (1977) Morphological and syntactic analysis. In A. Zampolli (ed.) Syntactic Structures Processing. North Holland.
Kay, M. (1980) Algorithm Schemata and Data Structures in Syntactic Processing. To appear in the proceedings of the Nobel Symposium on Text Processing 1980. Also CSL-80-12, Xerox PARC, Palo Alto, CA.
PERFORMANCE COMPARISON OF COMPONENT ALGORITHMS FOR THE PHONEMICIZATION OF ORTHOGRAPHY

Jared Bernstein, Telesensory Speech Systems, Palo Alto, CA 94304
Larry Nessly, University of North Carolina, Chapel Hill, NC 27514

A system for converting English text into synthetic speech can be divided into two processes that operate in series: 1) a text-to-phoneme converter, and 2) a phonemic-input speech synthesizer. The conversion of orthographic text into a phonemic form may itself comprise several processes in series, for instance, formatting text to expand abbreviations and non-alphabetic expressions, parsing and word class determination, segmental phonemicization of words, word and clause level stress assignment, word internal and word boundary allophonic adjustments, and duration and fundamental frequency settings for phonological units.

Comparing the accuracy of different algorithms for text-to-phoneme conversion is often difficult because authors measure and report system performance in incommensurable ways. Furthermore, comparison of the output speech from two complete systems may not always provide a good test of the performance of the corresponding component algorithms in the two systems, because radical performance differences in other components can obscure small differences in the components of interest. The only reported direct comparison of two complete text-to-speech systems (MITALK and TSI's TTS-X) was conducted by Bernstein and Pisoni (1980). This paper reports one study that compared two algorithms for automatic segmental phonemicization of words, and a second study that compared two algorithms for automatic assignment of lexical stress.

Only three systems for text-to-phoneme conversion have been reported in detail: McIlroy's (1974) Votrax driver, Hunnicutt's (1976) rules for the MITALK system, and the NRL rules developed by Elovitz and associates (1976). Liberman (1979), Hertz (1981), and Hunnicutt (1980) have described more recent systems, but have not published rule sets.

One fairly standard approach to automatic phonemicization of words has the following component parts:

    input: "whoever"
      LEXICON
      AFFIX STRIPPER
      LETTER TO SOUND
      LEXICAL STRESS
      ALLOPHONICS
    output: /huwεvɚ/

Several research systems are of this general design, including Allen's MITALK system, the TTS-X prototype at Telesensory Systems, and Liberman's proper name phonemicizer. The most popular text-to-phoneme design is the NRL approach, which has only two components, of which only the first is presented in detail and evaluated by Elovitz. The original NRL system is:

    input: "word"
      LETTER TO SOUND (including some whole morphemes)
      LEXICAL STRESS
    output: /wɝd/

The very great advantage of the NRL approach is the unified treatment of letter sequences, affixes, and whole words. There is exactly one pass through a word, left to right, in which the maximum string starting with the leftmost unphonemicized character is matched. These strings are sometimes whole words, sometimes affixes, and sometimes consonant or vowel sequences or word fragments like "BUIL". The main constraint of the system is its greatest attraction: the unity and simplicity of the code that scans the word and accesses a single table of letter strings. In contrast to this, the MITALK system, for instance, has one module and associated table structure for lexical decomposition of whole words, another module for stripping common affixes, and a third module for translating consonant and vowel sequences that remain in the pseudo-root of the word.
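To make the single-pass organisation concrete, here is a minimal Python sketch of an NRL-style scan. The tiny table, the transcriptions in it, and the omission of the left- and right-context conditions carried by the real NRL rules are all simplifications of mine:

table = {            # orthographic string -> phonemes (illustrative only)
    "buil": "b ih l",
    "one":  "w ah n",
    "th":   "dh",
    "t":    "t",
    "e":    "eh",
    # ... a real rule set is far larger and conditions matches on context
}

def nrl_pass(word: str) -> str:
    phonemes, i = [], 0
    while i < len(word):
        for length in range(len(word) - i, 0, -1):     # longest match first
            chunk = word[i:i + length]
            if chunk in table:
                phonemes.append(table[chunk])
                i += length
                break
        else:
            i += 1                                      # unmatched letter: skip it
    return " ".join(phonemes)

print(nrl_pass("built"))    # -> "b ih l t"

Whole words, affixes, and fragments like "BUIL" all live in the same table, which is exactly the unity the NRL design trades against MITALK's separate decomposition, affix-stripping, and letter-to-sound modules.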
STUDY ONE Study One reports a comparison of two routines for translating orthographic letters into segmental phonemes: Hunnicutt@TSI and NRL@DEC. Hunnicutt@TSI is the affix stripper and letter to sound rules as dlacribed in AJCL Microfiche 57, and implemented in MACRO-11 in Telesensory Systems' TTS-X prototype text-to-speech system. Hunnlcutt's system was modified only slightly in translation, and about 20 rules were added. The system starts from the right end of the word and identifies as many suffixes as it can from a table of about 140 suffixes, proceeding toward the beginning of the word until either the remainder (pseudo-root) of the word has no vowel or fewer than three letters, or no more suffixes can be matched. Next, a similar proceedure works from the beginning of the word, matching as many prefixes as it can from • a table of about 40 prefixes. Finally, the pseudo-root of the word is scanned left to right twice, once translating the consonants, and next translating the vowels. NRL@DEC is a system implemented by Martin Minnow at Digital Equtptment Corp. The whole system is somewhat more elaborate that the original NRL system, but the letter to sound module and its mode of operation are basically as described by Elovitz et alla, with 20 or 30 rules added. The NRL rules include about 60 very common whole words, as well as about 25 rules that handle various environments for three prefixes and fifteen suffixes. A set of 865 words was processed both by the Hunnlcutt@TSI affix stripper and letter to sound rules, and by the NRL@DEC letter to sound rules including the affix rules and the word fragments. The 865 words comprised approximately every fiftieth word of the Brown Corpus (Kucera & Francis, 1967) in frequency order, starting from about the 400th most frequent word: "position." The lexicon of the TSI system was disabled, and none of the whole words in the NRL rules was in the set of 865. Since the output from both subsystems was tapped before stress assignment, vowel reduction, and any allophonics were performed, the criterion of correctness was "does this phonemlcization represent any acceptable pronunciation of the spelled word, assuming one can assign stress correctly and then reduce vowels ~ppropriately." Thus, a phonemlcization consistent wlth any possible word class for that spelling, or any 'regular' regional pronunciation was to be accepted. Three judges (two phonetlcians and a phonologist) were given printed copies of the two resulting phonemic transcriptions; both were in fairly transparent broad phonemic form. The judges chose among three possible responses to each word: 1 = correct; .5 = close or questionable; and 0 = wrong. Cross judge consistency can be seen from the bimodal distribution of summed scores in Figure I. Fla I E t Fll4~C ,~fm,~,. £ o, .¢ ,LJ m 7b b7 o s-t. 6OI 137 0 .~F- t. t.5" ~t --~..W z.,/¥ I ! 3 20 Another, more diagnostic way to view the results is to present the number of words that fall into each cell of a 2X2 grid formed by the Hunnlcutt@TSl rating vs. the NRL@DEC rating, as shown in Figure 2. Figure 2 omits the 26 words that had a summed score of 1.5 for either of the two letter to sound systems. FIGURE 2 <1.5 Hunnicutt@TSI >1.5 HRL@DEC <1.5 >1.5 [a ]b 127 I 90 l 69 ) 553 If the rule sets were equivalent, the grid would have zeroes in cells b and c. If one rule set were a super-set of the other, you would get a zero in cell b or cell c, but not both. Most of the 553 words in cell d are regular, or else are common exceptions (like "built"). 
Most of the 127 words in cell a are obviously exceptional (e.g. "minute, honor, one, two"). Examination of the 159 words distributed between cells b and c yields the payoff. Of the 69 words that Hunnicutt@TSI got right and NRL@DEC missed, nearly half are correct by virtue of the extensive affix stripping in Hunnicutt's algorithm. Among these 69 words in cell c are "mobile, naval, wallace, likened, coworkers, & reenacted." Of the 90 words that NRL@DEC got right and Hunnicutt@TSl got wrong, only about 15 are definitely due to NRL's word fragment rules. Six of the 90 words are in cell d just because NRL does not Strip suffixes the way that Hunnicutt's rules do. These six words are "november, visited, preferably, presidency, september, & oven." In general, both algorithms get about 25% wrong on this lexically flat sample of 865 word types. About 15~ of the words are incorectly phonemicized by both subsystems. This might suggest that 15~ wrong may be a state of the art performance level for segmental phonemicization of word types by sets of 400 rules. STUDY TWO Study Two compared the performance of two algorithms for assignment of lexical stress to words. Both of the algorithms were coded in MACRO-It and ran in different versions of TSl's TTS-X prototype text-to-speech system. The first algorithm is Hunnlcutt's lexical stresser, which is described in detail in AjCL Microfiche 57. Hunnicutt's algorithm is an adaptation of Halle's cyclic stress rules for English. The adaptations include adjustments for the less specified input to the rules (e.g. the part of speech of the root is unknown), and the number of stress levels specified in the output is reduced, presumably because the Klatt synthesizer it was designed to drive only used two stress levels. Hunnicutt also added stress rules that depended on the occurance of certain classes of suffixes. Hunnicutt's rules require several pointers and a suffix table, they sometimes pass through a word several times in the manner of Chomsky & Halle's (1968) rules, and they occupy about 3K bytes of executable code in their TSI version. The second algorithm is a simplified version of a stress rule proposed in Hill & Nessly (1973). We will refer to this rule as Nessly's default, since it is the default case of Nessly's full stress algorithm. Nessly's default stress is quite similar to Latin stress and to the "first approximation" stress rule discussed twoard the beginning of Chomsky & Halle's chapter three (1968, pp.69-77). The main differences between Nessly's default rule and Chomsky & Halle's "first approximation" are: (I) No word class information is used in Nessly's default, so verbs are stressed as nouns. and (2) What constitutes a "strong cluster" (which contains a tense vowel or a closed syllable end) is different. Nessly's default is indifferent to vowel length or tensity. Nessly's default rule can be outlined as follows: If(number of syllables : I) stress it. if(number of syllables : 2) stress left syllable. else skip the last syllable. • if(next-to-last is closed) stress it. else stress third from last. (place alternating 2nd stresses on syllables to the left.) The MACRO-It version of this rule requires about 150 bytes of executable code, and accepts one pointer to the last vowel in the word. It passes through the word once, right to left, and it does very well assigning correct stresses (in caps) to "LUminant" vs. "maLIGnant," for example. For testing the stress algorithms, a sample of 430 words was selected. 
These 430 words were all the items of five or more characters that had frequencies of 40 ppm through 34 ppm (inclusive) in the Brown corpus. The segmental phonemicization was done by Hunnicutt's rules in TSI's TTS-X prototype. The automatically produced segmental phonemicizations that the stress algorithms operated on were rejected only if they did not have the correct number of syllables. Thirteen of the 430 words were phonemicized with the wrong number of syllables. Another 54, or 13~, of the 430 were one syllable words, which were allways assigned correct 21 stress. Stress assignments were judged by the first author. The results on the remaining 417 words of the sample were: Correct Wrong Hunnicutt/Halle 308 109 Nessly default 303 114 So, on these words, the two algorithms perform at about the same level of accuracy, which is about 252 wrong on a lexlcal sample. DISCUSSION In both studies, very simple algorithms performed about as well as algorithms of vastly greater complexity. In the case of the letter-to-sound algorithms (Hunnlcutt@TSI and MRL@DEC), the difference in complexity is primarily in the procedure for checklnK the rules against the word. Hunnicutt's rules themselves are only a little more complicated than the NRL rules. Presumably, with some modification, most of Hunnicutt's rules could be modified to run within a one-pass NRL procedure. The stress algorithms tested in Study Two present a very great contrast in both number of rules and procedure for rule application. If Nessly's default rule is llke a simplified version of Chomsky & Halle's "first approximation" stress rule, and l[ Hunnlcutt's algorithm is fairly close to Chomsky & Halle's full lexical stress rules (with noun-root assumed), then our data suggest that the epicyclic accretion that produced Chomsky & Halle's full set of stress rules from their "first approximation" has gained almost nothing in lexical coverage. We have reported performance in terms of percent wrong on samples of word types from the Brown corpus. It seems that an appropriate measure of performance that reflects what people feel when they hear a text-to-speech system is AVERAGE WORDS BETWEEN ERRORS (AWBE). We would like to end this paper by giving AWBE for a simple text-to-phoneme system with a 25~ error rate in both letter-to-sound conversion and lexlcai stressing, and a lexicon with 1500 words. If the lexicon is in parallel with the letter to sound and stress rules, and the performance of the letter to sound rules and the stress rules are independant, an overall error rate of about 7% can be expected. This would translate into an AWBE of 13.3. REFERENCES J.Bernstein & D.Pisoni (1980) "Unlimited text-to-speech system: Description and evaluation of a microprocessor based device," IEEE ICASSP-80 Proceedings. N.Chomsky & M.Halle (1968) THE SOUND PATTERN OF ENGLISH, Harper-Row, New York. H°Elovltz, R.Johnson, A.McHuKh, & J.Shore (1976) "Letter-to-sound rules for automatic translation of English text to phonetics," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-24, no. 6. S.Hertz (1981) "SRS letter to sound rules," IEEE ICASSP-80 Proceedings. S.Hunnicutt (1976) "Phonological rules for a text-to-speech system" AJCL Microfiche 57. $.Hunnicutt (1980) "Grapheme to phoneme rules: a review" KTH SLT-QPSR 2-3/1980, Stockholm. H.Kucera & W.Francls (1967) COMPUTATIONAL ANALYSIS OF PRESENT DAY AMERICAN ENGLISH, Brown U. Press, Providence. 
M.Llberman (1979) "Text-to-speech conversion "by rule and a practical application," Proceedings of the Ninth International Congress of Phonetic Sciences, Copenhagen. M.McIlroy (197~) "Synthetic English speech by rule," Bell Telephone Laboratories Memo. K.Hlll & L.Nessly (1973) "Review of The Sound Patten of English," LINGUISTICS 106: 57-101. ACKNOWLEDGEMENTS The authors gratefully acknowlege valuable help from Martin Minow, Peter MaKEs, Margaret Kahn, and oulie Lovin=. 22
PHONY: A Heuristic Phonological Analyzer*

Lee A. Becker
Indiana University

DOMAIN AND TASK

PHONY is a program to do phonological analysis. Within the generative model of grammar the function of the phonological component is to assign a phonetic representation to an utterance by modifying the underlying representations (URs) of its constituent morphemes. Morphemes are the minimal meaning units of language, i.e. the smallest units in the expression system which can be correlated with any part of the content system, e.g. un+tir+ing+ly. URs are abstract entities which contain the idiosyncratic information about pronunciations of morphemes.

(1)                      PHONOLOGICAL COMPONENT
     Underlying Representations (URs) ------------> Phonetic Representations
                              (rules)

Phonological analysis attempts to determine the nature of the URs and to discover the general principles or rules that relate them to the phonetic representations.

(2)  Pronunciations ----> PHONY (phonological analyzer) ----> URs
                                                               Rules

The input to PHONY consists of pronunciations of words and phrases upon which a preliminary morphological analysis has been completed. They have been divided into morphemes, and different instances of the same morpheme have been associated. These are represented as strings of phonetic symbols including morpheme- and word-boundaries. Indices are used to associate various instances of the same morpheme.

(3)  # 1 s a r a p #    # 1 s a r a b + 2 d a #    # 1 s a r a v + 3 u #
     # 1 s a r a v + 4 e #    # 5 a d + 6 a #    # 5 a t #  ...

The output of PHONY is a set of phonological rules or regularities in the data, as well as a set of 'underlying representations' for the morphemes. The phonological rules generate the various pronunciations of the morphemes from their underlying representations.

*This research was supported in part by National Science Foundation grant number MCS 81-02291.

REPRESENTATION

In Generative Phonology sounds are represented as matrices of feature specifications, the phonetic symbols being a shorthand for these matrices.

(4)  [ - syllabic
       + consonantal
       - continuant
       + voice
       - nasal
       + anterior
       + coronal ]

The set of 'distinctive features' proposed by Chomsky and Halle [2] were claimed to be sufficient to distinguish the sounds in any language. Further, these features were all claimed to have two values; the feature was either present or absent. There has been a fair amount of dispute about the specific features, and several additional ones have been proposed, e.g. gravity, advanced tongue root. There has also been considerable dispute about whether the features are all binary. Nevertheless most phonologists use the original binary features, often with a few additional ones.

Phonological rules are operations upon sets of these feature matrices by which feature specifications are assigned to the matrix when it appears in a certain context. The rule expressed (in shorthand) normally as

(6)  s -> š / __ i     (read: s becomes š in position immediately before i)

would be expressed as follows using feature matrices.

(7)  [ + coronal          [ - anterior          [ + syllabic
       + anterior    ->     + high ]    /  __     + high
       + strident ]                               - back ]

The representation provides a language in which to express hypotheses. The task is to find statements in this language to express the data. Thus the representation implicitly defines the search space. The search space is restricted by the following constraint on the 'distance' between a UR and its pronunciations: every feature specification in the UR must be present in a 'corresponding' segment in at least one of the phonetic forms. Consider, for example, morpheme 1 from (3) above: it has three pronunciations, [sarap], [sarab], [sarav].
Consider, for example, morpheme i from (3) above: it ham three pronounciations [sarap], [sarab], [sarav]. 23 This constraint restricts its possible URs to /sarap/, /sarah/, /sarav/, /saraf/. Even If] does not appear in any of the pronouciations of this morpheme, its +continuant specification occurs in Iv] and its -voice specification occurs in [p]; its other feature specifications are common to [p], Cb], Iv]. This constraint is weaker than the "strong alternation condition" (cf. [4]), which would restrict the final UR segment to be /p/, /b/, or /V/o The term "alternation" will be important of the discussion below; here [p] vs. [b] vs. Iv] is an alternation. THE PROBLEM OF MULTIPLE SOLUTIONS It should be pointed out that most often several sets of combinations of underlying representations and phonological rules can be used to derive the same pronounciations. This could happen in several ways. It could be unclear what the UR is, and different URs together winh different rules could derive that same pronounciatons, i.e. the directionality of the rule could be unclear. Consider morpheme 5 from (3) above: (8) Pronounciations: #ad÷a# #at# Solution I: UR /ad/ & Rule d -, t / # Solution 2: UR /at/ & Rule t -> d / a a The symbol # represents a word boundary, and the symbol + represents a morpheme boundary, The difference in the pronounciation of the last segment of this morpheme, d vs. t, is called an alternation. Given this alternation, one could make two hypotheses. One could hypothesize that the UR is /ad/ and that there is a rule which changes d to t when it occurs at the end of a word, or one could hypothesize that the UR is /at/ and that there is a rule which changes t to d between a's. Also some phenomena could be explained by a single more general rule or by several more specific rules. Generally, there are two approaches that could be taken to deal with the problem of multiple possible solutions. One could attempt to impose restrictions on what could constitute a valid solution, or one could use an evaluation procedure to decide in cases of multiple possible solutions. One could also use both of these approaches; in which case the more restriction, the less evaluation is necessary. An original single evaluation criterion - 'simplicity', as manifested in the number of feature specifications used - has not proved workable. ALso no particular proposed restrictions have been embraced by the v~st majority of phonologists. Individual phonologists are generally guided in their evaluations of solutions, i.e. sets of rules and URs, by various criteria. The weighting of these criteria is left open. In this connection the 'codifying function' of the development of expert systems is particulary relevant, i.e. in order to be put into a program the criteria must be formalized and weighted.j5] Although it has sometimes been claimed that no set of discovery procedures can be sufficient tO produce phonological analyses, this program is intended to demonstrate the feasibility of a procedural definition of the theory. The three most widely used criteria and the manner in which they are embedded in PHONY will now be discussed. Phonological Predictability This involves the preference of solutions based phonological environment rather than to those in which reference is made to morphological or lexical categories or involving the division of the lexicon into arbitrary classes. 
Phonological Predictability

This involves a preference for solutions based on phonological environment over those in which reference is made to morphological or lexical categories or which involve the division of the lexicon into arbitrary classes. In other words, in doing phonological analysis the categories or meanings of morphemes will not be considered unless no solution can be found based on just the sounds or sound sequences involved. This criterion is embodied in PHONY, since no information about morphological or syntactic categories is available to PHONY. If PHONY cannot handle an alternation by reference to phonological environment, it will return that this is an 'interesting case'. The ability to identify the 'interesting cases' is a most valuable one, since these are often the cases that lead to theory modification.

It should be mentioned that PHONY could readily be extended (Extension 1) to handle a certain range of syntactically or morphologically triggered phonological rules. This would involve including in the input information about the syntactic category and, where relevant, the morphological category of the constituent morphemes. This information would be ignored unless PHONY was unable to produce a solution, i.e. would have returned "interesting cases". It would then search for generalizations based on these categories.

Naturalness

This involves the use of knowledge about which processes are 'natural' to decide between alternative solutions, i.e. solutions involving natural processes are preferred. A process found in many languages is judged to be 'natural'. Although natural processes are often phonetically plausible, this is not always the case. It should be mentioned that not only is 'naturalness' an arbiter in cases of several possible solutions, but it is also a heuristic to lead the investigator to plausible hypotheses which he can pursue. PHONY contains a catalogue of natural processes. When an alternation looks as if it might be the result of one of these processes, the entire input corpus of strings is tested to see if this hypothesis is valid.

Simplicity

'Simplicity' was mentioned above; while it is no longer the only criterion, it is still a primary one. It is reflected in PHONY in a series of attempts to make rules more general, i.e. to combine several hypothesized rules into a single hypothesized rule. The more general rules require fewer feature specifications. Also, the smaller number of rules can lead to a reduced number of feature specifications.

The various constraints that have been proposed on what can be a valid solution would generally correspond to differences in the testing process of PHONY. Most of these involve differences in allowable orderings of rules (e.g. 'unrestricted extrinsic ordering', 'free reapplication', 'direct mapping'; cf. [3]). At present PHONY's testing process involves checking whether hypothesized rules hold, i.e. do not have counterexamples, in the phonetic representations (such a criterion disallows opacity of type 1; cf. [4]). PHONY could be extended (Extension 2) to allow the user to choose from several of the proposed constraints. This would involve using different testing functions. This extension would allow analyses of the same data under different constraints to be easily compared. Additionally, new constraints could be added and tested.

STRUCTURE OF PHONY

PHONY can be divided into three major parts: ALTFINDER, NATMATCH, and RULERED.

ALTFINDER

ALTFINDER takes the input string of phonetic symbols and indices indicating instances of the same morpheme, as in (3), and returns for each morpheme in turn a representation including the non-alternating segments and a list of alternations with the contexts in which each alternant occurs, for example, for morpheme 1, as in (9).
(9)  s a r a   p ~ b ~ v
         # s a r a p #
         # s a r a b + d a #
         # s a r a v + u #
         # s a r a v + e #

This process involves comparing in turn each instance of a given key morpheme with the current hypothesized underlying representation for that morpheme, and for each case of alternation storing in N groups the different context strings in which the N alternants occur. The comparison is complicated by the common processes of epenthesis (insertion of a segment) and elision (deletion of a segment), and occasionally by the much more rarely occurring metathesis (interchange in the positions of two segments). These processes are illustrated in (10).

(10)  Given UR / t a r i s k /:
      Epenthesis   ∅ -> a     would give  [ t a r i s a k ]
      Elision      a -> ∅     would give  [ t r i s k ]
      Metathesis   sk -> ks   would give  [ t a r i k s ]

Therefore in cases where the segments being compared are not identical it is necessary to ascertain whether they are variants of a single underlying segment or one of these processes has applied. The possibilities are illustrated in (11).

(11)  Given two pronunciations of the same morpheme
          [ A B C . . . ]
          [ D E F . . . ]
      where A is associated with D and B is not identical to E,
      there are four possible relationships:
          B corresponds to E               (variants of a single underlying segment)
          B corresponds to F               (E is an extra segment: epenthesis or elision)
          C corresponds to E               (B is an extra segment: epenthesis or elision)
          B and C correspond to F and E    (metathesis)

The criteria used to decide between these relationships are (a) the degree of similarity in each of the conceivable associations, and (b) a measure of the similarity of the rest of the strings for each of the conceivable associations.

ALTFINDER yields a list of alternations based on segments, as in (9). This is then converted into a list of alternations based on features.

(12)  p ~ b ~ v :   p-contexts / b-contexts / v-contexts
      becomes
      VOICE        +  b-contexts & v-contexts
                   -  p-contexts
      CONTINUANT   +  v-contexts
                   -  b-contexts & p-contexts

Since every one of the alternations in the former must differ by at least one feature, the new list must contain at least as many alternations and normally contains more. Where previously for each alternation in a segment there was a list of strings in which each alternant occurred, now for each alternation in a feature there are two lists - one with the strings where a positive value for that feature occurred and the other where a negative value occurred. It should be noted that the elements of these lists, i.e. strings, together with the alternating feature, its value, and an indication of which segment in the string contains the feature, are all potentially rules. They bear the same information as standard phonological rules. Compare the representations in (13), which are for the alternations in morpheme 5 in (3),

(13)  # a d + a #      # a t #
      [the two strings as matrices of binary feature specifications, with the VOICE specification of the alternating final consonant marked]

to the rules

      t -> d / # a ___ + a #
      d -> t / # a ___ #

i.e. respectively, one can't pronounce t in the environment # a ___ + a # but rather must pronounce d, and one can't pronounce d in the environment # a ___ # but rather must pronounce t. The latter rule and the second representation (both without the initial two segments, in the interests of space) in (13) are juxtaposed in (14).

(14)  1000011000000 1000000000000000     d -> t / ___ #

It is often the case that one or both of these potential 'rules' will be valid, i.e. will be generalizations that hold over the pronunciations represented in the input.
These 'rules' would, however, be much less general than those which are found in phonological analyses. It is assumed that speakers/hearers/language learners can and do generalize from these specific cases to form more general rules. If this were not the case, how could speakers correctly pronounce morphemes in new environments? Within the theory the criterion of simplicity is sensitive to these generalizations in that such generalizations reduce the number of feature specifications. Within PHONY the preference for more general rules is manifested by continually trying to generate and test more general rules resulting from the coalescing or combining of two or more specific rules.

Recall that the representation of the segments involved a feature matrix with positive or negative specifications for each feature. In order to generate more general rules this representation is modified to two matrices for each segment - one representing those features which must be positive in the environment and the other those features which must be negative. The generalization process involves taking the 'greatest common denominator' (GCD) of the positive and negative values of the segments of the environments of two separate 'rules'. In the interests of space an abbreviated example of the GCD operation is given in (15).

(15)  [abbreviated example: the positive- and negative-requirement matrices (SYLL, VOICE, HIGH) of two specific rules and their GCD, yielding a more general rule of the form [- SYLL] -> [+ HIGH] / ___ [+ HIGH]]

The GCD operation has generated a more general rule. If the original two rules are a manifestation of a more general rule, the generalized rule must not involve or make reference to the initial segment of the former rule. Notice also that in the GCD the VOICE feature does not have to be positive or negative; if the two original rules are a manifestation of a single rule, the specification of the VOICE feature in the alternating segment must not be relevant.

NATMATCH

After the alternations in terms of segments that were output by ALTFINDER have been changed into alternations in terms of features (12) and after these have been transformed from single matrices into double matrices, the resulting "rules" are sent to NATMATCH. NATMATCH compares these "rules" with the data base of common phonological processes. This involves pattern matching. If a match occurs, the entire input corpus is tested to find out whether it can be established that this rule or constraint is valid for this language. If Extension 2 were implemented, this testing process would differ for the different versions of the theory. If the validity can be established, the underlying representation for the morpheme is adjusted and the rule is added to the list of established rules. Common processes in the data base are organized by the feature which is alternating, and among those processes involving the alternation of a given feature the most common process is listed and thus tested first. If it can be shown to be valid, it is added to the list of established rules. It should be mentioned that ALTFINDER makes use of this list, and if an alternation that it discovers can be handled by an established rule, the tentative underlying representation is so adjusted and the alternation need not be passed on to the rest of the program. If within NATMATCH no matches are found in the data base or if the validity of the matches cannot be established, the alternation is added to the list of those as yet not accounted for.
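The GCD operation of (15), on which RULERED below also relies, can be sketched roughly as follows. This is a hedged illustration rather than PHONY's data structures: representing each environment segment as a pair of sets (features that must be '+', features that must be '-') and the names gcd_segment and gcd_rule are assumptions.

```python
# Sketch of the GCD of two rules' environments: only the requirements
# shared by both rules, position by position, survive.

def gcd_segment(seg_a, seg_b):
    pos_a, neg_a = seg_a
    pos_b, neg_b = seg_b
    return (pos_a & pos_b, neg_a & neg_b)

def gcd_rule(env_a, env_b):
    """Environments are equal-length lists of (must_be_plus, must_be_minus) sets."""
    return [gcd_segment(a, b) for a, b in zip(env_a, env_b)]

# Two specific rules whose right-hand environments both require, say,
# a [+HIGH, -BACK] segment; only the shared specifications remain.
rule1_env = [({'SYLL', 'HIGH'}, {'BACK'})]      # / ___ [+SYLL, +HIGH, -BACK]
rule2_env = [({'HIGH'}, {'BACK', 'ROUND'})]     # / ___ [+HIGH, -BACK, -ROUND]
print(gcd_rule(rule1_env, rule2_env))           # [({'HIGH'}, {'BACK'})]
```

A rule generalized in this way would still have to be tested against the entire input corpus before being established.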
RULERED

RULERED takes the generated "rules" that have not been established. It establishes which of these are valid and takes GCDs to generalize them as much as possible. This is done by going through all the rules involving a certain feature and generating the minimal number of equivalence classes of "rules" and combined (GCDed) "rules" which are valid. The resulting generalized rules have the largest matrices, i.e. the largest set of feature specifications, which all the forms undergoing these rules have in common. However, the elimination of some of these feature specifications might still result in valid rules. The rules with minimal matrices, i.e. the minimal number of feature specifications (recall the "simplicity" criterion), might be termed lowest common denominators (LCDs). These are produced by attempting in turn to eliminate each segment in the GCDed rule; the new rule is generated and tested, and if valid the segment is left out, otherwise it remains. Then an attempt is made to eliminate in turn each feature specification in the remaining segments, again by generate and test.

Finally, all the established rules are combined, where possible, according to the many abbreviatory conventions of Generative Phonology (cf. [2]). This is done on the basis of the formal properties of the rules. For example, if two generated rules are identical except that one has an additional segment not present in the other, these can be combined into a single rule; parentheses allow the inclusion of optional segments in the environment of a rule. In addition, all the rules generated above involve a change of only a single feature specification. If there are several rules which are identical except that a different feature specification is changed, i.e. the two changes occur in the same environment, they can be combined into a single rule: in this particular environment both specifications change.

DISCUSSION

PHONY is a learning program. It is discovering the general principles or rules governing pronunciation in a language. As such it can be said to be learning some aspect of a language. PHONY can be thought of either independently or as a part of a larger system designed to learn a language. In the latter context PHONY could help in deciding between ambiguous morphological divisions. In addition, PHONY could be used in adjusting and fine-tuning heuristics for a morphological analyzer. PHONY would act as a "critic" in such a system (cf. [1]). Two sets of heuristics might lead to different morphological analyses, which might each be input to PHONY; if one input led to an analysis that had no "interesting cases", i.e. problems, while the other did, the set of heuristics leading to the former analysis would be supported.

Independently, PHONY is an expert system. It provides a procedural definition of phonological theory. Because of this, it could be useful to someone desiring to learn phonological theory. It could also be of use to working phonologists. In addition to producing the analyses, it also isolates the 'interesting cases', e.g. morphologically triggered rules. With Extension 2 it could also be used to compare various versions of the theory and to test the effects of new modifications of the theory. It should be emphasized that at present PHONY is a bare program. It is hoped that it is sufficient to demonstrate the feasibility and worth of the endeavor.
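As an illustration of the generate-and-test reduction described under RULERED above, the sketch below (hypothetical names; the dictionary representation of environments and the corpus-checking callback holds_over_corpus are assumptions, not PHONY's code) drops whole segments and then individual feature specifications one at a time, keeping a deletion only if the more general rule still holds over the corpus.

```python
# Sketch of reducing a GCDed rule toward a "lowest common denominator".

def minimize(environment, holds_over_corpus):
    """environment: list of dicts mapping feature name -> required value.
    holds_over_corpus: callable (assumed given) that tests a candidate
    environment against every pronunciation in the input corpus."""
    env = [dict(seg) for seg in environment]
    # First try to eliminate whole environment segments...
    i = 0
    while i < len(env):
        trial = env[:i] + env[i + 1:]
        if holds_over_corpus(trial):
            env = trial
        else:
            i += 1
    # ...then individual feature specifications in the remaining segments.
    for seg in env:
        for feat in list(seg):
            saved = seg.pop(feat)            # try the rule without this spec
            if not holds_over_corpus(env):   # counterexample found:
                seg[feat] = saved            # restore the specification
    return env
```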
PHONY presents a basic approach: contexts in which alternating segments occur are transformed into hypothesized "rules"; these can be combined via the GCD operation, further simplified to LCDs, and then again combined according to the abbreviatory conventions. There is a "grinding" quality to this process. Phonologists resort to a similar grind only when all their heuristics have led to dead ends. The only heuristic presently incorporated in PHONY is the comparison to a list of natural processes; this allows a tremendous shortcut in the search. More heuristics obviously could be added to PHONY. It would also be possible for a METAPHONY to find heuristics to be used by PHONY. (Possible decision criteria to be used in evaluating differing sets of heuristics could be the number of tests of the input corpus and the number of "interesting cases".) These heuristics could improve the efficiency of PHONY by obviating much of the "grinding" process. At the same time METAPHONY could also be making discoveries about the phonologies of natural languages in general. For example, in the process of generating LCDs, instead of going segment by segment and feature by feature, METAPHONY could acquire and incorporate in PHONY knowledge about what aspects of pronunciation are not or only rarely pertinent to rules affecting a certain feature.

REFERENCES

1. Buchanan, B.G., T.M. Mitchell, R.G. Smith, C.R. Johnson, Jr. 1979. Models of learning systems. Encyclopedia of Computer Science and Technology. J. Belzer, A. Holtzman, A. Kent (Eds.). New York: Marcel Dekker, Inc. Vol. 3, pp. 24-51.

2. Chomsky, N. and M. Halle. 1968. The Sound Pattern of English. New York: Harper and Row.

3. Kenstowicz, M. and C. Kisseberth. 1977. Topics in Phonological Theory. New York: Academic Press.

4. Kiparsky, P. 1968. How abstract is phonology? In O. Fujimura (Ed.), Three Dimensions in Linguistic Theory. 1973. Tokyo: TEC.

5. Michie, D. 1980. Knowledge-based systems. UIUCDCS-R-80-1001 and UILU-Eng 80-1704 (University of Illinois).
EVALUATION OF NATURAL LANGUAGE INTERFACES TO DATABASE SYSTEMS: A PANEL DISCUSSION

Norman K. Sondheimer, Chair
Sperry Univac
Blue Bell, PA

For a natural language access to database system to be practical, it must achieve a good match between the capabilities of the user and the requirements of the task. The user brings his own natural language and his own style of interaction to the system. The task brings the questions that must be answered and the database domain's semantics.

All natural language access systems achieve some degree of success. But to make progress as a field, we need to be able to evaluate the degree of this success. For too long, the best we have managed has been to produce a list of typical questions or linguistic phenomena that a system correctly processed. Missing has been a discussion of their importance and a similar list of unhandled phenomena. Only occasionally were even informal evaluations of systems conducted.

Recently, this has begun to change. In the last several years, many of the current generation of natural language access to database systems have been subject to laboratory or field testing. These include INTELLECT, LADDER, PLANES, REL and TQA. We have begun to discover what a user will ask a system, how he reacts to its limits, and where we need further work.

This panel brings together a good sampling of the people involved in these tests, including individuals intimately involved with the above systems. The position papers that follow present their unique viewpoints on the important issues in the evaluation of natural language access to database systems. These include:

I. What has been learned about a) user needs, b) systems' capabilities, and c) their match with respect to tasks? Under this, what are the most important linguistic phenomena to allow for? What other kinds of interactions, beside retrievals, do users request? How good are systems at satisfying users? How good are users at finding ways to use systems? How satisfied are users with systems' performance? How do these results vary with respect to tasks?

II. What have we learned about running evaluations? Under this, what methodologies are capable of revealing what sorts of facts? What are the limits of field studies versus controlled experiments? How good are studies with a simulated system, such as Malhotra's with its human intermediary [1]? What are the independent variables that must be allowed for? What tools are available to determine user bias and experience beforehand, and user satisfaction afterward?

III. On the basis of these evaluations, what should the future look like for natural language access to database? Under this point, what niches look most promising for natural language interfaces? What standards should be set for natural language systems' performance? What kinds of evaluations should be run in the future? How should they be designed and how should they be judged?

In addition to the position papers that follow, I strongly urge you to consult the panelists' more extensive publications.

Bibliography

[1] Malhotra, A., "Design Criteria for a Knowledge-Based English Language System for Management: An Experimental Analysis", Ph.D. Thesis, MIT, MAC TR-146, 1975.